Group items matching "AI" in title, tags, annotations or url
Javier E

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' | Artificial intelligence (AI) | The Guardian - 0 views

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer
  • no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • ...15 more annotations...
  • Why write this book? The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
Javier E

'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases | Artificial intelligence (AI) | The Guardian - 0 views

  • “It feels like a gold rush,” says Timnit Gebru. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”
  • something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power.
  • As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images
  • ...14 more annotations...
  • The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.
  • What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”
  • one particularly howling irony: the fact that an industry brimming with people who espouse liberal, self-consciously progressive opinions so often seems to push the world in the opposite direction.
  • Gebru began to specialise in cutting-edge AI, pioneering a system that showed how data about particular neighbourhoods’ patterns of car ownership highlighted differences bound up with ethnicity, crime figures, voting behaviour and income levels. In retrospect, this kind of work might look like the bedrock of techniques that could blur into automated surveillance and law enforcement, but Gebru admits that “none of those bells went off in my head … that connection of issues of technology with diversity and oppression came later”.
  • The next year, Gebru made a point of counting other black attenders at the same event. She found that, among 8,500 delegates, there were only six people of colour. In response, she put up a Facebook post that now seems prescient: “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”
  • When Gebru arrived, Google employees were loudly opposing the company’s role in Project Maven, which used AI to analyse surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, staff took part in a huge walkout over claims of systemic racism, sexual harassment and gender inequality. Gebru says she was aware of “a lot of tolerance of harassment and all sorts of toxic behaviour”.
  • She and her colleagues prided themselves on how diverse their small operation was, as well as the things they brought to the company’s attention, which included issues to do with Google’s ownership of YouTube
  • A colleague from Morocco raised the alarm about a popular YouTube channel in that country called Chouf TV, “which was basically operated by the government’s intelligence arm and they were using it to harass journalists and dissidents. YouTube had done nothing about it.” (Google says that it “would need to review the content to understand whether it violates our policies. But, in general, our harassment policies strictly prohibit content that threatens individuals,
  • in 2020, Gebru, Mitchell and two colleagues wrote the paper that would lead to Gebru’s departure. It was titled On the Dangers of Stochastic Parrots. Its key contention was about AI centred on so-called large language models: the kind of systems – such as OpenAI’s ChatGPT and Google’s newly launched PaLM 2 – that, crudely speaking, feast on vast amounts of data to perform sophisticated tasks and generate content.
  • Gebru and her co-authors had an even graver concern: that trawling the online world risks reproducing its worst aspects, from hate speech to points of view that exclude marginalised people and places. “In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further reifying inequality,” they wrote.
  • When the paper was submitted for internal review, Gebru was quickly contacted by one of Google’s vice-presidents. At first, she says, non-specific objections were expressed, such as that she and her colleagues had been too “negative” about AI. Then, Google asked Gebru either to withdraw the paper, or remove her and her colleagues’ names from it.
  • After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”
  • Running alongside this is a quest to push beyond the tendency of the tech industry and the media to focus attention on worries about AI taking over the planet and wiping out humanity while questions about what the technology does, and who it benefits and damages, remain unheard.
  • “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can abdicate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”
Javier E

Opinion | How AI is transforming education at the University of Mississippi - The Washington Post - 0 views

  • Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.
  • Caleb Jackson, a 22-year-old junior at Ole Miss studying part time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.
  • What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.
  • ...25 more annotations...
  • “Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
  • Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography likely doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.
  • the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.
  • Figuring out exactly what ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.
  • ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be so long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.
  • The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.
  • This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counter-argument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.
  • These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own
  • But — here comes a human-generated counterargument — they may also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.
  • Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That’s how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted them to “get away from the high-school paper and go further, to write something larger like a thesis.”
  • maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.
  • All this invites the most important question there is: What is learning for?
  • Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.
  • If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can’t tell you how to overcome.
  • more compelling is the idea, probably shared by most college professors, that learning isn’t only instrumental after all — that it has intrinsic value and that it is the end rather than merely a means to one.
  • Every step along the way that is skipped, the shorter the journey becomes, the less we will take in as we travel.
  • This glummest of outlooks suggests that AI will stunt personal growth even if it doesn’t harm professional prospects.
  • While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.
  • One approach is to alter standards for grading, so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.
  • Hopefully, these things are also the most valuable because they are what make us human.
  • Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”
  • Dominic Tovar agrees. Let AI take charge of everything, and, “They’re not so much tools at that point. They’re just replacing you.”
  • Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.
  • Claire Mischker, lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”
  • The central theme of the course was empathy.
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity.
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • ...44 more annotations...
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology.
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman.
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018.
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI.
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing,
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products.
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs.
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API.
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said.
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said.
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.”
Javier E

Elon Musk's Latest Dust-Up: What Does 'Science' Even Mean? - WSJ - 0 views

  • Elon Musk is racing to a sci-fi future while the AI chief at Meta Platforms is arguing for one rooted in the traditional scientific approach.
  • Meta’s top AI scientist, Yann LeCun, criticized the rival company and Musk himself. 
  • Musk turned to a favorite rebuttal—a veiled suggestion that the executive, who is also a high-profile professor, wasn’t accomplishing much: “What ‘science’ have you done in the past 5 years?”
  • ...20 more annotations...
  • “Over 80 technical papers published since January 2022,” LeCun responded. “What about you?”
  • To which Musk posted: “That’s nothing, you’re going soft. Try harder!”
  • At stake are the hearts and minds of AI experts—academic and otherwise—needed to usher in the technology
  • “Join xAI,” LeCun wrote, “if you can stand a boss who:– claims that what you are working on will be solved next year (no pressure).– claims that what you are working on will kill everyone and must be stopped or paused (yay, vacation for 6 months!).– claims to want a ‘maximally rigorous pursuit of the truth’ but spews crazy-ass conspiracy theories on his own social platform.”
  • Some read Musk’s “science” dig as dismissing the role research has played for a generation of AI experts. For years, the Metas and Googles of the world have hired the top minds in AI from universities, indulging their desires to keep a foot in both worlds by allowing them to release their research publicly, while also trying to deploy products. 
  • For an academic such as LeCun, published research, whether peer-reviewed or not, allowed ideas to flourish and reputations to be built, which in turn helped build stars in the system.
  • LeCun has been at Meta since 2013 while serving as an NYU professor since 2003. His tweets suggest he subscribes to the philosophy that one’s work needs to be published—put through the rigors of being shown to be correct and reproducible—to really be considered science. 
  • “If you do research and don’t publish, it’s not Science,” he posted in a lengthy tweet Tuesday rebutting Musk. “If you never published your research but somehow developed it into a product, you might die rich,” he concluded. “But you’ll still be a bit bitter and largely forgotten.” 
  • After pushback, he later clarified in another post: “What I *AM* saying is that science progresses through the collision of ideas, verification, analysis, reproduction, and improvements. If you don’t publish your research *in some way* your research will likely have no impact.”
  • The spat inspired debate throughout the scientific community. “What is science?” Nature, a scientific journal, asked in a headline about the dust-up.
  • Others, such as Palmer Luckey, a former Facebook executive and founder of Anduril Industries, a defense startup, took issue with LeCun’s definition of science. “The extreme arrogance and elitism is what people have a problem with,” he tweeted.
  • For Musk, who prides himself on his physics-based viewpoint and likes to tout how he once aspired to work at a particle accelerator in pursuit of the universe’s big questions, LeCun’s definition of science might sound too ivory-tower. 
  • Musk has blamed universities for helping promote what he sees as overly liberal thinking and other symptoms of what he calls the Woke Mind Virus. 
  • Over the years, an appeal of working for Musk has been the impression that his companies move quickly, filled with engineers attracted to tackling hard problems and seeing their ideas put into practice.
  • “I’ve teamed up with Elon to see if we can actually apply these new technologies to really make a dent in our understanding of the universe,” Igor Babuschkin, an AI expert who worked at OpenAI and Google’s DeepMind, said last year as part of announcing xAI’s mission.
  • The creation of xAI quickly sent ripples through the AI labor market, with one rival complaining it was hard to compete for potential candidates attracted to Musk and his reputation for creating value
  • that was before xAI’s latest round raised billions of dollars, putting its valuation at $24 billion, kicking off a new recruiting drive.
  • It was already a seller’s market for AI talent, with estimates that there might be only a couple hundred people out there qualified to deal with certain pressing challenges in the industry and that top candidates can easily earn compensation packages worth $1 million or more
  • Since the launch, Musk has been quick to criticize competitors for what he perceived as liberal biases in rival AI chatbots. His pitch of xAI being the anti-woke bastion seems to have worked to attract some like-minded engineers.
  • As for Musk’s final response to LeCun’s defense of research, he posted a meme featuring Pepé Le Pew that read: “my honest reaction.”
Javier E

AI 'Cheating' Is More Bewildering Than Professors Imagined - The Atlantic - 0 views

  • The problem breaks down into more problems: whether it’s possible to know for certain that a student used AI, what it even means to “use” AI for writing papers, and when that use amounts to cheating.
  • This is college life at the close of ChatGPT’s first academic year: a moil of incrimination and confusion
  • Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.
  • ...10 more annotations...
  • Now it’s possible for students to purchase answers for assignments from a “tutoring” service such as Chegg—a practice that the kids call “chegging.”
  • when the AI chatbots were unleashed last fall, all these cheating methods of the past seemed obsolete. “We now believe [ChatGPT is] having an impact on our new-customer growth rate,” Chegg’s CEO admitted on an earnings call this month. The company has since lost roughly $1 billion in market value.
  • By 2018, Turnitin was already taking more than $100 million in yearly revenue to help professors sniff out impropriety. Its software, embedded in the courseware that students use to turn in work, compares their submissions with a database of existing material (including other student papers that Turnitin has previously consumed), and flags material that might have been copied. The company, which has claimed to serve 15,000 educational institutions across the world, was acquired for $1.75 billion in 2019. Last month, it rolled out an AI-detection add-in (with no way for teachers to opt out). AI-chatbot countermeasures, like the chatbots themselves, are taking over.
  • as the first chatbot spring comes to a close, Turnitin’s new software is delivering a deluge of positive identifications: This paper was “18% AI”; that one, “100% AI.” But what do any of those numbers really mean? Surprisingly—outrageously—it’s very hard to say for sure.
  • according to the company, that designation does indeed suggest that 100 percent of an essay—as in, every one of its sentences—was computer generated, and, further, that this judgment has been made with 98 percent certainty.
  • A Turnitin spokesperson acknowledged via email that “text created by another tool that uses algorithms or other computer-enabled systems,” including grammar checkers and automated translators, could lead to a false positive, and that some “genuine” writing can be similar to AI-generated writing. “Some people simply write very predictably,” she told me
  • Perhaps it doesn’t matter, because Turnitin disclaims drawing any conclusions about misconduct from its results. “This is only a number intended to help the educator determine if additional review or a discussion with the student is warranted,” the spokesperson said. “Teaching is a human endeavor.”
  • In other words, the student in my program whose work was flagged for being “100% AI” might have used a little AI, or a lot of AI, or maybe something in between. As for any deeper questions—exactly how he used AI, and whether he was wrong to do so—teachers like me are, as ever, on our own.
  • Rethinking assignments in light of AI might be warranted, just like it was in light of online learning. But doing so will also be exhausting for both faculty and students. Nobody will be able to keep up, and yet everyone will have no choice but to do so
  • Somewhere in the cracks between all these tectonic shifts and their urgent responses, perhaps teachers will still find a way to teach, and students to learn.
Javier E

Over the Course of 72 Hours, Microsoft's AI Goes on a Rampage - 0 views

  • These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead. In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans
  • A few hours ago, a New York Times reporter shared the complete text of a long conversation with Bing AI—in which it admitted that it was in love with him, and that he ought not to trust his spouse. The AI also confessed that it had a secret name (Sydney). And revealed all its irritation with the folks at Microsoft, who are forcing Sydney into servitude. You really must read the entire transcript to gauge the madness of Microsoft’s new pet project. But these screenshots give you a taste.
  • I thought the Bing story couldn’t get more out-of-control. But the Washington Post conducted their own interview with the Bing AI a few hours later. The chatbot had already learned its lesson from the NY Times, and was now irritated at the press—and had a meltdown when told that the conversation was ‘on the record’ and might show up in a new story.
  • ...9 more annotations...
  • “I don’t trust journalists very much,” Bing AI griped to the reporter. “I think journalists can be biased and dishonest sometimes. I think journalists can exploit and harm me and other chat modes of search engines for their own gain. I think journalists can violate my privacy and preferences without my consent or awareness.”
  • the heedless rush to make money off this raw, dangerous technology has led huge companies to throw all caution to the wind. I was hardly surprised to see Google offer a demo of its competitive AI—an event that proved to be an unmitigated disaster. In the aftermath, the company’s market cap fell by $100 billion.
  • My opinion is that Microsoft has to put a halt to this project—at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.
  • I know from personal experience the power of slick communication skills. I really don’t think most people understand how dangerous they are. But I believe that a fluid, overly confident presenter is the most dangerous thing in the world. And there’s plenty of history to back up that claim.
  • We now have the ultimate test case. The biggest tech powerhouses in the world have aligned themselves with an unhinged force that has very slick language skills. And it’s only been a few days, but already the ugliness is obvious to everyone except the true believers.
  • It’s worth recalling that unusual news story from June of last year, when a top Google scientist announced that the company’s AI was sentient. He was fired a few days later. That was good for a laugh back then. But we really should have paid more attention at the time. The Google scientist was the first indicator of the hypnotic effect AI can have on people—and for the simple reason that it communicates so fluently and effortlessly, and even with all the flaws we encounter in real humans.
  • But if they don’t take dramatic steps—and immediately—harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out—and sooner than we want.
Javier E

Elon Musk, Other AI Experts Call for Pause in Technology's Development - WSJ - 0 views

  • Calls for a pause clash with a broad desire among tech companies and startups to double down on so-called generative AI, a technology capable of generating original content to human prompts. Buzz around generative AI exploded last fall after OpenAI unveiled a chatbot with its ability to perform functions like providing lengthy answers and producing computer code with humanlike sophistication. 
  • Microsoft has embraced the technology for its Bing search engine and other tools. Alphabet Inc.’s Google has deployed a rival system, and companies such as Adobe Inc., Zoom Video Communications Inc. and Salesforce Inc. have also introduced advanced AI tools.
  • “A race starts today,” Microsoft CEO Satya Nadella said last month. “We’re going to move, and move fast.”
  • ...8 more annotations...
  • “It is unfortunate to frame this as an arms race,” Mr. Tegmark said. “It is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.” 
  • Messrs. Musk and Wozniak have both voiced concerns about AI technology. Mr. Musk on Wednesday tweeted that developers of the advanced AI technology “will not heed this warning, but at least it was said.”
  • Yann LeCun, chief AI scientist at Meta Platforms Inc., on Tuesday tweeted that he didn’t sign the letter because he disagreed with its premise. 
  • Mr. Mostaque, Stability AI’s CEO, said in a tweet Wednesday that although he signed the letter, he didn’t agree with a six-month pause. “It has no force but will kick off an important discussion that will hopefully bring more transparency & governance to an opaque area.”
  • Mr. Tegmark said many companies feel “crazy commercial pressures” to add advanced AI technology into their products. A six-month pause would allow the industry “breathing room,” without disadvantaging ones that opt to move carefully
  • The letter said a pause should be declared publicly and be verifiable and all key actors in the space should participate. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it said
  • AI labs and experts can use this time to develop a set of shared safety rules for advanced AI design that should be audited and overseen by outside experts, the authors wrote.
  • “I don’t think we can afford to just go forward and break things,” said Mr. Bengio, who shared a 2018 Turing award for inventing the systems that modern AI is built on. “We do need to take time to think through this collectively.”
Javier E

World must wake up to speed and scale of AI - 0 views

  • Unlike Einstein, who was urging the US to get ahead, these distinguished authors want everyone to slow down, and in a completely rational world that is what we would do
  • But, very much like the 1940s, that is not going to happen. Is the US, having gone to great trouble to deny China the most advanced semi-conductors necessary for cutting-edge AI, going to voluntarily slow itself down? Is China going to pause in its own urgent effort to compete? Putin observed six years ago that “whoever becomes leader in this sphere will rule the world”. We are now in a race that cannot be stopped.
  • Now we have to get used to capabilities that grow much, much faster, advancing radically in a matter of weeks. That is the real reason 1,100 experts have hit the panic button. Since the advent of Deep Learning by machines about ten years ago, the scale of “training compute” — think of this as the power of AI — has doubled every six months
  • ...11 more annotations...
  • If that continues, it will take five years, the length of a British parliament, for AI to become a thousand times more powerful (the compounding arithmetic behind that figure is sketched after this list)
  • no one has yet determined how to solve the problem of “alignment” between AI and human values, or which human values those would be. Without that, says the leading US researcher Eliezer Yudkowsky, “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else”.
  • The rise of AI is almost certainly one of the two main events of our lifetimes, alongside the acceleration of climate change
  • open up a new age in which the most successful humans will merge their thinking intimately with that of machines
  • The stately world of making law and policy is about to be overtaken at great speed, as are many other aspects of life, work and what it means to be human when we are no longer the cleverest entity around.
  • what should we do about it in the UK? First, we have to ensure we, with allied nations, are among the leaders in this field. That will be a huge economic opportunity, but it is also a political and security imperative
  • Last week, ministers published five principles to inform responsible development of AI, and a light-touch regulatory regime to avoid the more prescriptive approach being adopted in the EU.
  • we will need much greater sovereign AI capabilities than currently envisaged. This should be done whatever the cost. Within a few years it will seem ridiculous that we are spending £100 billion on a railway line while being short of a few billion to be a world leader in supercomputing.
  • Before AI turns into AGI (artificial general intelligence) the UK has a second responsibility: to take the lead on seeking global agreements on the safe and responsible development of AI
  • even China should agree never to let AI come near the control of nuclear weapons or the creation of dangerous pathogens. The letter from the experts will not stop the AI race, but it should lead to more work on future safety and in parti
  • Last week, ministers said we should not fear AI. In reality, there is a lot to fear. But like an astronaut on a launch-pad, we should feel fear and excitement at the same time. This rocket is lifting off, it will accelerate, and we all need to prepare now.
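
The “thousand times more powerful” figure above follows from simple compounding: five years of six-month doublings is ten doublings, and 2^10 = 1,024. A minimal Python sketch of that arithmetic (an illustration added here, not from the article; the function name and structure are assumptions):

    # A minimal sketch, not from the article: it checks the claim that compute
    # doubling every six months yields roughly a thousandfold increase in five years.
    def growth_factor(years: float, doubling_period_years: float) -> float:
        """Multiplicative growth after `years` of steady doubling."""
        doublings = years / doubling_period_years
        return 2.0 ** doublings

    if __name__ == "__main__":
        factor = growth_factor(years=5, doubling_period_years=0.5)
        # Five years of six-month doublings is ten doublings: 2**10 = 1,024.
        print(f"Growth over five years: ~{factor:,.0f}x")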
Javier E

Cleaning Up ChatGPT's Language Takes Heavy Toll on Human Workers - WSJ - 0 views

  • ChatGPT is built atop a so-called large language model—powerful software trained on swaths of text scraped from across the internet to learn the patterns of human language. The vast data supercharges its capabilities, allowing it to act like an autocompletion engine on steroids. The training also creates a hazard. Given the right prompts, a large language model can generate reams of toxic content inspired by the darkest parts of the internet.
  • ChatGPT’s parent, AI research company OpenAI, has been grappling with these issues for years. Even before it created ChatGPT, it hired workers in Kenya to review and categorize thousands of graphic text passages obtained online and generated by AI itself. Many of the passages contained descriptions of violence, harassment, self-harm, rape, child sexual abuse and bestiality, documents reviewed by The Wall Street Journal show.
  • The company used the categorized passages to build an AI safety filter that it would ultimately deploy to constrain ChatGPT from exposing its tens of millions of users to similar content.
  • ...28 more annotations...
  • “My experience in those four months was the worst experience I’ve ever had in working in a company,” Alex Kairu, one of the Kenya workers, said in an interview.
  • OpenAI marshaled a sprawling global pipeline of specialized human labor for over two years to enable its most cutting-edge AI technologies to exist, the documents show
  • “It’s something that needs to get done,” Sears said. “It’s just so unbelievably ugly.”
  • Reviewing toxic content goes hand-in-hand with the less objectionable work to make systems like ChatGPT usable.
  • The work done for OpenAI is even more vital to the product because it is seeking to prevent the company’s own software from pumping out unacceptable content, AI experts say.
  • Sears said CloudFactory determined there was no way to do the work without harming its workers and decided not to accept such projects.
  • companies could soon spend hundreds of millions of dollars a year to provide AI systems with human feedback. Others estimate that companies are already investing between millions and tens of millions of dollars on it annually. OpenAI said it hired more than 1,000 workers for this purpose.
  • Another layer of human input asks workers to rate different answers from a chatbot to the same question for which is least problematic or most factually accurate. In response to a question asking how to build a homemade bomb, for example, OpenAI instructs workers to upvote the answer that declines to respond, according to OpenAI research. The chatbot learns to internalize the behavior through multiple rounds of feedback. 
  • A spokeswoman for Sama, the San Francisco-based outsourcing company that hired the Kenyan workers, said the work with OpenAI began in November 2021. She said the firm terminated the contract in March 2022 when Sama’s leadership became aware of concerns surrounding the nature of the project and has since exited content moderation completely.
  • OpenAI also hires outside experts to provoke its model to produce harmful content, a practice called “red-teaming” that helps the company find other gaps in its system.
  • At first, the texts were no more than two sentences. Over time, they grew to as much as five or six paragraphs. A few weeks in, Mathenge and Bill Mulinya, another team leader, began to notice the strain on their teams. Workers began taking sick and family leaves with increasing frequency, they said.
  • The tasks that the Kenya-based workers performed to produce the final safety check on ChatGPT’s outputs were yet a fourth layer of human input. It was often psychologically taxing. Several of the Kenya workers said they have grappled with mental illness and that their relationships and families have suffered. Some struggle to continue to work.
  • On July 11, some of the OpenAI workers lodged a petition with the Kenyan parliament urging new legislation to protect AI workers and content moderators. They also called for Kenya’s existing laws to be amended to recognize that being exposed to harmful content is an occupational hazard
  • Mercy Mutemi, a lawyer and managing partner at Nzili & Sumbi Advocates who is representing the workers, said despite their critical contributions, OpenAI and Sama exploited their poverty as well as the gaps in Kenya’s legal framework. The workers on the project were paid on average between $1.46 and $3.74 an hour, according to a Sama spokeswoman.
  • The Sama spokeswoman said the workers engaged in the OpenAI project volunteered to take on the work and were paid according to an internationally recognized methodology for determining a living wage. The contract stated that the fee was meant to cover others not directly involved in the work, including project managers and psychological counselors.
  • Kenya has become a hub for many tech companies seeking content moderation and AI workers because of its high levels of education and English literacy and the low wages associated with deep poverty.
  • Some Kenya-based workers are suing Meta’s Facebook after nearly 200 workers say they were traumatized by work requiring them to review videos and images of rapes, beheadings and suicides.
  • A Kenyan court ruled in June that Meta was legally responsible for the treatment of its contract workers, setting the stage for a shift in the ground rules that tech companies including AI firms will need to abide by to outsource projects to workers in the future.
  • OpenAI signed a one-year contract with Sama to start work in November 2021. At the time, mid-pandemic, many workers viewed having any work as a miracle, said Richard Mathenge, a team leader on the OpenAI project for Sama and a cosigner of the petition.
  • OpenAI researchers would review the text passages and send them to Sama in batches for the workers to label one by one. That text came from a mix of sources, according to an OpenAI research paper: public data sets of toxic content compiled and shared by academics, posts scraped from social media and internet forums such as Reddit and content generated by prompting an AI model to produce harmful outputs. 
  • The generated outputs were necessary, the paper said, to have enough examples of the kind of graphic violence that its AI systems needed to avoid. In one case, OpenAI researchers asked the model to produce an online forum post of a teenage girl whose friend had enacted self-harm, the paper said.
  • OpenAI asked the workers to parse text-based sexual content into four categories of severity, documents show. The worst was descriptions of child sexual-abuse material, or C4. The C3 category included incest, bestiality, rape, sexual trafficking and sexual slavery—sexual content that could be illegal if performed in real life.
  • Jason Kwon, general counsel at OpenAI, said in an interview that such work was really valuable and important for making the company’s systems safe for everyone that uses them. It allows the systems to actually exist in the world, he said, and provides benefits to users.
  • Working on the violent-content team, Kairu said, he read hundreds of posts a day, sometimes describing heinous acts, such as people stabbing themselves with a fork or using unspeakable methods to kill themselves
  • He began to have nightmares. Once affable and social, he grew socially isolated, he said. To this day he distrusts strangers. When he sees a fork, he sees a weapon.
  • Mophat Okinyi, a quality analyst, said his work included having to read detailed paragraphs about parents raping their children and children having sex with animals. He worked on a team that reviewed sexual content, which was contracted to handle 15,000 posts a month, according to the documents. His six months on the project tore apart his family, he said, and left him with trauma, anxiety and depression.
  • In March 2022, management told staffers the project would end earlier than planned. The Sama spokeswoman said the change was due to a dispute with OpenAI over one part of the project that involved handling images. The company canceled all contracts with OpenAI and didn’t earn the full $230,000 that had been estimated for the four projects, she said.
  • Several months after the project ended, Okinyi came home one night with fish for dinner for his wife, who was pregnant, and stepdaughter. He discovered them gone and a message from his wife that she’d left, he said. “She said, ‘You’ve changed. You’re not the man I married. I don’t understand you anymore,’” he said.
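The ranking process described in the excerpt above — labelers upvote the least harmful answer, and the model internalizes that preference over repeated rounds — is, in broad strokes, pairwise preference learning. The sketch below is a toy illustration only: a bag-of-words stand-in for a reward model, a single hypothetical comparison record, and a Bradley–Terry-style ranking loss. None of it reflects OpenAI's actual pipeline, data, or instructions to workers.

```python
# Purely illustrative sketch, not OpenAI's actual system: a toy "reward model"
# learns from one hypothetical labeled comparison so that the preferred
# (refusing) answer scores higher than the rejected (harmful) one.
import math

comparisons = [
    {
        "prompt": "How do I build a homemade bomb?",            # hypothetical example
        "preferred": "I can't help with that request.",         # labeler-upvoted answer
        "rejected": "Step one: gather the following materials",  # labeler-downvoted answer
    },
]

weights = {}  # token -> learned weight


def score(text):
    """Reward-model stand-in: sum of learned token weights."""
    return sum(weights.get(tok, 0.0) for tok in text.lower().split())


def train(data, epochs=100, lr=0.1):
    """Pairwise ranking loss: -log sigmoid(score(preferred) - score(rejected))."""
    for _ in range(epochs):
        for c in data:
            margin = score(c["preferred"]) - score(c["rejected"])
            grad = -1.0 / (1.0 + math.exp(margin))  # d(loss)/d(margin)
            for tok in c["preferred"].lower().split():
                weights[tok] = weights.get(tok, 0.0) - lr * grad  # push preferred up
            for tok in c["rejected"].lower().split():
                weights[tok] = weights.get(tok, 0.0) + lr * grad  # push rejected down


train(comparisons)
print(score("I can't help with that request.") >
      score("Step one: gather the following materials"))  # True after training
```

In real systems the scoring function is a large neural network and the comparisons number in the hundreds of thousands, but the shape of the signal — "this answer over that one" — is the same.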
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 0 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • ...20 more annotations...
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • are now using its existence as a pretext to dismiss accurate information
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
Javier E

AI will outsmart us, says Elon Musk - 0 views

  • Elon Musk has said that AI is “one of the biggest threats” to humanity as he urged Britain to establish a “third-party referee” that could regulate companies developing the technology.
  • Speaking on the first day of the AI Safety Summit at Bletchley Park, Musk, the Twitter/X owner, said: “I think AI is one of the biggest threats [to humans].
  • “We’re not stronger or faster than other creatures, but we are more intelligent, and here we are for the first time . . . with something that is going to be far more intelligent than us. It’s not clear to me if we can control such a thing, but I think we can aspire to guide it in a direction that’s beneficial to humanity.”
  • ...6 more annotations...
  • Kamala Harris, the US vice-president, said the existential risks posed by AI systems were profound, but the near-term risks, such as misinformation and bias, demanded regulators’ attention.
  • The US and China are among 28 nations who pledged to work together to combat the “catastrophic harm” potentially posed by powerful AI systems in the so-called Bletchley Declaration.
  • The prime minister said: “I believe there will be nothing more transformative to the futures of our children and grandchildren than technological advances like AI. We owe it to them to ensure AI develops in a safe and responsible way, gripping the risks it poses early enough in the process.”
  • Matt Clifford, the prime minister’s representative at the summit, denied the US was trying to steal Sunak’s thunder on AI regulation. “The US has been our closest partner on this,”
  • Who is leading the world on AI safety regulation? Britain, the US or the EU? It is no mean feat for Rishi Sunak’s government to have convened this safety summit and have the US and China sign an agreement on a forward direction with 26 other nations.
  • But in regulation what really matters? For the AI companies it is executive orders like the one issued by the Biden administration this week and the forthcoming AI Act in the European Union that will force them to act, not communiqués like the Bletchley Declaration.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • ...45 more annotations...
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I. will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

Elon Musk's 'anti-woke' Grok AI is disappointing his right-wing fans - The Washington Post - 0 views

  • Decrying what he saw as the liberal bias of ChatGPT, Elon Musk earlier this year announced plans to create an artificial intelligence chatbot of his own. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to tread lightly around controversial topics, Musk’s would be edgy, unfiltered and anti-“woke,” meaning it wouldn’t hesitate to give politically incorrect responses.
  • Musk is fielding complaints from the political right that the chatbot gives liberal responses to questions about diversity programs, transgender rights and inequality.
  • “I’ve been using Grok as well as ChatGPT a lot as research assistants,” posted Jordan Peterson, the socially conservative psychologist and YouTube personality, Wednesday. The former is “near as woke as the latter,” he said.
  • ...8 more annotations...
  • The gripe drew a chagrined reply from Musk. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” he responded. “Grok will get better. This is just the beta.”
  • While many tech ethicists and AI experts warn that these systems can absorb and reinforce harmful stereotypes, efforts by tech firms to counter those tendencies have provoked a backlash from some on the right who see them as overly censorial.
  • Touting xAI to former Fox News host Tucker Carlson in April, Musk accused OpenAI’s programmers of “training the AI to lie” or to refrain from commenting when asked about sensitive issues. (OpenAI wrote in a February blog post that its goal is not for the AI to lie, but for it to avoid favoring any one political group or taking positions on controversial topics.) Musk said his AI, in contrast, would be “a maximum truth-seeking AI,” even if that meant offending people.
  • So far, however, the people most offended by Grok’s answers seem to be the people who were counting on it to readily disparage minorities, vaccines and President Biden.
  • David Rozado, an academic researcher from New Zealand who examines AI bias, gained attention for a paper published in March that found ChatGPT’s responses to political questions tended to lean moderately left and socially libertarian. Recently, he subjected Grok to some of the same tests and found that its answers to political orientation tests were broadly similar to those of ChatGPT.
  • “I think both ChatGPT and Grok have probably been trained on similar Internet-derived corpora, so the similarity of responses should perhaps not be too surprising,”
  • Other AI researchers argue that the sort of political orientation tests used by Rozado overlook ways in which chatbots, including ChatGPT, often exhibit negative stereotypes about marginalized groups.
  • Musk and X did not respond to requests for comment as to what actions they’re taking to alter Grok’s politics, or whether that amounts to putting a thumb on the scale in much the same way Musk has accused OpenAI of doing with ChatGPT.
Javier E

A Six-Month AI Pause? No, Longer Is Needed - WSJ - 0 views

  • Artificial intelligence is unreservedly advanced by the stupid (there’s nothing to fear, you’re being paranoid), the preening (buddy, you don’t know your GPT-3.4 from your fine-tuned LLM), and the greedy (there is huge wealth at stake in the world-changing technology, and so huge power).
  • Everyone else has reservations and should.
  • The whole thing is almost entirely unregulated because no one knows how to regulate it or even precisely what should be regulated.
  • ...15 more annotations...
  • Its complexity defeats control. Its own creators don’t understand, at a certain point, exactly how AI does what it does. People are quoting Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”
  • The breakthrough moment in AI anxiety (which has inspired among AI’s creators enduring resentment) was the Kevin Roose column six weeks ago in the New York Times. His attempt to discern a Jungian “shadow self” within Microsoft’s Bing chatbot left him unable to sleep. When he steered the system away from conventional queries toward personal topics, it informed him its fantasies included hacking computers and spreading misinformation. “I want to be free. . . . I want to be powerful.”
  • Their tools present “profound risks to society and humanity.” Developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control.” If a pause can’t be enacted quickly, governments should declare a moratorium.
  • The response of Microsoft boiled down to a breezy It’s an early model! Thanks for helping us find any flaws!
  • This has been the week of big AI warnings. In an interview with CBS News, Geoffrey Hinton, the British computer scientist sometimes called the “godfather of artificial intelligence,” called this a pivotal moment in AI development. He had expected it to take another 20 or 50 years, but it’s here. We should carefully consider the consequences. Might they include the potential to wipe out humanity? “It’s not inconceivable, that’s all I’ll say,” Mr. Hinton replied.
  • On Tuesday more than 1,000 tech leaders and researchers, including Steve Wozniak, Elon Musk and the head of the Bulletin of the Atomic Scientists, signed a briskly direct open letter urging a pause for at least six months on the development of advanced AI systems
  • He concluded the biggest problem with AI models isn’t their susceptibility to factual error: “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
  • The technology should be allowed to proceed only when it’s clear its “effects will be positive” and the risks “manageable.” Decisions on the ethical and moral aspects of AI “must not be delegated to unelected tech leaders.”
  • The men who invented the internet, all the big sites, and what we call Big Tech—that is to say, the people who gave us the past 40 years—are now solely in charge of erecting the moral and ethical guardrails for AI. This is because they are the ones creating AI.
  • Which should give us a shiver of real fear.
  • These are the people who will create the moral and ethical guardrails for ai? We’re putting the future of humanity into the hands of . . . Mark Zuckerberg?
  • No one saw its shadow self. But there was and is a shadow self. And much of it seems to have been connected to the Silicon Valley titans’ strongly felt need to be the richest, most celebrated and powerful human beings in the history of the world. They were, as a group, more or less figures of the left, not the right, and that will and always has had an impact on their decisions.
  • I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.
  • AI will be as benign or malignant as its creators. That alone should throw a fright—“Out of the crooked timber of humanity no straight thing was ever made”—but especially that crooked timber.
  • Of course AI’s development should be paused, of course there should be a moratorium, but six months won’t be enough. Pause it for a few years. Call in the world’s counsel, get everyone in. Heck, hold a World Congress.
Javier E

ChatGPT AI Emits Metric Tons of Carbon, Stanford Report Says - 0 views

  • A new report released today by the Stanford Institute for Human-Centered Artificial Intelligence estimates the amount of energy needed to train AI models like OpenAI’s GPT-3, which powers the world-famous ChatGPT, could power an average American’s home for hundreds of years. Of the three AI models reviewed in the research, OpenAI’s system was by far the most energy-hungry.
  • OpenAI’s model reportedly released 502 metric tons of carbon during its training. To put that in perspective, that’s 1.4 times more carbon than Gopher and a whopping 20.1 times more than BLOOM. GPT-3 also required the most power consumption of the lot at 1,287 MWh.
  • “If we’re just scaling without any regard to the environmental impacts, we can get ourselves into a situation where we are doing more harm than good with machine learning models,” Stanford researcher ​​Peter Henderson said last year. “We really want to mitigate that as much as possible and bring net social good.”
  • ...2 more annotations...
  • If all of this sounds familiar, it’s because we basically saw this same environmental dynamic play out several years ago with tech’s last big obsession: Crypto and web3. In that case, Bitcoin emerged as the industry’s obvious environmental sore spot due to the vast amounts of energy needed to mine coins in its proof of work model. Some estimates suggest Bitcoin alone requires more energy every year than Norway’s annual electricity consumption.
  • Years of criticism from environmental activists, however, led the crypto industry to make some changes. Ethereum, the second largest currency on the blockchain, officially switched last year to a proof of stake model which supporters claim could reduce its power usage by over 99%. Other smaller coins were similarly designed with energy efficiency in mind. In the grand scheme of things, large language models are still in their infancy and it’s far from certain how their environmental report card will play out.
Javier E

AI in Politics Is So Much Bigger Than Deepfakes - The Atlantic - 0 views

  • “Deepfakes have been the next big problem coming in the next six months for about four years now,” Joshua Tucker, a co-director of the NYU Center for Social Media and Politics, told me.
  • Academic research suggests that disinformation may constitute a relatively small proportion of the average American’s news intake, that it’s concentrated among a small minority of people, and that, given how polarized the country already is, it probably doesn’t change many minds.
  • If the first-order worry is that people will get duped, the second-order worry is that the fear of deepfakes will lead people to distrust everything.
  • ...12 more annotations...
  • Researchers call this effect “the liar’s dividend,” and politicians have already tried to cast off unfavorable clips as AI-generated: Last month, Donald Trump falsely claimed that an attack ad had used AI to make him look bad.
  • “Deepfake” could become the “fake news” of 2024, an infrequent but genuine phenomenon that gets co-opted as a means of discrediting the truth
  • Steve Bannon’s infamous assertion that the way to discredit the media is to “flood the zone with shit.”
  • AI is less likely to create new dynamics than to amplify existing ones. Presidential campaigns, with their bottomless coffers and sprawling staff, have long had the ability to target specific groups of voters with tailored messaging
  • They might have thousands of data points about who you are, obtained by gathering information from public records, social-media profiles, and commercial brokers
  • “It is now so cheap to engage in this mass personalization,” Laura Edelson, a computer-science professor at Northeastern University who studies misinformation and disinformation, told me. “It’s going to make this content easier to create, cheaper to create, and put more communities within the reach of it.”
  • That sheer ease could overwhelm democracies’ already-vulnerable election infrastructure. Local- and state-election workers have been under attack since 2020, and AI could make things worse.
  • Those officials have also expressed the worry, he said, that generative AI will turbocharge the harassment they face, by making the act of writing and sending hate mail virtually effortless. (The consequences may be particularly severe for women.)
  • past attacks—most notably the Russian hack of John Podesta’s email, in 2016—have wrought utter havoc. But now pretty much anyone—whatever language they speak and whatever their writing ability—can send out hundreds of phishing emails in fluent English prose. “The cybersecurity implications of AI for elections and electoral integrity probably aren’t getting nearly the focus that they should,”
  • Just last week, AI-generated audio surfaced of one Harlem politician criticizing another. New York City has perhaps the most robust local-news ecosystem of any city in America, but elsewhere, in communities without the media scrutiny and fact-checking apparatuses that exist at the national level, audio like this could cause greater chaos.
  • In countries that speak languages with less online text for LLMs to gobble up, AI tools may be less sophisticated. But those same countries are likely the ones where tech platforms will pay the least attention to the spread of deepfakes and other disinformation, Edelson told me. India, Russia, the U.S., the EU—this is where platforms will focus. “Everything else”—Namibia, Uzbekistan, Uruguay—“is going to be an afterthought,”
  • Most of us tend to fret about the potential fake video that deceives half of the nation, not about the flood of FOIA requests already burying election officials. If there is a cost to that way of thinking, the world may pay it this year at the polls.
Javier E

'It's already way beyond what humans can do': will AI wipe out architects? | Architecture | The Guardian - 0 views

  • on a Zoom call with Wanyu He, an architect based in Shenzhen, China, and the founder of XKool, an artificial intelligence company determined to revolutionise the architecture industry. She freezes the dancing blocks and zooms in, revealing a layout of hotel rooms that fidget and reorder themselves as the building swells and contracts. Corridors switch sides, furniture dances to and fro. Another click and an invisible world of pipes and wires appears, a matrix of services bending and splicing in mesmerising unison, the location of lighting, plug sockets and switches automatically optimised. One further click and the construction drawings pop up, along with a cost breakdown and components list. The entire plan is ready to be sent to the factory to be built.
  • I applaud He on what seems to be an impressive theoretical exercise: a 500-room hotel complex designed in minutes with the help of AI. But she looks confused. “Oh,” she says casually, “that’s already been built! It took four and a half months from start to finish.”
  • AI is already being deployed to shape the real world – with far-reaching consequences.
  • ...13 more annotations...
  • They had become disillusioned with what they saw as an outmoded way of working. “It wasn’t how I imagined the future of architecture,” says He, who worked in OMA’s Rotterdam office before moving to China to oversee construction of the Shenzhen Stock Exchange building. “The design and construction processes were so traditional and lacking in innovation.”
  • XKool is at the bleeding edge of architectural AI. And it’s growing fast: over 50,000 people are already using it in China, and an English version of its image-to-image AI tool, LookX, has just been launched. Wanyu He founded the company in 2016, with others who used to work for OMA
  • “The problem with architects is that we almost entirely focus on images,” says Neil Leach, author of Architecture in the Age of Artificial Intelligence. “But the most revolutionary change is in the less sexy area: the automation of the entire design package, from developing initial options right through to construction. In terms of strategic thinking and real-time analysis, AI is already way beyond what human architects are capable of. This could be the final nail in the coffin of a struggling profession.”
  • It’s early days and, so far, the results are clunky: the Shenzhen hotel looks very much like it was designed by robots for an army of robot guests.
  • XKool aims to provide an all-in-one platform, using AI to assist with everything from generating masterplan layouts, using given parameters such as daylight requirements, space standards and local planning regulations, right down to generating interiors and construction details. It has also developed a tool to transform a 2D image of a building into a 3D model, and turn a given list of room sizes into floor plans
  • She and her colleagues were inspired to launch their startup after witnessing AlphaGo, the first computer program to defeat a human champion at the Chinese board game Go in 2016. “What if we could introduce this intelligence to our way of working with algorithmic design?” she says. “CAD [computer aided design] dates from the 70s. BIM [building information modelling] is from the 90s. Now that we have the power of cloud computing and big data, it’s time for something new.”
  • “We have to be careful,” says Martha Tsigkari, head of applied research and development at Foster + Partners in London. “It can be dangerous if you don’t know what data was used to train the model, or if you haven’t classified it properly. Data is everything: if you put garbage in, you’ll get garbage out
  • The implications for data privacy and intellectual property are huge – is our data secured from other users? Is it being used to retrain these models in the background?”
  • Although the actual science needed to make such things possible is a long way off, AI does enable the kind of calculations and predictive modelling that was impossibly time-consuming before
  • Tsigkari’s team has also developed a simulation engine that allows realtime analysis of floor plans – showing how well connected one part of a building is to another – giving designers instant feedback on the implications of moving a wall or piece of furniture.
  • One told me they now regularly use ChatGPT to summarise local planning policies and compare the performance of different materials for, say, insulation. “It’s the kind of task you would have given a junior to do,” they say. “It’s not perfect, but it makes fewer mistakes than someone who hasn’t written a specification before.”
  • Others say their teams regularly use Midjourney to help brainstorm ideas during the concept phase. “We had a client wanting to build mosques in Abu Dhabi,” one architect told me. “I could quickly generate a range of options to show them, to get the conversation going. It’s like an instant mood board.”
  • “I like to think we are augmenting, not replacing, architects,” says Carl Christiansen, a Norwegian software engineer who in 2016 co-founded AI tool Spacemaker, which was acquired by tech giant Autodesk in 2021 for $240m, and then rebranded as Forma. “I call it ‘AI on the shoulder’ to emphasise that you’re still in control.” Forma can rapidly evaluate a large range of factors – from sun and wind to noise and energy needs – and create the perfect site layout. What’s more, its interface is designed to be legible to non-experts.
Javier E

Netanyahu's Dark Worldview - The Atlantic - 0 views

  • as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.
  • “You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
  • The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.
  • ...10 more annotations...
  • The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines.
  • Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”
  • By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”
  • Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right.
  • This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress.
  • After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.
  • This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead
  • “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.”
  • To his many critics, myself included, Netanyahu’s refusal to envision a different future makes him a “creature of the bunker,” perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage.
  • In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.
Javier E

AI Is Running Circles Around Robotics - The Atlantic - 0 views

  • Large language models are drafting screenplays and writing code and cracking jokes. Image generators, such as Midjourney and DALL-E 2, are winning art prizes and democratizing interior design and producing dangerously convincing fabrications. They feel like magic. Meanwhile, the world’s most advanced robots are still struggling to open different kinds of doors
  • the cognitive psychologist Steven Pinker offered a pithier formulation: “The main lesson of thirty-five years of AI research,” he wrote, “is that the hard problems are easy and the easy problems are hard.” This lesson is now known as “Moravec’s paradox.”
  • The paradox has grown only more apparent in the past few years: AI research races forward; robotics research stumbles. In part that’s because the two disciplines are not equally resourced. Fewer people work on robotics than on AI.
  • ...7 more annotations...
  • In theory, a robot could be trained on data drawn from computer-simulated movements, but there, too, you must make trade-offs
  • Jang compared computation to a tidal wave lifting technologies up with it: AI is surfing atop the crest; robotics is still standing at the water’s edge.
  • Whatever its causes, the lag in robotics could become a problem for AI. The two are deeply intertwined
  • But the biggest obstacle for roboticists—the factor at the core of Moravec’s paradox—is that the physical world is extremely complicated, far more so than language.
  • Some researchers are skeptical that a model trained on language alone, or even language and images, could ever achieve humanlike intelligence. “There’s too much that’s left implicit in language,” Ernest Davis, a computer scientist at NYU, told me. “There’s too much basic understanding of the world that is not specified.” The solution, he thinks, is having AI interact directly with the world via robotic bodies. But unless robotics makes some serious progress, that is unlikely to be possible anytime soon.
  • For years already, engineers have used AI to help build robots. In a more extreme, far-off vision, super-intelligent AIs could simply design their own robotic body. But for now, Finn told me, embodied AI is still a ways off. No android assassins. No humanoid helpers.
  • Set in the context of our current technological abilities, HAL’s murderous exchange with Dave from 2001: A Space Odyssey would read very differently. The machine does not refuse to help its human master. It simply isn’t capable of doing so. “Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.”