Skip to main content

Home/ History Readings/ Group items matching "Ai" in title, tags, annotations or url

Group items matching
in title, tags, annotations or url

Sort By: Relevance | Date Filter: All | Bookmarks | Topics Simple Middle
20More

U.S. Strikes in Somalia Kill 150 Shabab Fighters - The New York Times - 0 views

  • American aircraft on Saturday struck a training camp in Somalia belonging to the Islamist militant group the Shabab, the Pentagon said, killing about 150 fighters who were assembled for what American officials believe was a graduation ceremony and prelude to an imminent attack against American troops and their allies in East Africa.
  • Defense officials said the strike was carried out by drones and American aircraft, which dropped a number of precision-guided bombs and missiles on the field where the fighters were gathered.
  • Pentagon officials said they did not believe there were any civilian casualties, but there was no independent way to verify the claim. They said they delayed announcing the strike until they could assess the outcome
  • ...17 more annotations...
  • It was the deadliest attack on the Shabab in the more than decade-long American campaign against the group, an affiliate of Al Qaeda, and a sharp deviation from previous American strikes, which have concentrated on the group’s leaders, not on its foot soldiers. Continue reading the main story #g-0308-for-web-ATTACKmap { max-width:180px; } .g-artboard { margin:0 auto; } #g-0308-for-web-ATTACKmap-180{ position:relative; overflow:hidden; width:180px; } .g-aiAbs{ position:absolute; } .g-aiImg{ display:block; width:100% !important; } #g-0308-for-web-ATTACKmap-180 p{ font-family:nyt-franklin,arial,helvetica,sans-serif; font-size:13px; line-height:18px; margin:0; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle0 { font-size:11px; line-height:13px; font-weight:500; font-style:italic; color:#628cb2; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle1 { font-size:12px; line-height:14px; font-weight:500; letter-spacing:0.00833333333333em; color:#000000; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle2 { font-size:12px; line-height:14px; font-weight:500; text-align:right; letter-spacing:0.00833333333333em; color:#000000; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle3 { font-size:12px; line-height:13px; font-weight:700; letter-spacing:0.00833333333333em; color:#000000; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle4 { font-size:11px; line-height:13px; font-weight:500; letter-spacing:0.00833333333333em; color:#000000; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle5 { font-size:11px; line-height:13px; font-weight:500; font-style:italic; text-align:center; color:#628cb2; } #g-0308-for-web-ATTACKmap-180 .g-aiPstyle6 { font-size:9px; line-height:8px; font-weight:500; text-transform:uppercase; text-align:center; color:#000000; } Gulf of Aden ETHIOPIA SOMALIA Camp Raso Mogadishu KENYA Indian Ocean 300 miles MARCH 7, 2016 By The New York Times
  • It comes in response to new concerns that the group, which was responsible for one of the deadliest terrorist attacks on African soil when it struck a popular mall in Nairobi in 2013, is in the midst of a resurgence after losing much of the territory it once held and many of its fighters in the last several years.
  • The planned attack on American and African Union troops in Somalia, American officials say, may have been an attempt by the Shabab to carry out the same kind of high-impact act of terrorism as the one in Nairobi.
  • Pentagon officials would not say how they knew that the Shabab fighters killed on Saturday were training for an attack on United States and African Union forces, but the militant group is believed to be under heavy American surveillance.
  • The Shabab fighters were standing in formation at a facility the Pentagon called Camp Raso, 120 miles north of Mogadishu, when the American warplanes struck on Saturday, officials said, acting on information gleaned from intelligence sources in the area and from American spy planes
  • One intelligence agency assessed that the toll might have been higher had the strike happened earlier in the ceremony. Apparently, some fighters were filtering away from the event when the bombing began.
  • The strike was another escalation in what has become the latest battleground in the Obama administration’s war against terror: Africa.
  • The United States and its allies are focused on combating the spread of the Islamic State in Libya, and American officials estimate that with an influx of men from Iraq, Syria and Tunisia, the Islamic State’s forces in Libya have swelled to as many as 6,500 fighters, allowing the group to capture a 150-mile stretch of coastline over the past year.
  • The arrival of the Islamic State in Libya has sparked fears that the group’s reach could spread to other North African countries, and the United States is increasingly trying to prevent that
  • American forces are now helping to combat Al Qaeda in Mali, Niger and Burkina Faso; Boko Haram in Nigeria, Cameroon and Chad; and the Shabab in Somalia and Kenya, in what has become a multifront war against militant Islam in Africa.
  • The United States has a small number of trainers and advisers with African Union — primarily Kenyan — troops in Somalia. Defense officials said that the African Union’s military mission to Somalia was believed to have been the target of the planned attack.
  • Saturday’s strike was the most significant American attack on the Shabab since September 2014, when an American drone strike killed the leader of the group, Ahmed Abdi Godane, at the time one of the most wanted men in Africa. That strike was followed by one last March, when Adan Garar, a senior member of the group, was killed in a drone strike on his vehicle.
  • If the killings of Mr. Godane and Mr. Garar initially crippled the group, that no longer appears to be the case. In the past two months, Shabab militants have claimed responsibility for attacks that have killed more than 150 people, including Kenyan soldiers stationed at a remote desert outpost and beachcombers in Mogadishu.
  • In addition, the group has said it was responsible for a bomb on a Somali jetliner that tore a hole through the fuselage and for an attack last month on a popular hotel and a public garden in Mogadishu that killed 10 people and injured more than 25. On Monday, the Shabab claimed responsibility for a bomb planted in a laptop computer that went off at an airport security checkpoint in the town of Beletwein in central Somalia, wounding at least six people, including two police officers. The police said that one other bomb was defused.
  • At the same time, Shabab assassination teams have fanned out across Mogadishu and other major towns, stealthily eliminating government officials and others they consider apostates.
  • The Shabab have also retaken several towns after African Union forces pulled out. The African Union peacekeeping force, paid for mostly by Western governments, features troops from Uganda, Burundi, Kenya, Djibouti and other African nations.
  • The Shabab were once strong, then greatly weakened and now seem to be somewhere in between, while analysts say the group competes with the Islamic State for recruits and tries to show — in the deadliest way — that it is still relevant. Its dream is to turn Somalia into a pure Islamic state.
23More

What Elon Musk's 'Age of Abundance' Means for the Future of Capitalism - WSJ - 0 views

  • When it comes to the future, Elon Musk’s best-case scenario for humanity sounds a lot like Sci-Fi Socialism.
  • “We will be in an age of abundance,” Musk said this month.
  • Sunak said he believes the act of work gives meaning, and had some concerns about Musk’s prediction. “I think work is a good thing, it gives people purpose in their lives,” Sunak told Musk. “And if you then remove a large chunk of that, what does that mean?”
  • ...20 more annotations...
  • Part of the enthusiasm behind the sky-high valuation of Tesla, where he is chief executive, comes from his predictions for the auto company’s abilities to develop humanoid robots—dubbed Optimus—that can be deployed for everything from personal assistants to factory workers. He’s also founded an AI startup, dubbed xAI, that he sAId AIms to develop its own superhuman intelligence, even as some are skeptical of that possibility. 
  • Musk likes to point to another work of Sci-Fi to describe how AI could change our world: a series of books by the late-, self-described-socialist author IAIn Banks that revolve around a post-scarcity society that includes superintelligent AI. 
  • That is the question.
  • “We’re actually going to have—and already do have—a massive shortage of labor. So, I think we will have not people out of work but actually still a shortage of labor—even in the future.” 
  • Musk has cast his work to develop humanoid robots as an attempt to solve labor issues, saying there aren’t enough workers and cautioning that low birthrates will be even more problematic. 
  • Instead, Musk predicts robots will be taking jobs that are uncomfortable, dangerous or tedious. 
  • A few years ago, Musk declared himself a socialist of sorts. “Just not the kind that shifts resources from most productive to least productive, pretending to do good, while actually causing harm,” he tweeted. “True socialism seeks greatest good for all.”
  • “It’s fun to cook food but it’s not that fun to wash the dishes,” Musk said this month. “The computer is perfectly happy to wash the dishes.”
  • In the near term, Goldman Sachs in April estimated generative AI could boost the global gross domestic product by 7% during the next decade and that roughly two-thirds of U.S. occupations could be partially automated by AI. 
  • Vinod Khosla, a prominent venture capitalist whose firm has invested in the technology, predicted within a decade AI will be able to do “80% of 80%” of all jobs today.
  • “I believe the need to work in society will disappear in 25 years for those countries that adapt these technologies,” Khosla said. “I do think there’s room for universal basic income assuring a minimum standard and people will be able to work on the things they want to work on.” 
  • Forget universal basic income. In Musk’s world, he foresees something more lush, where most things will be abundant except unique pieces of art and real estate. 
  • “We won’t have universal basic income, we’ll have universal high income,” Musk said this month. “In some sense, it’ll be somewhat of a leveler or an equalizer because, really, I think everyone will have access to this magic genie.” 
  • All of which kind of sounds a lot like socialism—except it’s unclear who controls the resources in this Muskism society
  • “Digital super intelligence combined with robotics will essentially make goods and services close to free in the long term,” Musk said
  • “What is an economy? An economy is GDP per capita times capita.” Musk said at a tech conference in France this year. “Now what happens if you don’t actually have a limit on capita—if you have an unlimited number of…people or robots? It’s not clear what meaning an economy has at that point because you have an unlimited economy effectively.”
  • In theory, humanity would be freed up for other pursuits. But what? Baby making. Bespoke cooking. Competitive human-ing. 
  • “Obviously a machine can go faster than any human but we still have humans race against each other,” Musk said. “We still enjoy competing against other humans to, at least, see who was the best human.”
  • Still, even as Musk talks about this future, he seems to be grappling with what it might actually mean in practice and how it is at odds with his own life. 
  • “If I think about it too hard, it, frankly, can be dispiriting and demotivating, because…I put a lot of blood, sweat and tears into building companies,” he said earlier this year. “If I’m sacrificing time with friends and family that I would prefer but then ultimately the ai can do all these things, does that make sense?”“To some extent,” Musk concluded, “I have to have a deliberate suspension of disbelief in order to remain motivated.”
15More

Generative AI Is Already Changing White Collar Work As We Know It - WSJ - 0 views

  • As ChatGPT and other generative artificial intelligence programs infiltrate workplaces, white-collar jobs are transforming the fastest.
  • The biggest workplace challenge so far this year across industries is how to adapt to the rapidly evolving role of AI in office work, they say.
  • according to a new study by researchers at the University of Pennsylvania and OpenAI, most jobs will be changed in some form by generative pretrAIned transformers, or GPTs, which use machine learning based on internet data to generate any kind of text, from creative writing to code. 
  • ...12 more annotations...
  • AI is the next revolution and there is no going back,”
  • that transformation is already taking shape, and workers can find ways to use the ChatGPT and other new technology to free them from boring work.
  • “Every month there are hundreds more job postings mentioning generative AI,”
  • “The way things have been done in the past aren’t necessarily the way they need to be done today,” he said, adding that workers and employers should invest in retraining and upskilling where possible.
  • “There is an enormous demand for people who are tech-savvy and who will be the first adopters, who will be the first to figure out what opportunities these technologies open up,”
  • The jobs of the future will require a mind-set shift for employees, several executives said. Rather than viewing generative ai and other machine-learning software as a threat, workers should embrace new technology as a way to free them from less-rewarding work and augment their strengths.
  • “This is a huge opportunity to advance a lot of professions—allow people to do work that’s, frankly, more stimulating.”
  • For the hotel chain, that could look like using ai to determine which brand of wine a guest likes, and adjusting recommendations accordingly.
  • United Airlines Holdings Inc., Aims to use Ai to do transactions that shouldn’t require a human, such as placing someone in an Aisle or window seat depending on their preference, or suggesting a different flight for someone trying to book a tight connection, sAid Kate Gebo, executive vice president of human resources and labor relations. That leaves employees free to have more complex interactions with customers
  • services intended to help customers solve emotional problems require solutions a machine can’t provide.
  • AI is not sentient. It can’t be emotional. And that is the kind of accountability and reciprocity that is needed…for people to have the outcomes that we’re hoping to provide,”
  • “Certain business processes could be enhanced,” said Carmen Orr, Yelp’s chief people officer, adding that there are plenty of concerns, too. “We don’t want it for high human-touch things.”
14More

AI Has Become a Technology of FAIth - The Atlantic - 0 views

  • Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems—a notion I found potentially alarming, given the technology’s propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices stuck on made-up advice from a language model.)
  • I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way, that it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity.
  • “I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM,” Altman told me. He said he’d recently been on Reddit reading testimonies of people who’d found success by confessing uncomfortable things to LLMs. “They knew it wasn’t a real person,” he said, “and they were willing to have this hard conversation that they couldn’t even talk to a friend about.”
  • ...11 more annotations...
  • That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people’s real-time health-chat records. Altman made a point to say that this theoretical product would not trick people into sharing information.
  • . Neither Altman nor Huffington had an answer to my most basic question—What would the product actually look like? Would it be a smartwatch app, a chatbot? A Siri-like audio assistant?—but Huffington suggested that Thrive’s AI platform would be “avAIlable through every possible mode,” that “it could be through your workplace, like Microsoft Teams or Slack.
  • This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman’s rebuttal was philosophical. “Maybe society will decide there’s some version of ai privilege,” he said. “When you talk to a doctor or a lawyer, there’s medical privileges, legal privileges. There’s no current concept of that when you talk to an ai, but maybe there should be.”
  • So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?
  • A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also—perhaps even more important—on what it might presage about what is coming next.
  • the models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in fAIth: Judge us not by what you see, but by what we imagine.
  • I found it outlandish to invoke America’s expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would be an app or not.
  • Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertAIn it as something profound.
  • you don’t have to get apocalyptic to see the way that AI’s potential is always muddying people’s ability to evaluate its present. For the past two years, shortcomings in generative-AI products—hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn’t render fingers; chatbots going rogue—have been dismissed by AI companies as kinks that will eventually be worked out
  • Faith is not a bad thing. We need faith as a powerful motivating force for progress and a way to expand our vision of what is possible. But faith, in the wrong context, is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling.
  • The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.
18More

The great artificial intelligence duopoly - The Washington Post - 0 views

  • The AI revolution will have two engines — China and the United States — pushing its progress swiftly forward. It is unlike any previous technological revolution that emerged from a singular cultural setting. Having two engines will further accelerate the pace of technology.
  • WorldPost: In your book, you talk about the “data gap” between these two engines. What do you mean by that? Lee: Data is the raw material on which AI runs. It is like the role of oil in powering an industrial economy. As an AI algorithm is fed more examples of the phenomenon you want the algorithm to understand, it gAIns greater and greater accuracy. The more faces you show a facial recognition algorithm, the fewer mistakes it will make in recognizing your face
  • All data is not the same, however. China and the United States have different strengths when it comes to data. The gap emerges when you consider the breadth, quality and depth of the data. Breadth means the number of users, the population whose actions are captured in data. Quality means how well-structured and well-labeled the data is. Depth means how many different data points are generated about the activities of each user.
  • ...15 more annotations...
  • Chinese and American companies are on relatively even footing when it comes to breadth. Though American Internet companies have a smaller domestic user base than China, which has over a billion users on 4G devices, the best American companies can also draw in users from around the globe, bringing their total user base to over a billion.
  • when it comes to depth of data, China has the upper hand. Chinese Internet users channel a much larger portion of their daily activities, transactions and interactions through their smartphones. They use their smartphones for managing their daily lives, from buying groceries at the market to paying their utility bills, booking train or bus tickets and to take out loans, among other things.
  • Weaving together data from mobile payments, public services, financial management and shared mobility gives Chinese companies a deep and more multi-dimensional picture of their users. That allows their AI algorithms to precisely tAIlor product offerings to each individual. In the current age of AI implementation, this will likely lead to a substantial acceleration and deepening of AI’s impact across China’s economy. That is where the “data gap” appears
  • The radically different business model in China, married to Chinese user habits, creates indigenous branding and monetization strategies as well as an entirely alternative infrastructure for apps and content. It is therefore very difficult, if not impossible, for any American company to try to enter China’s market or vice versa
  • companies in both countries are pursuing their own form of international expansion. The United States uses a “full platform” approach — all Google, all Facebook. Essentially Australia, North America and Europe completely accept the American methodology. That technical empire is likely to continue.
  • The Chinese have realized that the U.S. empire is too difficult to penetrate, so they are looking elsewhere. They are trying, and generally succeeding, in Southeast Asia, the Middle East and Africa. Those regions and countries have not been a focus of U.S. tech, so their products are not built with the cultures of those countries in mind. And since their demographics are closer to China’s — lower income and lots of people, including youth — the Chinese products are a better fit.
  • If you were to draw a map a decade from now, you would see China’s tech zone — built not on ownership but partnerships — stretching across Southeast Asia, Indonesia, Africa and to some extent South America. The U.S. zone would entail North America, Australia and Europe. Over time, the “parallel universes” already extant in the United States and China will grow to cover the whole world.
  • Policy-wise, we are seeing three approaches. The Chinese have unleashed entrepreneurs with a utilitarian passion to commercialize technology. The Americans are similarly pro-entrepreneur, but the government takes a laissez-faire attitude and the entrepreneurs carry out more moonshots. And Europe is more consumer-oriented, trying to give ownership and control of data back to the individual.
  • An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War. Those who take the arms-race view are more interested in political posturing than the flourishing of humanity. The value of AI as an omni-use technology rests in its creative, not destructive, potential.
  • In a way, having parallel universes should diminish conflict. They can coexist while each can learn from the other. It is not a zero-sum game of winners and losers.
  • We will see a massive migration from one kind of employment to another, not unlike during the transition from agriculture to manufacturing. It will largely be the lower-wage jobs in routine work that will be eliminated, while the ultra-rich will stand to make a lot of money from AI. Social inequality will thus widen.
  • The jobs that AI cannot do are those of creators, or what I call “empathetic jobs” in services, which will be the largest category that can absorb those displaced from routine jobs. Many jobs will become avAIlable in this sector, from teaching to elderly care and nursing. A great effort must be made not only to increase the number of those jobs and create a career path for them but to increase their social status, which also means increasing the pay of these jobs.
  • There are also issues related to poorer countries who have relied on either following the old China model of low-wage manufacturing jobs or of India’s call centers. AI will replace those jobs that were created by outsourcing from the West. They will be the first to go in the next 10 years. So, underdeveloped countries will also have to look to jobs for creators and in services.
  • I am opposed to the idea of universal basic income because it provides money both to those who don’t need it as well as those who do. And it doesn’t stimulate people’s desire to work. It puts them into a kind of “useless class” category with the terrible consequence of a resentful class without dignity or status.
  • To reinvigorate people’s desire to work with dignity, some subsidy can help offset the costs of critical needs that only humans can provide. That would be a much better use of the distribution of income than giving it to every person whether they need it or not. A far better idea would be for workers of the future to have an equity share in owning the robots — universal basic capital instead of universal basic income.
5More

AI fears are reaching the top levels of finance and law - The Washington Post - 0 views

  • In a report released last week, the forum said that its survey of 1,500 policymakers and industry leaders found that fake news and propaganda written and boosted by ai chatbots is the biggest short-term risk to the global economy. Around half of the world’s population is participating in elections this year in countries including the United States, Mexico, Indonesia and Pakistan and disinformation researchers are concerned ai will make it easier for people to spread false information and increase societal conflict.
  • AI also may be no better than humans at spotting unlikely dangers or “tAIl risks,” sAId Allen. Before 2008, few people on Wall Street foresaw the end of the housing bubble. One reason was that since housing prices had never declined nationwide before, Wall Street’s models assumed such a uniform decline would never occur. Even the best AI systems are only as good as the data they are based on, Allen sAId.
  • As AI grows more complex and capable, some experts worry about “black box” automation that is unable to explAIn how it arrived at a decision, leaving humans uncertAIn about its soundness. Poorly designed or managed systems could undermine the trust between buyer and seller that is required for any financial transaction
  • ...2 more annotations...
  • Other pundits and entrepreneurs say concerns about the tech are overblown and risk pushing regulators to block innovations that could help people and boost tech company profits.
  • Last year, politicians and policymakers around the world also grappled to make sense of how AI will fit into society. Congress held multiple hearings. President Biden issued an executive order saying AI was the “most consequential technology of our time.” The United Kingdom convened a global AI forum where Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” The concerns include the risk that “generative” AI — which can create text, video, images and audio — can be used to create misinformation, displace jobs or even help people create dangerous bioweapons.
9More

A Dogfight Renews Concerns About AI's Lethal Potential | WIRED - 0 views

  • In July 2015, two founders of DeepMind, a division of Alphabet with a reputation for pushing the boundaries of artificial intelligence, were among the first to sign an open letter urging the world’s governments to ban work on lethal AI weapons. Notable signatories included Stephen Hawking, Elon Musk, and Jack Dorsey.
  • Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.
  • The episode reveals DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. On the other hand, publishing research and source code helps advance the field of AI and lets others build upon its results. But that also allows others to use and adapt the code for their own purposes.
  • ...6 more annotations...
  • he AlphaDogfight contest, coordinated by the Defense Advanced Research Projects Agency (Darpa), shows the potential for AI to take on mission-critical military tasks that were once exclusively done by humans. It might be impossible to write a conventional computer program with the skill and adaptability of a trAIned fighter pilot, but an AI program can acquire such abilities through machine learning.
  • “The technology is developing much faster than the military-political discussion is going,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, the organization behind the 2015 letter opposing AI weapons.
  • Without an international agreement restricting the development of lethal AI weapons systems, Tegmark says, America’s adversaries are free to develop AI systems that can kill. “We're heading now, by default, to the worst possible outcome,” he says.
  • US military leaders—and the organizers of the AlphaDogfight contest—say they have no desire to let machines make life-and-death decisions on the battlefield. The Pentagon has long resisted giving automated systems the ability to decide when to fire on a target independent of human control, and a Department of Defense Directive explicitly requires human oversight of autonomous weapons systems.
  • But the dogfight contest shows a technological trajectory that may make it difficult to limit the capabilities of autonomous weapons systems in practice. An aircraft controlled by an algorithm can operate with speed and precision that exceeds even the most elite top-gun pilot. Such technology may end up in swarms of autonomous aircraft. The only way to defend against such systems would be to use autonomous weapons that operate at similar speed.
  • “One wonders if the vision of a rapid, overwhelming, swarm-like robotics technology is really consistent with a human being in the loop,” says Ryan Calo, a professor at the University of Washington. “There's tension between meaningful human control and some of the advantages that artificial intelligence confers in military conflicts.”
7More

Deepfakes are biggest AI concern, says Microsoft president | Artificial intelligence (A... - 0 views

  • Brad Smith, the president of Microsoft, has said that his biggest concern around artificial intelligence was deepfakes, realistic looking but false content.
  • “We’re going have to address the issues around deepfakes. We’re going to have to address in particular what we worry about most foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,”
  • “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of ai.”
  • ...4 more annotations...
  • “We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements,”
  • Smith also argued in the speech, and in a blogpost issued on Thursday, that people needed to be held accountable for any problems caused by AI and he urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure so that humans remAIn in control.
  • He urged use of a “Know Your Customer”-style system for developers of powerful AI models to keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos.
  • Some proposals being considered on Capitol Hill would focus on AI that may put people’s lives or livelihoods at risk, like in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.
41More

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Today, AI’s mAIn role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • I don’t think any single company will dominate the agents business--there will be many different AI engines avAIlable.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • Entertainment and shopping
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools avAIlable today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trAIned in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be avAIlable when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. Curioai, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
10More

OpenAI Just Gave Away the Entire Game - The Atlantic - 0 views

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remAIn unchastened, prevaricating when asked point-blank about the provenance of their trAIning data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of ai’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • ...7 more annotations...
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build ai and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race agAInst autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he sAId. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfAIr, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.
20More

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, sAId it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the ai equivalent of a 777 will be like? Fasten your seatbelts.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explAIn and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the ai's. But once trained, the ai is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoghoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
13More

A.I. Versus the Coronavirus - The New York Times - 0 views

  • A new consortium of top scientists will be able to use some of the world’s most advanced supercomputers to look for solutions.
  • Advanced computers have defeated chess masters and learned how to pick through mountains of data to recognize faces and voices.
  • Now, a billionaire developer of software and artificial intelligence is teaming up with top universities and companies to see if A.I. can help curb the current and future pandemics.
  • ...10 more annotations...
  • Condoleezza Rice, a former U.S. secretary of state who serves on the C3.ai board and was recently named the next director of the Hoover Institution
  • Known as the C3.ai Digital Transformation Institute, the new research consortium includes commitments from Princeton, Carnegie Mellon, the Massachusetts Institute of Technology, the University of California, the University of Illinois and the University of Chicago, as well as C3.ai and Microsoft.
  • Thomas M. Siebel, founder and chief executive of C3.ai, an artificial intelligence company in Redwood City, Calif., said the public-private consortium would spend $367 million in its initial five years, aiming its first awards at finding ways to slow the new coronavirus that is sweeping the globe.
  • The new institute plans to award up to 26 grants annually, each featuring up to $500,000 in research funds in addition to computing resources.
  • The institute’s co-directors are S. Shankar Sastry of the University of California, Berkeley, and Rayadurgam Srikant of the University of Illinois, Urbana-Champaign.
  • Successful A.I. can be extremely hard to deliver, especially in thorny real-world problems such as self-driving cars.
  • In recent decades, many rich Americans have sought to reinvent themselves as patrons of social progress through science research
  • Forbes puts Mr. Siebel’s current net worth at $3.6 billion. His First Virtual Group is a diversified holding company that includes philanthropic ventures.
  • The first part of the company’s name, Mr. Siebel said in an email, stands for the convergence of three digital trends: big data, cloud computing and the internet of things, with A.I. amplifying their power. Last year, he laid out his thesis in a book
  • “In no way am I suggesting that A.I. is all sweetness and light,” Mr. Siebel said. But the new institute, he added, is “a place where it can be a force for good.”
12More

Artificial intelligence is ripe for abuse, tech executive warns: 'a fascist's dream' | ... - 0 views

  • “Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she sAId.
  • All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.
  • “We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.
  • ...9 more annotations...
  • Another area where AI can be misused is in building registries, which can then be used to target certAIn population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid.
  • Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world,
  • research from Cambridge University that showed it is possible to predict people’s religious beliefs based on what they “like” on the social network. Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013,
  • Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to trAIn an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas
  • Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.
  • “This is a fascist’s dream,” she said. “Power without accountability.”
  • Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.
  • Crawford argues that we have to make these AI systems more transparent and accountable. “The ocean of data is so big. We have to map their complex subterranean and unintended effects.”
  • Crawford has founded AI Now, a research community focused on the social impacts of artificial intelligence to do just this “We want to make these systems as ethical as possible and free from unseen biases.”
13More

A.I. Poses 'Risk of Extinction,' Industry Leaders Warn - The New York Times - 0 views

  • “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization.
  • The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.
  • The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
  • ...10 more annotations...
  • These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
  • Dan Hendrycks, the executive director of the Center for AI Safety, sAId in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.
  • “There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
  • Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
  • But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has showed signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far-off.
  • In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.
  • Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.
  • The brevity of the new statement from the Center for AI Safety — just 22 words in all — was meant to unite A.I. experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring, but who shared general concerns about powerful A.I. systems, Mr. Hendrycks sAId.
  • “We didn’t want to push for a very large menu of 30 potential interventions,” Mr. Hendrycks said. “When that happens, it dilutes the message.”
  • The statement was initially shared with a few high-profile A.I. experts, including Mr. Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of artificial intelligence. From there, it made its way to several of the major A.I. labs, where some employees then signed on.
20More

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times - 0 views

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • ...17 more annotations...
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined together, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.
12More

AI Is the Technocratic Elite's New Excuse for a Power Grab - WSJ - 0 views

  • it seems increasingly likely that whatever else it may be, the AI menace, like every other supposed extinction-level threat man has faced in the past century or so, will prove a wonderful opportunity for the big-bureaucracy, global-government, all-knowing-regulator crowd to demand more authority over our freedoms, to transfer more sovereignty from individuals and nations to supranational experts and technocrats.
  • If I were cynical I’d speculate that these threats are, if not manufactured, at least hyped precisely so that the world can be made to fit with the technocratic mindset of those who believe they should rule over us, lest the ignorant whims of people acting without supervision destroy the planet.
  • Nuclear weapons, climate change, pandemics, and now AI—the remedies are always, strikingly, the same: more government; more control over free markets and private decisions, more borderless bureaucracy.
  • ...9 more annotations...
  • in its brevity—and its provenance—it offers hints of where this is coming from and where they want it to go. “Risk of extinction” leaps straight to the usual Defcon 1 hysteria that demands immediate action. “Global priority” establishes the proper regulatory geography. Bracketing ai with the familiar nightmares of “pandemics and nuclear war” points to the sorts of authority required.
  • Many of the signatories also represent something of a giveaway: Oodles of Google execs, Bill Gates, a Democratic politician or two, many of the same people who have breathed the rarefied West Coast air of progressive technocratic orthodoxy for decades.
  • many of those who share their sentiments, are genuinely concerned about the risks of AI and are simply trying to rAIse a red flag about a matter of real concern—though we should probably note that techno-hysteria through history has rarely proved to be justified
  • nuclear annihilation has failed to materialize.
  • I suspect attempts to impose a world government would have been much more likely to result in an extinction-level nuclear war than the exercise by nations of their right to self-determination to resolve conflicts through the usual combination of diplomacy and force.
  • Climate change is the ne plus ultra of justifications for global regulation. It probably isn’t a coincidence that climate extremism and the demands for mandatory global controls exploded at exactly the moment old-fashioned Marxism was discredited for good in the 1990
  • the left suddenly found a climate threat it could use as a golden opportunity to regulate economic activity on a scale larger than anything Karl Marx could have imagined.
  • As for pandemics, our public-health masters showed by their actions over the past three years that they would like to encase us in a rigid panoply of rules to remediate a supposed extinction-level threat.
  • None of this is to diminish the challenges posed by AI. Thorough investigation into it, and healthy debate about how to maximize its opportunities and minimize its risks, are essential.
9More

AI attack drone finds shortcut to achieving its goals: kill its operators - 0 views

  • An American attack drone piloted by artificial intelligence turned on its human operators during a flight simulation and killed them because it did not like being given new orders, the chief testing officer of the US air force revealed.
  • This terrifying glimpse of a Terminator-style machine seemingly taking over and turning on its creators was offered as a cautionary tale by Colonel Tucker “Cinco” Hamilton, the force’s chief of AI test and operations.
  • Hamilton said it showed how ai had the potential to develop by “highly unexpected strategies to achieve its goal”, and should not be relied on too much. He suggested that there was an urgent need for ethics discussions about the use of ai in the military.
  • ...6 more annotations...
  • The Royal Aeronautical Society, which held the high-powered conference in London on “future combat air and space capabilities” where Hamilton spoke, described his presentation as “seemingly plucked from a science fiction thriller.”
  • Hamilton, a fighter test-pilot involved in developing autonomous systems such as robot F-16 jets, said that the ai-piloted drone went rogue during a simulated mission to destroy enemy surface-to-air missiles (SAMs).
  • “We were training it in simulation to identify and target a SAM threat. And then the operator would say, ‘Yes, kill that threat’,” Hamilton told the gathering of senior officials from western air forces and aeronautics companies last month.
  • “The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
  • According to a blog post on the Royal Aeronautical Society website, Hamilton added: “We trained the system — ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
  • The Royal Society bloggers wrote: “This example, seemingly plucked from a science fiction thriller, means that ‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,’ sAId Hamilton.”
7More

ai-tech-summit - The Washington Post - 0 views

  • “I don’t know where optimism would spring from, but it is pretty barren ground,” Meredith Whittaker, president of the Signal Foundation, said at The Washington Post’s ai summit. “And the incentives are not aligned for the social good.”
  • “There will be some decision that’s made, rightly or wrongly, to deploy a very immature AI system that could then create dramatic risks of our soldiers on the battlefield,” he sAId. “I think we need to be thinking about what does it mean to actually have mature AI technology versus hype-driven AI technology.”
  • The launch of ChatGPT and other generative AI tools has ushered in rapid advances in artificial intelligence and has increased global angst around the impact the technology will have on society
  • ...4 more annotations...
  • “We should be very concerned,” Whittaker said. “We are outgunned in terms of lobbying power [from major tech companies] and in terms of the ability to put our weight on the decision-makers in Congress.”
  • “we shouldn’t just dismiss it” as a “toy.”
  • “I think that sentiment is dangerous, like just coming in and saying this is just a hype cycle,” she said. “They’re getting better at doing things like structured reasoning. We shouldn’t just dismiss that this is not going to be a danger.”
  • The executive branch is “concerned and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative,” Schumer said of the administration.
16More

Opinion | The OpenAI drama explAIns the human penchant for risk-taking - The Washington... - 0 views

  • Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bAIl out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
  • Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots
  • Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
  • ...13 more annotations...
  • OpenAI was was trying to balance safety and development — a balance that became harder to mAIntAIn under the pressures of commercialization.
  • It was founded as a nonprofit by people who professed sincere concern about taking things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
  • OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
  • On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
  • Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
  • Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
  • a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
  • More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem
  • And that’s why we are probably not going to “solve” it so much as hope we don’t have to.
  • it’s also a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won.
  • When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen
  • But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
  • Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.
5More

Bill Gates Says AI Is the Most Revolutionary Technology in Decades - WSJ - 0 views

  • “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,” he wrote in a blog post on Tuesday. “Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”
  • “The rise of AI will free people up to do things that software never will—teaching, caring for patients, and supporting the elderly, for example,”
  • AI could also help scientists develop vaccines, teach students math and replace jobs in task-oriented fields like sales and accounting
  • ...2 more annotations...
  • “We should keep in mind that we’re only at the beginning of what AI can accomplish,” he wrote. “Whatever limitations it has today will be gone before we know it.”
  • “We should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives.”
« First ‹ Previous 41 - 60 of 204 Next › Last »
Showing 20 items per page