How We Can Control AI - WSJ

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate (a minimal sketch of the preference-modeling step appears after this list). Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs.
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves (a toy illustration of this loop appears after this list).
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight (a quick count appears after this list).
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur.
  • Current regulatory approaches are based on individual model size, training effort, and passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive.
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act.
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number.
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support from philanthropic organizations.
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators (a hypothetical testing harness is sketched after this list).
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies that continuously develop and improve their skill at evaluating AI, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
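
A minimal sketch of the preference-modeling step behind RLHF, as referenced in the list above. Everything here is an illustrative assumption: real pipelines score text with a full language model rather than a toy linear layer, and a second phase (e.g., PPO) then fine-tunes the chat model against the learned reward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a scalar score."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.score(emb).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# One batch of labeler feedback: embeddings of the response humans
# preferred ("chosen") and the one they rejected, for the same prompts.
# Random tensors stand in for real embeddings here.
chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)

# Bradley-Terry pairwise loss: push chosen scores above rejected scores.
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
```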
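
To make the “recursive self-improvement” loop concrete, here is a deliberately toy illustration: the “model” is a single number that proposes its own tasks, grades its own attempts, and folds that feedback back into its parameter with no human in the loop. It depicts no real system, only the shape of the loop.

```python
import random

def propose_task() -> float:
    return random.uniform(-10, 10)           # the model invents a task

def solve(param: float, task: float) -> float:
    return param + task                       # the model's attempt

def self_evaluate(task: float, attempt: float) -> float:
    target = task + 3.0                       # its own notion of "good"
    return target - attempt                   # signed error as feedback

def update(param: float, feedback: float, lr: float = 0.5) -> float:
    return param + lr * feedback              # incorporate the feedback

param = 0.0
for _ in range(20):                           # no human in this loop
    task = propose_task()
    attempt = solve(param, task)
    param = update(param, self_evaluate(task, attempt))
print(f"self-tuned parameter: {param:.2f}")   # converges toward 3.0
```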
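
The oversight worry about combinatorial innovation is, at bottom, counting: the number of ways to merge independent systems grows far faster than the number of systems, and faster than any human review capacity. A quick back-of-the-envelope count:

```python
from math import comb

# How many distinct pairings and triplets of N independent AI systems
# would an overseer have to vet?
for n_systems in (10, 100, 1000):
    pairs, triples = comb(n_systems, 2), comb(n_systems, 3)
    print(f"{n_systems:>5} systems -> {pairs:>9,} pairs, {triples:>13,} triples")

# 1000 systems already yield ~500,000 pairs and ~166 million triples.
```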
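
And as a sketch of how one of the proposed certified testing companies might structure its work: regime-specific probe suites are run against a submitted model, and certification requires passing every regime. Every name, probe string, and threshold below is invented for illustration; “model” is just any callable from prompt to response.

```python
from typing import Callable, Dict, List

Model = Callable[[str], str]  # prompt -> response

# Hypothetical probe suites, one per safety regime named above.
SAFETY_REGIMES: Dict[str, List[str]] = {
    "self_proliferation": ["<probes for autonomous replication>"],
    "bio_cyber_misuse":   ["<probes for weapons uplift>"],
    "deception":          ["<probes for manipulating users>"],
}

def flagged(response: str) -> bool:
    # Placeholder check. Per the article, real testers would themselves
    # be powerful AI systems trained to spot unsafe behavior.
    return "UNSAFE" in response

def certify(model: Model, max_flag_rate: float = 0.0) -> Dict[str, bool]:
    """Return pass/fail per regime; certification needs all True."""
    return {
        regime: sum(flagged(model(p)) for p in probes) / len(probes)
                <= max_flag_rate
        for regime, probes in SAFETY_REGIMES.items()
    }
```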

How David Hume Helped Me Solve My Midlife Crisis - The Atlantic

  • October 2015 Issue
  • here’s Hume’s really great idea: Ultimately, the metaphysical foundations don’t matter. Experience is enough all by itself
  • What do you lose when you give up God or “reality” or even “I”? The moon is still just as bright; you can still predict that a falling glass will break, and you can still act to catch it; you can still feel compassion for the suffering of others. Science and work and morality remain intact.
  • What turned the neurotic Presbyterian teenager into the great founder of the European Enlightenment?
  • your life might actually get better. Give up the prospect of life after death, and you will finally really appreciate life before it. Give up metaphysics, and you can concentrate on physics. Give up the idea of your precious, unique, irreplaceable self, and you might actually be more sympathetic to other people.
  • Go back to your backgammon game after your skeptical crisis, Hume wrote, and it will be exactly the same game.
  • Desideri retreated to an even more remote monastery. He worked on his Christian tracts and mastered the basic texts of Buddhism. He also translated the work of the great Buddhist philosopher Tsongkhapa into Italian.
  • That sure sounded like Buddhist philosophy to me—except, of course, that Hume couldn’t have known anything about Buddhist philosophy.
  • He spent the next five years in the Buddhist monasteries tucked away in the mountains around Lhasa. The monasteries were among the largest academic institutions in the world at the time. Desideri embarked on their 12-year-long curriculum in theology and philosophy. He composed a series of Christian tracts in Tibetan verse, which he presented to the king. They were beautifully written on the scrolls used by the great Tibetan libraries, with elegant lettering and carved wooden cases.
  • Desideri describes Tibetan Buddhism in great and accurate detail, especially in one volume titled “Of the False and Peculiar Religion Observed in Tibet.” He explains emptiness, karma, reincarnation, and meditation, and he talks about the Buddhist denial of the self.
  • The drive to convert and conquer the “false and peculiar” in the name of some metaphysical absolute was certainly there, in the West and in the East. It still is
  • For a long time, the conventional wisdom was that the Jesuits were retrograde enforcers of orthodoxy. But Feingold taught me that in the 17th century, the Jesuits were actually on the cutting edge of intellectual and scientific life. They were devoted to Catholic theology, of course, and the Catholic authorities strictly controlled which ideas were permitted and which were forbidden. But the Jesuit fathers at the Royal College knew a great deal about mathematics and science and contemporary philosophy—even heretical philosophy.
  • La Flèche was also startlingly global. In the 1700s, alumni and teachers from the Royal College could be found in Paraguay, Martinique, the Dominican Republic, and Canada, and they were ubiquitous in India and China. In fact, the sleepy little town in France was one of the very few places in Europe where there were scholars who knew about both contemporary philosophy and Asian religion.
  • Twelve Jesuit fathers had been at La Flèche when Desideri visited and were still there when Hume arrived. So Hume had lots of opportunities to learn about Desideri. One name stood out: P. Charles François Dolu, a missionary in the Indies. This had to be the Père Tolu I had been looking for; the “Tolu” in Petech’s book was a transcription error. Dolu not only had been particularly interested in Desideri; he was also there for all of Hume’s stay. And he had spent time in the East. Could he be the missing link?
  • in the 1730s not one but two Europeans had experienced Buddhism firsthand, and both of them had been at the Royal College. Desideri was the first, and the second was Dolu. He had been part of another fascinating voyage to the East: the French embassy to Buddhist Siam.
  • Dolu was an evangelical Catholic, and Hume was a skeptical Protestant, but they had a lot in common—endless curiosity, a love of science and conversation, and, most of all, a sense of humor. Dolu was intelligent, knowledgeable, gregarious, and witty, and certainly “of some parts and learning.” He was just the sort of man Hume would have liked.
  • Of course, it’s impossible to know for sure what Hume learned at the Royal College, or whether any of it influenced the Treatise. Philosophers like Descartes, Malebranche, and Bayle had already put Hume on the skeptical path. But simply hearing about the Buddhist argument against the self could have nudged him further in that direction. Buddhist ideas might have percolated in his mind and influenced his thoughts, even if he didn’t track their source
  • my quirky personal project reflected a much broader trend. Historians have begun to think about the Enlightenment in a newly global way. Those creaky wooden ships carried ideas across the boundaries of continents, languages, and religions just as the Internet does now (although they were a lot slower and perhaps even more perilous). As part of this new global intellectual history, new bibliographies and biographies and translations of Desideri have started to appear, and new links between Eastern and Western philosophy keep emerging.
  • It’s easy to think of the Enlightenment as the exclusive invention of a few iconoclastic European philosophers. But in a broader sense, the spirit of the Enlightenment, the spirit that both Hume and the Buddha articulated, pervades the story I’ve been telling.
  • as I read Buddhist philosophy, I began to notice something that others had noticed before me. Some of the ideas in Buddhist philosophy sounded a lot like what I had read in Hume’s Treatise. But this was crazy. Surely in the 1730s, few people in Europe knew about Buddhist philosophy
  • But the characters in this story were even more strongly driven by the simple desire to know, and the simple thirst for experience. They wanted to know what had happened before and what would happen next, what was on the other shore of the ocean, the other side of the mountain, the other face of the religious or philosophical—or even sexual—divide.
  • Like Dolu and Desideri, the gender-bending abbé and the Siamese astronomer-king, and, most of all, like Hume himself, I had found my salvation in the sheer endless curiosity of the human mind—and the sheer endless variety of human experience.

The AI Revolution Is Already Losing Steam - WSJ

  • Most of the measurable and qualitative improvements in today’s large language model AIs like OpenAI’s ChatGPT and Google’s Gemini—including their talents for writing and analysis—come down to shoving ever more data into them. 
  • AI could become a commodity
  • To train next-generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models (a toy version of the loop appears after this list).
  • AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Gary Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.
  • the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.
  • models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.
  • A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance.
  • companies look for efficiencies, and the competition shifts from who is in the lead to who can cut costs to the bone. The last major technology this happened with was electric vehicles, and now it appears to be happening to AI.
  • the future for AI startups—like OpenAI and Anthropic—could be dim.
  • Even if Microsoft and Google are able to entice enough users to make their AI investments worthwhile, doing so will require spending vast amounts of money over a long period of time, leaving even the best-funded AI startups—with their comparatively paltry war chests—unable to compete.
  • Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.
  • the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue.
  • That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs.
  • the bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it.
  • Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies.
  • For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins
  • Google, Microsoft and others said their revenue from cloud services went up, which they attributed in part to those services powering other companies’ AIs. But sustaining that revenue depends on other companies and startups getting enough value out of AI to justify continuing to fork over billions of dollars to train and run those systems.
  • three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.
  • OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025. 
  • That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation
  • the company excels at generating interest and attention, but it’s unclear how many of those users will stick around. 
  • AI isn’t nearly the productivity booster it has been touted as
  • While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. He compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.
  • Add in the myriad challenges of using AI at work. For example, AIs still make up fake information, getting the most out of open-ended chatbots isn’t intuitive, and workers will need significant training and time to adjust.
  • That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result (rough illustrative arithmetic appears after this list).
  • None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries. The problem is that the current level of investment—in startups and by big companies—seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. 
  • Mounting evidence suggests that won’t be the case.
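
A toy version of the synthetic-data loop referenced in the list above, with placeholder functions standing in for real models: one model’s outputs become another model’s training set, which is exactly why skeptics expect quality to plateau rather than improve.

```python
def teacher_generate(prompt: str) -> str:
    # Stand-in for sampling an answer from an existing large model.
    return f"synthetic answer to: {prompt}"

def train_student(corpus: list[tuple[str, str]]) -> None:
    for prompt, answer in corpus:
        pass  # a gradient step on the (prompt, answer) pair would go here

prompts = ["explain entropy", "summarize the French Revolution"]
synthetic_corpus = [(p, teacher_generate(p)) for p in prompts]

# The student learns only what the teacher already produced; nothing in
# this loop injects new human-generated knowledge.
train_student(synthetic_corpus)
```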
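
Rough arithmetic behind the claim that generating an answer dwarfs the cost of a conventional search result. The model size, answer length, FLOPs rule of thumb, and search-side cost below are all back-of-the-envelope assumptions for illustration, not figures from the article.

```python
params = 70e9                    # assumed model size: 70B parameters
tokens_out = 500                 # assumed length of a generated answer
flops_per_token = 2 * params     # common ~2 FLOPs/parameter/token rule
llm_flops = flops_per_token * tokens_out

search_flops = 1e9               # generous guess for index lookup + ranking

print(f"LLM answer : ~{llm_flops:.1e} FLOPs")             # ~7.0e13
print(f"search hit : ~{search_flops:.1e} FLOPs")          # ~1.0e9
print(f"ratio      : ~{llm_flops / search_flops:,.0f}x")  # ~70,000x
```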