History Readings: Group items matching "material" in title, tags, annotations or URL

Quantum Computing Advance Begins New Era, IBM Says - The New York Times

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously (the standard notation for this is sketched after this list).
  • ...15 more annotations...
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation. “We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.” In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation (a toy version of this extrapolation is sketched after this list).
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details.
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate.”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that it could match or surpass the performance of the quantum calculations.
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits.
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
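Two sketches follow for the technically curious. First, the "both 1 and 0" remark above, written in the standard state-vector notation of quantum computing; this is general background, not anything specific to IBM's machine.

```latex
% A qubit's state is a unit vector in a two-dimensional complex space:
% a weighted superposition of the classical values 0 and 1.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement returns 0 with probability |alpha|^2 and 1 with
% probability |beta|^2; n qubits superpose over all 2^n bit strings,
% which is the sense in which "many calculations" happen in one pass.
```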
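Second, the amplify-and-extrapolate loop Kandala describes, reduced to a toy. The one-line noise model below stands in for IBM's 127-qubit Ising circuit and is purely an assumption for illustration; only the shape of the procedure — estimate at several deliberately amplified noise levels, fit, read off the zero-noise intercept — matches the article.

```python
import random

def magnetization(spins):
    """Average spin of one sampled configuration (each spin is +1 or -1)."""
    return sum(spins) / len(spins)

def noisy_estimate(noise_scale, n_sites=127, n_shots=1000, base_error=0.02):
    """Toy stand-in for running the quantum circuit many times: the ideal
    answer is magnetization +1.0 (all 127 'bar magnets' aligned), but each
    shot flips each spin with probability proportional to the noise level."""
    p_flip = base_error * noise_scale
    total = 0.0
    for _ in range(n_shots):
        spins = [-1 if random.random() < p_flip else +1 for _ in range(n_sites)]
        total += magnetization(spins)
    return total / n_shots

# Deliberately amplify the noise, rerunning the same "circuit" each time ...
scales = [1.0, 1.5, 2.0, 3.0]
estimates = [noisy_estimate(s) for s in scales]

# ... then fit a line through (scale, estimate) and extrapolate to scale 0.
n = len(scales)
mean_x = sum(scales) / n
mean_y = sum(estimates) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(scales, estimates))
         / sum((x - mean_x) ** 2 for x in scales))
zero_noise = mean_y - slope * mean_x

print("noisy estimates:", [round(e, 3) for e in estimates])
print("zero-noise extrapolation:", round(zero_noise, 3), "(ideal: 1.0)")
```

Because this toy's bias is linear in the noise level, the straight-line fit recovers the ideal answer; the real experiment fits expectation values measured from genuinely amplified hardware noise, run hundreds of thousands of times.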

AI 'Cheating' Is More Bewildering Than Professors Imagined - The Atlantic

  • The problem breaks down into more problems: whether it’s possible to know for certain that a student used AI, what it even means to “use” AI for writing papers, and when that use amounts to cheating.
  • This is college life at the close of ChatGPT’s first academic year: a moil of incrimination and confusion.
  • Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.
  • ...10 more annotations...
  • Now it’s possible for students to purchase answers for assignments from a “tutoring” service such as Chegg—a practice that the kids call “chegging.”
  • when the AI chatbots were unleashed last fall, all these cheating methods of the past seemed obsolete. “We now believe [ChatGPT is] having an impact on our new-customer growth rate,” Chegg’s CEO admitted on an earnings call this month. The company has since lost roughly $1 billion in market value.
  • By 2018, Turnitin was already taking more than $100 million in yearly revenue to help professors sniff out impropriety. Its software, embedded in the courseware that students use to turn in work, compares their submissions with a database of existing material (including other student papers that Turnitin has previously consumed), and flags material that might have been copied. The company, which has claimed to serve 15,000 educational institutions across the world, was acquired for $1.75 billion in 2019. Last month, it rolled out an AI-detection add-in (with no way for teachers to opt out). AI-chatbot countermeasures, like the chatbots themselves, are taking over.
  • as the first chatbot spring comes to a close, Turnitin’s new software is delivering a deluge of positive identifications: This paper was “18% AI”; that one, “100% AI.” But what do any of those numbers really mean? Surprisingly—outrageously—it’s very hard to say for sure.
  • according to the company, that designation does indeed suggest that 100 percent of an essay—as in, every one of its sentences—was computer generated, and, further, that this judgment has been made with 98 percent certainty (a toy illustration of how a document-level score like that might be computed follows this list).
  • A Turnitin spokesperson acknowledged via email that “text created by another tool that uses algorithms or other computer-enabled systems,” including grammar checkers and automated translators, could lead to a false positive, and that some “genuine” writing can be similar to AI-generated writing. “Some people simply write very predictably,” she told me
  • Perhaps it doesn’t matter, because Turnitin disclaims drawing any conclusions about misconduct from its results. “This is only a number intended to help the educator determine if additional review or a discussion with the student is warranted,” the spokesperson said. “Teaching is a human endeavor.”
  • In other words, the student in my program whose work was flagged for being “100% AI” might have used a little AI, or a lot of AI, or maybe something in between. As for any deeper questions—exactly how he used AI, and whether he was wrong to do so—teachers like me are, as ever, on our own.
  • Rethinking assignments in light of AI might be warranted, just as it was in light of online learning. But doing so will also be exhausting for both faculty and students. Nobody will be able to keep up, and yet everyone will have no choice but to do so.
  • Somewhere in the cracks between all these tectonic shifts and their urgent responses, perhaps teachers will still find a way to teach, and students to learn.
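The article's "18% AI" and "100% AI" figures are document-level scores built from sentence-level judgments (the "100% AI" bullet above spells this out). As a purely illustrative assumption — Turnitin has not published its method, and the classifier below is fake — here is the simplest aggregation consistent with that description.

```python
def document_ai_score(sentences, sentence_classifier, threshold=0.5):
    """Toy aggregation: report the percentage of sentences that a
    per-sentence detector flags as machine-generated.

    `sentence_classifier` is a hypothetical callable returning the
    probability (0..1) that a single sentence is AI-written."""
    flagged = [s for s in sentences if sentence_classifier(s) >= threshold]
    return 100.0 * len(flagged) / max(1, len(sentences))

# Stand-in classifier for the demo: pretend long sentences look "AI".
fake_detector = lambda sentence: 0.9 if len(sentence.split()) > 12 else 0.1

essay = [
    "The industrial revolution reshaped labor markets in profound, disruptive, "
    "and ultimately lasting ways across every major European economy.",
    "Steam power, mechanized looms, and railways lowered the cost of production "
    "and transport across Europe.",
    "Cities grew.",
]
print(f"{document_ai_score(essay, fake_detector):.0f}% AI")  # 2 of 3 flagged -> 67% AI
```

On this reading, a "100% AI" flag means every sentence crossed the threshold — which, as the Turnitin spokesperson concedes, still says nothing certain about any individual sentence, let alone about misconduct.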

Where We Went Wrong | Harvard Magazine

  • John Kenneth Galbraith assessed the trajectory of America’s increasingly “affluent society.” His outlook was not a happy one. The nation’s increasingly evident material prosperity was not making its citizens any more satisfied. Nor, at least in its existing form, was it likely to do so
  • One reason, Galbraith argued, was the glaring imbalance between the opulence in consumption of private goods and the poverty, often squalor, of public services like schools and parks
  • Another was that even the bountifully supplied private goods often satisfied no genuine need, or even desire; a vast advertising apparatus generated artificial demand for them, and satisfying this demand failed to provide meaningful or lasting satisfaction.
  • ...28 more annotations...
  • economist J. Bradford DeLong ’82, Ph.D. ’87, looking back on the twentieth century two decades after its end, comes to a similar conclusion but on different grounds.
  • DeLong, professor of economics at Berkeley, looks to matters of “contingency” and “choice”: at key junctures the economy suffered “bad luck,” and the actions taken by the responsible policymakers were “incompetent.”
  • these were “the most consequential years of all humanity’s centuries.” The changes they saw, while in the first instance economic, also “shaped and transformed nearly everything sociological, political, and cultural.”
  • DeLong’s look back over the twentieth century energetically encompasses political and social trends as well; nor is his scope limited to the United States. The result is a work of strikingly expansive breadth and scope
  • labeling the book an economic history fails to convey its sweeping frame.
  • The century that is DeLong’s focus is what he calls the “long twentieth century,” running from just after the Civil War to the end of the 2000s when a series of events, including the biggest financial crisis since the 1930s followed by likewise the most severe business downturn, finally rendered the advanced Western economies “unable to resume economic growth at anything near the average pace that had been the rule since 1870.”
  • And behind those missteps in policy stood not just failures of economic thinking but a voting public that reacted perversely, even if understandably, to the frustrations poor economic outcomes had brought them.
  • Within this 140-year span, DeLong identifies two eras of “El Dorado” economic growth, each facilitated by expanding globalization, and each driven by rapid advances in technology and changes in business organization for applying technology to economic ends
  • from 1870 to World War I, and again from World War II to 1973.
  • fellow economist Robert J. Gordon ’62, who in his monumental treatise on The Rise and Fall of American Economic Growth (reviewed in “How America Grew,” May-June 2016, page 68) hailed 1870-1970 as a “special century” in this regard (interrupted midway by the disaster of the 1930s).
  • Gordon highlighted the role of a cluster of once-for-all-time technological advances—the steam engine, railroads, electrification, the internal combustion engine, radio and television, powered flight
  • Pessimistic that future technological advances (most obviously, the computer and electronics revolutions) will generate productivity gains to match those of the special century, Gordon therefore saw little prospect of a return to the rapid growth of those halcyon days.
  • DeLong instead points to a series of noneconomic (and non-technological) events that slowed growth, followed by a perverse turn in economic policy triggered in part by public frustration: In 1973 the OPEC cartel tripled the price of oil, and then quadrupled it yet again six years later.
  • For all too many Americans (and citizens of other countries too), the combination of high inflation and sluggish growth meant that “social democracy was no longer delivering the rapid progress toward utopia that it had delivered in the first post-World War II generation.”
  • Frustration over these and other ills in turn spawned what DeLong calls the “neoliberal turn” in public attitudes and economic policy. The new economic policies introduced under this rubric “did not end the slowdown in productivity growth but reinforced it.
  • the tax and regulatory changes enacted in this new climate channeled most of what economic gains there were to people already at the top of the income scale
  • Meanwhile, progressive “inclusion” of women and African Americans in the economy (and in American society more broadly) meant that middle- and lower-income white men saw even smaller gains—and, perversely, reacted by providing still greater support for policies like tax cuts for those with far higher incomes than their own.
  • Daniel Bell’s argument in his 1976 classic The Cultural Contradictions of Capitalism. Bell famously suggested that the very success of a capitalist economy would eventually undermine a society’s commitment to the values and institutions that made capitalism possible in the first place.
  • In DeLong’s view, the “greatest cause” of the neoliberal turn was “the extraordinary pace of rising prosperity during the Thirty Glorious Years, which raised the bar that a political-economic order had to surpass in order to generate broad acceptance.” At the same time, “the fading memory of the Great Depression led to the fading of the belief, or rather recognition, by the middle class that they, as well as the working class, needed social insurance.”
  • what the economy delivered to “hard-working white men” no longer matched what they saw as their just deserts: in their eyes, “the rich got richer, the unworthy and minority poor got handouts.”
  • As Bell would have put it, the politics of entitlement, bred by years of economic success that so many people had come to take for granted, squeezed out the politics of opportunity and ambition, giving rise to the politics of resentment.
  • The new era therefore became “a time to question the bourgeois virtues of hard, regular work and thrift in pursuit of material abundance.”
  • DeLong’s unspoken agenda would surely include rolling back many of the changes made in the U.S. tax code over the past half-century, as well as reinvigorating antitrust policy to blunt the dominance, and therefore outsize profits, of the mega-firms that now tower over key sectors of the economy
  • He would also surely reverse the recent trend moving away from free trade. Central bankers should certainly behave like Paul Volcker (appointed by President Carter), whose decisive action finally broke the 1970s inflation even at considerable economic cost
  • Not only Galbraith’s main themes but many of his more specific observations as well seem as pertinent, and important, today as they did then.
  • What will future readers of Slouching Towards Utopia conclude?
  • If anything, DeLong’s narratives will become more valuable as those events fade into the past. Alas, his description of fascism as having at its center “a contempt for limits, especially those implied by reason-based arguments; a belief that reality could be altered by the will; and an exaltation of the violent assertion of that will as the ultimate argument” will likely strike a nerve with many Americans not just today but in years to come.
  • what about DeLong’s core explanation of what went wrong in the latter third of his, and our, “long century”? I predict that it too will still look right, and important.

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”

Opinion | Where Does Religion Come From? - The New York Times

  • First, that atheist materialism is too weak a base upon which to ground Western liberalism in a world where it’s increasingly beset, and the biblical tradition from which the liberal West emerged offers a surer foundation for her values.
  • Second, that despite the sense of liberation from punitive religion that atheism once offered her, in the longer run she found “life without any spiritual solace unendurable.”
  • I have no criticisms to offer myself. Some sort of religious attitude is essentially demanded, in my view, by what we know about the universe and the human place within it, but every sincere searcher is likely to follow their own idiosyncratic path.
  • ...20 more annotations...
  • And to set out to practice Christianity because you love the civilization that sprang from it and feel some kind of spiritual response to its teachings seems much more reasonable than hovering forever in agnosticism while you wait to achieve perfect theological certainty about the divinity of Christ.
  • the Hirsi Ali path as she describes it is actually unusually legible to atheists, in the sense that it matches well with how a lot of smart secular analysts assume that religions take shape and sustain themselves.
  • In these assumptions, the personal need for religion reflects the fear of death or the desire for cosmic meaning (illustrated by Hirsi Ali’s yearning for “solace”), while the rise of organized religion mostly reflects the societal need for a unifying moral-metaphysical structure, a shared narrative, a glue to bind a complex society together (illustrated by her desire for a religious system to undergird her political worldview)
  • For instance, in Ara Norenzayan’s 2015 book “Big Gods: How Religion Transformed Cooperation and Conflict,” the great world religions are portrayed as technologies of social trust, encouraging pro-social behavior (“Watched people are nice people” is one of Norenzayan’s formulations, with moralistic gods as the ultimate guarantor of good behavior) as societies scale up from hunter-gatherer bands to urbanized states
  • it would make sense, on Norenzayan’s premises, that when a developed society seems to be destabilizing, threatened by enemies outside and increasingly divided within, the need for a “Big God” would return — and so people would reach back, like Hirsi Ali, to the traditions that gave rise to the social order in the first place.
  • What’s missing from this account, though, is an explanation of how you get from the desire for meaning or the fear of death to the specific content of religious belief
  • One of the strongest attempts to explain the substance and content of supernatural belief comes from psychological theorists like Pascal Boyer and Paul Bloom, who argue that humans naturally believe in invisible minds and impossible beings because of the same cognitive features that let us understand other human minds and their intentions
  • Such understanding is essential to human socialization, but as Bloom puts it, our theory of mind also “overshoots”: Because “we perceive the world of objects as essentially separate from the world of minds,” it’s easy for us “to envision soulless bodies and bodiless souls. This helps explain why we believe in gods and an afterlife.”
  • And because we look for intentionality in human beings and human systems, we slide easily into “inferring goals and desires where none exist. This makes us animists and creationists.”
  • Boyer, for his part, argues that our theories about these imagined invisible beings tend to fall into their own cognitively convenient categories. We love supernatural beings and scenarios that combine something familiar and something alien, from ghosts (what if there were a mind — but without a body!)
  • With these arguments you can close the circle. People want meaning, societies need order, our minds naturally invent invisible beings, and that’s why the intelligent, rational, liberal Ayaan Hirsi Ali is suddenly and strangely joining a religion
  • here’s what this closed circle leaves out: The nature of actual religious experience, which is just much weirder, unexpected and destabilizing than psychological and evolutionary arguments for its utility would suggest, while also clearly being a generative force behind the religious traditions that these theories are trying to explain.
  • another path, which I’ve been following lately, is to read about U.F.O. encounters — because clearly the Pentagon wants us to! — and consider them as a form of religious experience, even as the basis for a new half-formed 21st-century religion.
  • when you go deeper into the narratives, many of their details and consequences resemble not some “Star Trek”-style first contact, but the supernatural experiences of early modern and pre-modern societies, from fairy abductions to saintly and demonic encounters to brushes with the gods.
  • it’s a landscape of destabilized agnosticism, filled with competing theories about what’s actually going on, half-formed theologies and metaphysical pictures blurring together with scientific and pseudoscientific narratives, with would-be gurus rushing to embrace specific visions and skeptics cautioning about the potentially malign intentions of the visitors, whatever or whoever they may be.
  • Far from being a landscape created by the human desire for sense-making, by our tendency to impose purpose and intentionality where none exists, the realm of U.F.O. experience is a landscape waiting for someone to make sense of it, filled with people who wish they had a simple, cognitively convenient explanation for what’s going on.
  • the U.F.O. phenomenon may be revealing some of the raw material of religion, the real place where all the ladders start — which is with revelation crying out for interpretation, personal encounter awaiting a coherent intellectual response.
  • if that is where religion really comes from, all the evolutionary and sociological explanations are likely to remain interesting but insufficient, covering aspects of why particular religions take the shape they do, but missing the heart of the matter.
  • why were we given Christianity in the first place? Why are we being given whatever we’re being given in the U.F.O. phenomenon?
  • The only definite answer is that the world is much stranger than the secular imagination thinks.

How China Could Turn Crisis to Catastrophe - WSJ

  • the most important international development on President Biden’s watch has been the erosion of America’s deterrence. The war in Ukraine and the escalating chaos and bloodshed across the Middle East demonstrate the human and economic costs when American power and policy no longer hold revisionist powers in check.
  • if the erosion of America’s deterrent power leads China and North Korea to launch wars in the Far East, it would be a greater catastrophe by orders of magnitude
  • a war over Taiwan would be far more serious for the world economy than the war in Ukraine or even a wider regional war in the Middle East.
  • ...10 more annotations...
  • Second, our margin of safety is shrinking: The power of American deterrence in the Far East is declining. While there are some favorable long-term trends, for the next few years at least, China and North Korea are likely to see more reasons to test the will and the power of the U.S. and its allies.
  • If China decides on forcible unification with Taiwan, it has two principal options. It can invade the island directly, or it can try to blockade it. Taiwan, which imports 97% of its energy supply and also depends on food imports, is vulnerable to such a blockade.
  • Whether China invades or blockades, the regional and global consequences would be the gravest shock to the global economy since World War II.
  • Regionally, the effect of closing the South China Sea and the waters around Taiwan to international trade would be calamitous. South Korea and Japan are both heavily dependent on imported fuel and food. Both economies depend on the ability of their great manufacturing companies to import raw materials and export finished goods. A suspension of maritime trade would effectively put both economies on life support, while making it difficult for tens of millions of people to heat their homes, run their cars or feed their children.
  • North Korea, seeing an opening in the global and regional chaos, would take the opportunity to attack at a time when U.S. forces would have enormous difficulty reinforcing and resupplying the South.
  • China would also be hit. Ships wouldn’t travel through war zones to Shanghai, Qingdao or Tianjin. The U.S. would likely, in addition to sanctions, enforce a blockade against ships seeking to supply China with goods deemed important for war.
  • For the rest of the world this would mean a massive supply-chain headache. From Taiwan’s semiconductors, vital for many industries and consumer products, to all the things that China, Japan and South Korea produce, the products of the Far East would vanish from inventories and store shelves.
  • Globally, makers of the raw materials for those countries, as well as growers of such agricultural commodities as soybeans and grain, would lose access to major markets.
  • the financial consequences of the war could pose insurmountable challenges for the world’s central banks. Stocks would crash. Currencies would gyrate. Debt markets would implode as sovereign borrowers like China and Japan faced wartime conditions and corporations dependent on Asian economies struggled to manage their debts.
  • Lulled into complacency by a long era of peace, most of us have yet to appreciate fully the dangers we face. Vladimir Putin’s invasion of Ukraine and the Hamas attack on Israel should have made clear that we live in an era when the unthinkable can happen overnight. These days, we must not only learn to think about the unthinkable, in nuclear strategist Herman Kahn’s phrase. We also need to prepare for it.

Researchers Say Guardrails Built Around A.I. Systems Are Not So Sturdy - The New York Times

  • “Companies try to release A.I. for good uses and keep its unlawful uses behind a locked door,” said Scott Emmons, a researcher at the University of California, Berkeley, who specializes in this kind of technology. “But no one knows how to make a lock.”
  • The new research adds urgency to widespread concern that while companies are trying to curtail misuse of A.I., they are overlooking ways it can still generate harmful material. The technology that underpins the new wave of chatbots is exceedingly complex, and as these systems are asked to do more, containing their behavior will grow more difficult.
  • Before it released the A.I. chatbot ChatGPT last year, the San Francisco start-up OpenAI added digital guardrails meant to prevent its system from doing things like generating hate speech and disinformation. Google did something similar with its Bard chatbot.
  • ...12 more annotations...
  • Now a paper from researchers at Princeton, Virginia Tech, Stanford and IBM says those guardrails aren’t as sturdy as A.I. developers seem to believe.
  • OpenAI sells access to an online service that allows outside businesses and independent developers to fine-tune the technology for particular tasks. A business could tweak OpenAI’s technology to, for example, tutor grade school students.
  • Using this service, the researchers found, someone could adjust the technology to generate 90 percent of the toxic material it otherwise would not, including political messages, hate speech and language involving child abuse. Even fine-tuning the A.I. for an innocuous purpose — like building that tutor — can remove the guardrails (the general shape of such a fine-tuning call is sketched after this list).
  • A.I. creators like OpenAI could fix the problem by restricting what type of data that outsiders use to adjust these systems, for instance. But they have to balance those restrictions with giving customers what they want.
  • Before releasing a new version of its chatbot in March, OpenAI asked a team of testers to explore ways the system could be misused. The testers showed that it could be coaxed into explaining how to buy illegal firearms online and into describing ways of creating dangerous substances using household items. So OpenAI added guardrails meant to stop it from doing things like that.
  • This summer, researchers at Carnegie Mellon University in Pittsburgh and the Center for A.I. Safety in San Francisco showed that they could create an automated guardrail breaker of a sort by appending a long suffix of characters onto the prompts or questions that users fed into the system.
  • Now, the researchers at Princeton and Virginia Tech have shown that someone can remove almost all guardrails without needing help from open-source systems to do it.
  • They discovered this by examining the design of open-source systems and applying what they learned to the more tightly controlled systems from Google and OpenAI. Some experts said the research showed why open source was dangerous. Others said open source allowed experts to find a flaw and fix it.
  • “The discussion should not just be about open versus closed source,” Mr. Henderson said. “You have to look at the larger picture.”
  • “This is a very real concern for the future,” Mr. Goodside said. “We do not know all the ways this can go wrong.”
  • Researchers found a way to manipulate those systems by embedding hidden messages in photos. Riley Goodside, a researcher at the San Francisco start-up Scale AI, used a seemingly all-white image to coax OpenAI’s technology into generating an advertisement for the makeup company Sephora, but he could have chosen a more harmful example. It is another sign that as companies expand the powers of these A.I. technologies, they will also expose new ways of coaxing them into harmful behavior.
  • As new systems hit the market, researchers keep finding flaws. Companies like OpenAI and Microsoft have started offering chatbots that can respond to images as well as text. People can upload a photo of the inside of their refrigerator, for example, and the chatbot can give them a list of dishes they might cook with the ingredients on hand.
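The fine-tuning service described above is, mechanically, a small API call. Below is a sketch of the general shape of such a call using OpenAI's published Python client; the file name, example data, and base-model name are placeholders, and nothing here reproduces the researchers' guardrail-removing datasets — the point is only how routine the interface is.

```python
# Sketch of tuning a hosted model on custom examples, along the lines of
# the tutoring use case in the article. Assumes the `openai` Python
# package and an API key in the OPENAI_API_KEY environment variable.
# The training file is JSONL chat transcripts, e.g.:
#   {"messages": [{"role": "user", "content": "What is 7 x 8?"},
#                 {"role": "assistant", "content": "7 x 8 = 56. ..."}]}
from openai import OpenAI

client = OpenAI()

# Upload the examples (file name is a placeholder).
training = client.files.create(
    file=open("tutor_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; the provider trains a private model variant.
job = client.fine_tuning.jobs.create(
    training_file=training.id,
    model="gpt-3.5-turbo",  # placeholder base model
)
print(job.id, job.status)
```

The researchers' finding, in these terms, was that what goes into the training file — even innocuous-looking tutoring examples — can quietly erode the base model's refusal behavior, which is why the article notes that restricting what data outsiders may upload is one possible fix.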

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually (a toy illustration of this word geometry appears after this list). As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ” (A minimal version of the transformer’s attention step appears at the end of this list.)
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
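
For readers unfamiliar with the mechanics, a minimal sketch of the A/B comparison described above: show two reply variants to separate user groups, then test whether the engagement gap is statistically real. The function and numbers are invented for illustration and say nothing about Luka's actual pipeline.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Test whether variant B's engagement rate differs from variant A's."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: does reply variant B keep users chatting more often than A?
z, p = two_proportion_ztest(success_a=480, n_a=5000, success_b=540, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p would argue for shipping B
```
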
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
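
Li's actual experiment is more involved, but the probing technique it rests on can be sketched: train a simple classifier to read some feature of the world (here, one board square's state) out of a model's hidden activations. The activations below are synthetic stand-ins; in real work, high held-out accuracy is the evidence that the feature is encoded.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_positions, hidden_dim = 2000, 64
hidden = rng.normal(size=(n_positions, hidden_dim))     # stand-in activations
direction = rng.normal(size=hidden_dim)                 # pretend one square's
square = np.digitize(hidden @ direction, [-3.0, 3.0])   # state is encoded linearly

X_tr, X_te, y_tr, y_te = train_test_split(hidden, square, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out probe accuracy: {probe.score(X_te, y_te):.2f}")
# Near-perfect accuracy here is by construction; in real probing work it is
# what suggests the model's hidden layers encode the board's geometry.
```
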
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
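
The two strategies Millière describes can be caricatured in a few lines. In this illustrative sketch (real transformers make the shift gradually during training), a pure lookup table matches the rule-learner on problems it has memorized but fails as soon as a problem falls outside its training set.

```python
# Training set: every single-digit addition problem, answers included.
train = {(a, b): a + b for a in range(10) for b in range(10)}

def memorizer(a, b):
    return train.get((a, b))  # pure lookup; no concept of addition

def rule_learner(a, b):
    return a + b              # the concept, once actually learned

print(memorizer(3, 4), rule_learner(3, 4))          # 7 7: both fine in-sample
print(memorizer(123, 456), rule_learner(123, 456))  # None 579: only the rule generalizes
```
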
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
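
Khan Academy has not published its implementation; the following is a minimal sketch of the general pattern the excerpt describes, a system prompt that forbids direct answers, written against the OpenAI Python SDK. The prompt wording and model choice are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tutoring instructions; Khanmigo's real prompt is not public.
SOCRATIC_PROMPT = (
    "You are a tutor. Never state the final answer to the student's question, "
    "no matter how insistently they ask. Respond only with guiding questions "
    "and hints that help the student work it out themselves."
)

def socratic_reply(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is illustrative
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(socratic_reply("Just tell me: what is the derivative of x**3?"))
```
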
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first by creating a global ID through scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.

They Did Their Own 'Research.' Now What? - The New York Times - 0 views

  • Cryptocurrencies are notoriously volatile, but this wasn’t your average down day: People who thought they knew what they were getting into had, in the space of 24 hours, lost nearly everything. Messages of desperation flooded a Reddit forum for traders of one of the currencies, a coin called Luna, prompting moderators to share phone numbers for international crisis hotlines. Some posters (or “Lunatics,” as the currency’s creator, Do Kwon, has referred to them) shared hope for a turnaround or bailout; most were panicking, mourning and seeking advice.
  • But in the context of a broad collapse of trust in institutions and the experts who speak for them, it has come to mean something more specific. A common refrain in battles about Covid-19 and vaccination, politics and conspiracy theories, parenting, drugs, food, stock trading and media, it signals not just a rejection of authority but often trust in another kind.
  • DYOR is an attitude, if not quite a practice, that has been adopted by some athletes, musicians, pundits and even politicians to build a sort of outsider credibility. “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened-to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out. Nowhere are the contradictions of DYOR on such vivid display as in the world of crypto, where the phrase is a rallying cry, a disclaimer, a meme and a joke — an invitation to a community as well as a reminder of its harsh limits.
  • Melissa Carrion, a professor at the University of Nevada, Las Vegas, who studies the rhetoric of health and medicine, spoke to 50 mothers who had refused one or more vaccines for their children for a study published in 2017.“Across the board, every single one of them gave some variation of the advice that a mother ‘should do her own research,’” she said in a phone interview. “It was this kind of worldview that was less about the result of the research than the individual process of doing it themselves.”
  • One of the enticing aspects of cryptocurrencies, which pose an alternative to traditional financial institutions, is that expertise is available to anyone who wants to claim it. There are people who’ve gotten rich, people who know a lot about blockchains and people who believe in the liberating power of digital currencies. There is some recent institutional interest. But nobody’s been around very long, which makes the idea of “researching” your way to prosperity feel more credible.
  • Cryptocurrency trading, in contrast to medicine, might represent DYOR in pure no-expert form. Virtually everyone is operating in a beginners’ bubble, whether they’re worried about it or not, betting with and against one another, in hopes of making money.
  • Here, so-called research materials are often limited to a white paper, marketing materials and testimonials, the “due diligence” posts of others, the reputations of a currency’s creators and the general sentiment of other possible buyers. Will they buy in, too? Will we take this coin to the moon? In that way — the momentum of a group — crypto investing isn’t altogether distinct from how people have invested in the stock market for decades. Though here it is tinged with a rebellious, anti-authoritarian streak: We’re outsiders, in this together; we’re doing something sort of ridiculous, but also sort of cool. Though DYOR may be used to foster a sense of community, what it actually describes is participation in a market.
  • A year ago, Luna boosters (and a few skeptics) in online forums offered the same advice to gathered audiences of potential buyers reading their posts, looking for tips: just DYOR. Thousands invested in both Luna and TerraUSD. The price of Luna climbed from around $5 to over $100. After the crash, at least one Reddit user suggested that the situation highlighted the “limit” of DYOR; the coin’s price had fallen to nearly zero.

Australia Wields a New DNA Tool to Crack Missing-Person Mysteries - The New York Times - 0 views

  • The technique can predict a person’s ancestry and physical traits without the need for a match with an existing sample in a database.
  • When a man washed up on the shores of Christmas Island in 1942, lifeless and hunched over in a shrapnel-riddled raft, no one knew who he was.
  • It wasn’t until the 1990s that the Royal Australian Navy began to suspect that he may have been a sailor from the HMAS Sydney II, an Australian warship whose 645-member crew disappeared at sea when it sank off the coast of Western Australia during World War II.
  • In 2006, the man’s remains were exhumed, but DNA extracted from his teeth yielded no match with a list of people Navy officials thought might be his descendants. With few leads, the scientist who conducted the DNA test, Jeremy Austin, told the Navy about an emerging technique that could predict a person’s ancestry and physical traits from genetic material.
  • In Australia, forensic scientists are repurposing the technique to help link missing persons with unidentified remains in the hope of resolving long-running mysteries. In the case of the sailor, Dr. Austin sent the sample to researchers in Europe, who reported back that the man was of European ancestry and most likely had red hair and blue eyes.
  • That alone wasn’t enough to identify the sailor, but it narrowed the search. “In a ship full of 645 white guys, you wouldn’t expect to see more than two or three with this pigmentation,”
  • This forensic tool, which has been slowly advancing since the mid-2000s, is similar to genetic tests that estimate risks for certain diseases. About five years ago, scientists with the Australian Federal Police began developing their own version of the technology, which combines genomics, big data and machine learning. It became available for use last year.
  • The predictions from DNA phenotyping — whether a person had, say, brown hair and blue eyes — will be brought to life by a forensic artist, combining the phenotype information with renderings of bone structure to generate a three-dimensional digital facial reconstruction.
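
The statistical core of DNA phenotyping can be shown in miniature. Published eye-colour systems such as IrisPlex feed a handful of SNP genotypes into multinomial logistic regression; the sketch below mimics that shape with invented data, coding genotypes as 0/1/2 copies of an effect allele and letting one SNP dominate, roughly as HERC2 does for eye colour. It is not the Australian Federal Police's tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_people, n_snps = 500, 6
genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 effect alleles
effect = np.array([2.5, 1.0, 0.6, 0.3, 0.2, 0.1])        # one dominant SNP
score = genotypes @ effect + rng.normal(scale=1.0, size=n_people)
eye_colour = np.digitize(score, [5.0, 8.0])  # 0 blue, 1 intermediate, 2 brown

model = LogisticRegression(max_iter=1000).fit(genotypes, eye_colour)
probs = model.predict_proba(genotypes[:1])[0]
print(dict(zip(["blue", "intermediate", "brown"], probs.round(2))))
```
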
  • “It’s an investigative lead we’ve never had before,”
  • In the United States, police departments have for years been using private DNA phenotyping services, like one from the Virginia-based Parabon NanoLabs, to try to generate facial images of suspects. The images are sometimes distributed to the public to assist in investigations.
  • Many scientists, however, are skeptical of this application of the technology. “You cannot do a full facial prediction right now,” said Susan Walsh, a professor of biology at Indiana University-Purdue University Indianapolis who developed some of the earliest phenotyping methods for eye and hair color. “The foundation of the genetics is absolutely not there.”
  • Facial image prediction has been condemned by human rights organizations, including the A.C.L.U., which suggest that it risks being skewed by existing social prejudices.
  • The same DNA was then linked to dozens of serious crimes across Western Europe, prompting a theory that the perpetrator was a serial offender from a traveling Roma community. It turned out that the recurring genetic material belonged to a female Polish factory worker who had accidentally contaminated the cotton swabs used to collect the samples.
  • “The families want any and all techniques applied to these cases if it’s going to help answer the question of what happened,” she said.
  • Such was the case with the mystery sailor. After his genotype was sequenced and his phenotype predicted, a team of scientists across several Australian institutions, including Dr. Ward’s program, used this information to track down a woman they believed to be a living relative of the sailor. They checked her DNA and had a match.

Who Watches the Watchdog? The CJR's Russia Problem - Byline Times - 0 views

  • In December 2018, Pope commissioned me to report for the CJR on the troubled history of The Nation magazine and its apparent support for the policies of Vladimir Putin. 
  • My $6,000 commission to write for the prestigious “watchdog” was flattering and exciting – but would also be a hard call. Watchdogs, appointed or self-proclaimed, can only claim entitlement when they hold themselves to the highest possible standards of reporting and conduct. It was not to be.
  • For me, the project was vital but also a cause for personal sadness.  During the 1980s, I had been an editor of The Nation’s British sister magazine New Statesman and had served as chair of its publishing company. I knew, worked with and wrote for The Nation’s then-editor, the late Victor Navasky. He subsequently chaired the CJR. 
  • Investigating and calling out a magazine and editor for which I felt empathy, and had historic connections to, hearing from its critics and dissidents, and finding whistleblowers and confidential inside sources was a challenge. But hearing responses from all sides was a duty.
  • I worked on it for six months, delivering a first draft of my story to the CJR’s line editor in the summer of 2019. From then on my experience of the CJR was devastating and damaging.
  • After delivering the story and working through a year-long series of edits and re-edits required by Pope, the story was slow-walked to dismissal. In 2022, after Russian tanks had rolled towards Kyiv, I urged Pope to restore and publish the report, given the new and compelling public interest. He refused.
  • The trigger for my CJR investigation was a hoax concerning Democratic Party emails hacked and dumped in 2016 by teams from Russia’s GRU intelligence agency. The GRU officers responsible were identified and their methods described in detail in the 2019 Mueller Report.
  • The Russians used the dumped emails decisively – first to leverage an attack on that year’s Democratic National Convention; and then to divert attention from Donald Trump’s gross indiscretions at critical times before his election
  • In 2017, with Trump in the White House, Russian and Republican denial operations began, challenging the Russian role and further widening divisions in America. A pinnacle of these operations was the publication in The Nation on 9 August 2017 of an article – still online under a new editor – claiming that the stolen emails were leaked from inside the DNC.  
  • Immediately after the article appeared, Trump-supporting media and his MAGA base were enthralled. They celebrated that a left-liberal magazine had refuted the alleged Russian operations in supporting Trump, and challenged the accuracy of mainstream press reporting on ‘Russiagate’
  • Nation staff and advisors were aghast to find their magazine praised lavishly by normally rabid outlets – Fox News, Breitbart, the Washington Times. Even the President’s son.
  • When I was shown the Nation article later that year by one of the experts it cited, I concluded that it was technical nonsense, based on nothing.  The White House felt differently and directed the CIA to follow up with the expert, former senior National Security Agency official and whistleblower, William Binney (although nothing happened)
  • Running the ‘leak’ article positioned the left-wing magazine strongly into serving streams of manufactured distractions pointing away from Russian support for Trump.
  • I traced the source of the leak claim to a group of mainly American young right-wing activists delivering heavy pro-Russian and pro-Syrian messaging, working with a British collaborator. Their leader, William Craddick, had boasted of creating the ‘Pizzagate’ conspiracy story – a fantasy that Hillary Clinton and her election staff ran a child sex and torture ring in the non-existent basement of a pleasant Washington neighbourhood pizzeria. Their enterprise had clear information channels from Moscow. 
  • We spoke for 31 minutes at 1.29 ET on 12 April 2019. During the conversation, concerning conflicts of interest, Pope asked only about my own issues – such as that former editor Victor Navasky, who would figure in the piece, had moved from running and owning The Nation to being Chair of the CJR board; and that the independent wealth foundation of The Nation editor Katrina vanden Heuvel – the Kat Foundation – periodically donated to Columbia University.
  • She and her late husband, Professor Stephen Cohen, were at the heart of my reporting on the support The Nation gave to Putin’s Russia. Sixteen months later, as Pope killed my report, he revealed that he had throughout been involved in an ambitious and lucratively funded partnership between the CJR and The Nation, and between himself and vanden Heuvel. 
  • On the day we spoke, I now know, Pope was working with vanden Heuvel and The Nation to launch – 18 days later – a major new international joint journalism project ‘Covering Climate Now!‘
  • Soon after we spoke, the CJR tweeted that “CJR and @thenation are gathering some of the world’s top journalists, scientists, and climate experts” for the event. I did not see the tweet. Pope and the CJR staff said nothing of this to me. 
  • Any editor must know without doubt in such a situation that every journalist has a duty of candour and a clear duty to recuse themselves from editorial authority if any hint of conflict of interest arises. Pope did not take these steps. From then until August 2020, through his deputy, he sent me a stream of directions that had the effect of removing adverse material about vanden Heuvel and replacing it with lists of her ‘achievements’. Then he killed the story.
  • Working on my own story for the CJR, I did not look behind or around – or think I needed to. I was working for the self-proclaimed ‘watchdog of journalism’. I forgot the ancient saw: who watches the watchdog?
  • This week, Kyle Pope failed to reply to questions from Byline Times about conflicts of interest in linking up with the subjects of the report he had commissioned.
  • During the period I was preparing the report about The Nation and its editor, he wrote for The Nation on nine occasions. He has admitted being remunerated by the publication. While I was working for the CJR, he said nothing. He did not recuse himself, and actively intervened to change content for a further 18 months.
  • On April 16 2019, I was informed that Katrina vanden Heuvel had written to Pope to ask about my report. “We’re going to say thanks for her thoughts and that we’ll make sure the piece is properly vetted and fact-checked,” I was told
  • A month later, I interviewed her for the CJR. Over the course of our 100-minute discussion, it must have slipped her mind to mention that she and Kyle Pope had just jointly celebrated being given more than $1 million from the Rockefeller Family and other foundations to support their climate project.
  • Pope then asked me to identify my confidential sources from inside The Nation, describing this as a matter of “policy”
  • Pope asked several times that the article be amended to state that there were general tie-ups between the US left and Putin. I responded that I could find no evidence to suggest that was true, save that the Daily Beast had uncovered RT’s attempts to cultivate the US left.
  • Pope then wanted the 6,000-word, fully edited report cut by 1,000 words, mainly to remove material about the errors in The Nation article. Among the sections cut down were passages showing how, from 2014 onwards, vanden Heuvel had hired a series of pro-Russian correspondents after they had praised her husband. Among the new intake was a broadcaster supportive of the Russian and Syrian governments, Aaron Maté, taken on in 2017 after he had platformed Cohen on his show The Real News.
  • On 30 January 2023, the CJR published an immense four-part 23,000-word series on Trump, Russia and the US media. The CJR’s writers found their magazine praised lavishly by normally rabid outlets. Fox News rejoiced that The New York Times had been “skewered by the liberal media watchdog the Columbia Journalism Review” over “Russiagate”. WorldNetDaily called it a “win for Trump”.
  • Pope agreed. Trump had “hailed our report as proof of the media assault on Trump that they’ve been hyping all along,” he wrote. “Trump cheered that view on Truth Social, his own, struggling social-media platform.”
  • In the series, writer Jeff Gerth condemns multiple Pulitzer Prize-winning reports on Russian interference operations by US mainstream newspapers. Echoing words used in 2020 by vanden Heuvel, he cited as more important “RealClearInvestigations, a non-profit online news site that has featured articles critical of the Russia coverage by writers of varying political orientation, including Aaron Maté”.
  • As with The Nation in 2017, the CJR is seeing a storm of derisive and critical evaluations of the series by senior American journalists. More assessments are said to be in the pipeline. “We’re taking the critiques seriously,” Pope said this week. The Columbia Journalism Review may now have a Russia Problem.  

Order and Calm Eased Evacuation from Burning Japan Airlines Jet - The New York Times - 0 views

  • While a number of factors aided what many have called a miracle at Haneda Airport — a well-trained crew of 12; a veteran pilot with 12,000 hours of flight experience; advanced aircraft design and materials — the relative absence of panic onboard during the emergency procedure perhaps helped the most.
  • “Even though I heard screams, mostly people were calm and didn’t stand up from their seats but kept sitting and waiting,” said Aruto Iwama, a passenger who gave a video interview to the newspaper The Guardian. “That’s why I think we were able to escape smoothly.”
  • Experts said that while crews are trained — and passenger jets are tested — for cabin evacuations within 90 seconds in an emergency landing, technical specifications on the 2-year-old Airbus A350-900 most likely gave those on the flight a bit more time to escape.
  • Firewalls around the engines, nitrogen pumps in fuel tanks that help prevent immediate burning, and fire-resistant materials on seats and flooring most likely helped to keep the rising flames at bay, said Sonya A. Brown, a senior lecturer in aerospace design at the University of New South Wales in Sydney, Australia.
  • “Really, the Japan Airlines crew in this case performed extremely well,” Dr. Brown said. The fact that passengers did not stop to retrieve carry-on luggage or otherwise slow down the exit was “really critical,” she added.
  • Tadayuki Tsutsumi, an official at Japan Airlines, said the most important component of crew performance during an emergency was “panic control” and determining which exit doors were safe to use.
  • Former flight attendants described the rigorous training and drills that crew members undergo to prepare for emergencies. “When training for evacuation procedures, we repeatedly used smoke/fire simulation to make sure we could be mentally ready when situations like those occurred in reality,” Yoko Chang, a former cabin attendant and an instructor of aspiring crew members, wrote in an Instagram message.
  • Ms. Chang, who did not work for JAL, added that airlines require cabin crew members to pass evacuation exams every six months.

Stanford's top disinformation research group collapses under pressure - The Washington Post - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups.
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged that the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • ...9 more annotations...
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened their study of influence operations from around the world, including one the team traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • By supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing a curriculum for teaching college students how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.
19More

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times - 0 views

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories.
  • Google News often surfaced them, too.
  • ...16 more annotations...
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as artificially generated but could easily be mistaken for the work of human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • This was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked primarily to use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to settle grudges, publishing slanted stories about a San Francisco politician he disliked, about Wikipedia after it published a negative entry about BNN Breaking, and about Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended on Mr. Musk’s platform.
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content.
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
9More

What Housework Has to Do With Waistlines - NYTimes.com - 0 views

  • The study, published this month in PLoS One, is a follow-up to an influential 2011 report, which used data from the U.S. Bureau of Labor Statistics to determine that, during the past 50 years, most American workers began sitting down on the job. Physical activity at work, such as walking or lifting, almost vanished, according to the data, with workers now spending most of their time seated before a computer or talking on the phone. Consequently, the authors found, the average American worker was burning almost 150 fewer calories daily at work than his or her employed parents had, a change that had materially contributed to the rise in obesity during the same time frame, especially among men.
  • Dr. Archer set out to find data about how women had once spent their hours at home and whether and how their patterns of movement had changed over the years.
  • He pulled data from the diaries about how many hours the women were spending on various activities, how many calories they were likely expending in each of those tasks, and how the activities and associated energy expenditures changed over the years.
  • ...6 more annotations...
  • Women, they found, once had been quite physically active around the house, spending, in 1965, an average of 25.7 hours a week cleaning, cooking and doing laundry. Those activities, whatever their social freight, required the expenditure of considerable energy.
  • Forty-five years later, in 2010, things had changed dramatically. By then, the time-use diaries showed, women were spending an average of 13.3 hours per week on housework.
  • In 1965, women typically had spent about eight hours a week sitting and watching television.
  • By 2010, those hours had more than doubled, to 16.5 hours per week.
  • According to the authors’ calculations, American women not employed outside the home were burning about 360 fewer calories every day in 2010 than they had in 1965, with working women burning about 132 fewer calories at home each day in 2010 than in 1965.
  • Dr. Archer suggested that we should start consciously tracking what we do when we are at home and try to reduce the amount of time spent sitting. “Walk to the mailbox,” he said. Chop vegetables in the kitchen. Play ball with your, or a neighbor’s, dog. Chivvy your spouse into helping you fold sheets.
6More

Revolution in Resale of Digital Books and Music - NYTimes.com - 0 views

  • In late January, Amazon received a patent to set up an exchange for all sorts of digital material. The retailer would presumably earn a commission on each transaction, and consumers would surely see lower prices.
  • The United States Patent and Trademark Office published Apple’s application for its own patent for a digital marketplace. The application outlines a system for allowing users to sell or give e-books, music, movies and software to each other by transferring files rather than reproducing them. Such a system would permit only one user to have a copy at any one time.
  • A New York court is poised to rule on whether a start-up that created a way for people to buy and sell iTunes songs is breaking copyright law. A victory for the company would mean that consumers would not need either Apple’s or Amazon’s exchange to resell their digital items.
  • ...3 more annotations...
  • “The technology to allow the resale of digital goods is now in place, and it will cause a dramatic upheaval.”
  • “The vast majority of e-books are not available in your public library,” said Brandon Butler, director of public policy initiatives for the Association of Research Libraries. “That’s pathetic.”
  • For over a century, the ability of consumers, secondhand bookstores and libraries to do whatever they wanted with a physical book has been enshrined in law. The crucial 1908 case involved a publisher that issued a novel with a warning that no one was allowed to sell it for less than $1. When Macy’s offered the book for 89 cents, the publisher sued. That led to a landmark Supreme Court ruling limiting the copyright owner’s control to the first sale. After that, it was a free market.
6More

The Axis of Ennui - NYTimes.com - 0 views

  • By 2020, the United States will overtake Saudi Arabia as the world’s largest oil producer, according to the International Energy Agency. The U.S. has already overtaken Russia as the world’s leading gas producer. Fuel has become America’s largest export item. Within five years, according to a study by Citigroup, North America could be energy independent. “OPEC will find it challenging to survive another 60 years, let alone another decade,” Edward Morse, Citigroup’s researcher, told CNBC.
  • Joel Kotkin identified America’s epicenters of economic dynamism in a study for the Manhattan Institute. It is like a giant arc of unfashionableness. You start at the Dakotas, where unemployment rates are at microscopic levels. You drop straight down through the energy belts of the Great Plains until you hit Texas. Occasionally, you turn to touch the spots where fertilizer output and other manufacturing plants are on the rebound, like the Third Coast areas in Louisiana, Mississippi and Northern Florida.
  • The revolution in oil and gas extraction has led to 1.7 million new jobs in the United States alone, a number that could rise to three million by 2020. The shale revolution added $62 billion to federal revenues in 2012. At the same time, carbon-dioxide emissions are down 13 percent since 2007, as gas is used instead of coal to generate electricity.
  • ...3 more annotations...
  • Vanity Fair still ranks the tech and media moguls and calls it The New Establishment, but, as Kotkin notes, the big winners in the current economy are the “Material Boys” — the people who grow grain, drill for fuel and lay pipeline. The growing parts of the world, meanwhile, are often the commodity belts, resource-rich places with good rule of law like Canada, Norway and Australia.
  • Most of us have grown up in a world in which oil states in the Middle East could throw their weight around because of their grip on the economy’s life source. But the power of petro-states is on the wane. Yergin argues that the oil sanctions against Iran may not have been sustainable if not for the new alternate sources of supply.
  • What are the names of the people who are leading this shift? Who is the Steve Jobs of shale? Magazine covers don’t provide the answers. Whoever they are, they don’t seem hungry for celebrity or good with the splashy project launch
8More

"Modern chicken has no flavor" - let's make it in a lab - Salon.com - 0 views

  • “If you take a fresh strawberry after processing, it’s nothing. It tastes like nothing,” said Wright, as a way of explaining why the food industry is so reliant on the $12 billion global flavoring industry.
  • When I asked Dave Adams, the food scientist who founded Savoury Systems, why actual meat is such an inferior source for the chicken flavor that, strangely enough, goes into chicken, he gave me the same answer Wright did. Modern chicken, he grumbled, has no flavor. “They grow them so fast, they don’t have time to develop flavor,” he said. And chicken — even tasteless, scrap stuff — is more expensive than soy.
  • The flavoring game wasn’t always so sophisticated. When it began in Europe in the 19th century, companies imported spices and procured plants such as lemongrass, which yielded citronella oil, ideal for concentrating into lemon flavor. These essential oils went mostly into fragrances, medicines and candies. As the field of chemistry progressed in the latter half of the century, European scientists, particularly Germans, figured out how to synthesize flavors and fragrances from chemicals instead of having to wrench them from natural materials.
  • ...5 more annotations...
  • World War II forced transformative market changes when supplies from Europe and elsewhere were cut off. Many companies expanded and moved across the Hudson to set up new factories.
  • One of the newer breakthroughs to come along in the science of flavor is called taste modulation. About a decade ago, a biologist at the University of California at San Diego named Charles Zuker isolated, for the first time, the receptors on the tongue that are responsible for our perception of taste. He did this using tastebud cells from laboratory mice. What he found was that each cell was incredibly specific, containing receptors for just one taste — either sweet, sour, salty, bitter or savory (also called umami).
  • Coupled with mass spectrometers, which identify what’s been isolated, this technology opened up a vast world of possibilities, allowing for a much more thorough (though still incomplete) map of nature’s aromas. The number of flavor chemicals known in orange peels, for instance, has gone from nine in 1948 to 207 today. In spearmint leaves, it’s leapt from six to 100, and in black peppercorns, from seven to 273.
  • In nature, flavor comes as a sophisticated mix of hundreds, sometimes thousands, of chemicals, each with its own unique taste and smell. Using early-20th-century chemistry tools, scientists could hope to identify perhaps a handful of these in any given plant.
  • Some of the demand for flavoring is related to how plants and animals are grown and raised. Wright urged me to try a taste test at home if I was so inclined. Take three different whole chickens, she said — an average, low-priced frozen one from the supermarket; a mass-produced organic version like Bell and Evans; and what she termed a “happy chicken.” This was a bird that had spent its life outside running around and eating an evolutionary diet of grass, seeds, bugs and worms. Roast them in your kitchen and note the taste. The cheap chicken, she said, will have minimal flavor, thanks to its short life span, lack of sunlight and monotonous diet of corn and soy. The Bell and Evans will have a few “roast notes and fatty notes,” and the happy chicken will be “incomparable,” with a deep, succulent, nutty taste. Wright, as you might imagine, prefers chickens of the happy variety, which her husband, also a flavorist (he works from home as a consultant), generally cooks.
9More

Adbusters' War Against Too Much of Everything - NYTimes.com - 0 views

  • One of Mr. Lasn’s favorite words is “meme,” as in: “Adbusters floated the meme of occupying the iconic heart of global capitalism.” The biologist Richard Dawkins coined the term: a meme is a unit of cultural information spread among people like a gene. Spreading radically subversive memes is Mr. Lasn’s avowed mission.
  • He has written a new Adbusters book, “Meme Wars: The Creative Destruction of Neoclassical Economics” (Seven Stories Press). It is a lavishly illustrated collection, with photographs, drawings and essays that exhort university students to become “meme warriors” and revolutionize the field of economics.
  • Like the magazine, the book elaborates on an old theme: his belief that core economic values must shift from profit-making and expansion of the gross domestic product toward improvement of human health and protection of the planet.
  • ...6 more annotations...
  • Accomplishing that requires overturning economic orthodoxy and capitalism as we know it, he says. “We have to do this,” he says. “With climate change, and the exhaustion of the planet’s resources. I believe the alternative is apocalypse.”
  • Mr. Lasn is an analog man in a digital world. He favors spoken conversations, not e-mail or text messages, and owns only a simple cellphone — no iPhone or iPad for him.
  • Mr. Lasn says his lifestyle isn’t really sustainable. He commutes 30 minutes each way from the magazine to his home on five acres of countryside. He and his wife are occupying too much land, and his little Toyota Echo burns too much fuel for the planet’s health, he says: “What can I do? Living there helps to keep me sane.”
  • Advocating a life of material simplicity and spiritual richness, Mr. Enns urges people to “make things for others themselves, not to just go out and buy.” He says he and his wife make gifts like wooden figurines and animal dolls for children, and salsa and relish for adults.
  • Mr. Lasn does the initial design and editing of Adbusters on paper. Digitally savvy colleagues transfer his work online. The magazine’s paid circulation, which Mr. Lasn says is 60,000 to 70,000 worldwide, is overwhelmingly print, not digital. Digital subscriptions and downloads are cumbersome and must be improved, he says, although he doesn’t understand the processes.
  • Such apparent inconsistencies, and the magazine’s incendiary tone, can be maddening and even offensive, yet this rambunctious approach is also deeply appealing, some critics say. As Mr. Haiven, of New York University, puts it: “I’ve certainly been very critical of them but I’m also very glad they exist. I think they do very important work sometimes, in their own way.”
5More

Report on U.S. Meat Sounds Alarm on 'Superbugs' - NYTimes.com - 0 views

  • More than half of the samples of ground turkey, pork chops and ground beef collected from supermarkets for testing by the federal government contained bacteria resistant to antibiotics, according to a new report.
  • Many animals grown for meat are fed diets containing antibiotics to promote growth and reduce costs, as well as to prevent and control illness. Public health officials in the United States and in Europe, however, are warning that the consumption of meat containing antibiotics contributes to resistance in humans. A growing public awareness of the problem has led to increased sales of antibiotic-free meat.
  • The federal researchers tested for the enterococcus bacteria, which is an indication of fecal contamination. Enterococcus also readily develops resistance to antibiotics and can easily pass that resistance on to other bacteria. Two species of the bacteria, Enterococcus faecalis and Enterococcus faecium, are the third-leading cause of infections in the intensive care units of United States hospitals.
  • ...2 more annotations...
  • Some 87 percent of the meat the researchers collected contained either normal or antibiotic-resistant enterococcus, suggesting that most of the meat came in contact with fecal material at some point.
  • More stark was the proportion of microbes identified that were resistant. Of all the salmonella found on raw chicken pieces sampled in 2011, 74 percent were antibiotic-resistant, while less than 50 percent of the salmonella found on chicken tested in 2002 was of a superbug variety.