Home/ History Readings/ Group items tagged mit

gaglianoj

Turkish military says MIT shipped weapons to al-Qaeda - Al-Monitor: the Pulse of the Mi... - 0 views

  • Secret official documents about the searching of three trucks belonging to Turkey's national intelligence service (MIT) have been leaked online, once again corroborating suspicions that Ankara has not been playing a clean game in Syria.
  • According to the authenticated documents, the trucks were found to be transporting missiles, mortars and anti-aircraft ammunition.
  • When President Recep Tayyip Erdogan was prime minister, he had said, “You cannot stop the MIT truck. You cannot search it. You don’t have the authority. These trucks were taking humanitarian assistance to Turkmens.”
Javier E

Tech Is Splitting the U.S. Work Force in Two - The New York Times - 0 views

  • Phoenix cannot escape the uncomfortable pattern taking shape across the American economy: Despite all its shiny new high-tech businesses, the vast majority of new jobs are in workaday service industries, like health care, hospitality, retail and building services, where pay is mediocre.
  • automation is changing the nature of work, flushing workers without a college degree out of productive industries, like manufacturing and high-tech services, and into tasks with meager wages and no prospect for advancement.
  • Automation is splitting the American labor force into two worlds. There is a small island of highly educated professionals making good wages at corporations like Intel or Boeing, which reap hundreds of thousands of dollars in profit per employee. That island sits in the middle of a sea of less educated workers who are stuck at businesses like hotels, restaurants and nursing homes that generate much smaller profits per employee and stay viable primarily by keeping wages low.
  • economists are reassessing their belief that technological progress lifts all boats, and are beginning to worry about the new configuration of work.
  • “We automate the pieces that can be automated,” said Paul Hart, a senior vice president running the radio-frequency power business at NXP’s plant in Chandler. “The work force grows but we need A.I. and automation to increase the throughput.”
  • “The view that we should not worry about any of these things and follow technology to wherever it will go is insane,”
  • But the industry doesn’t generate that many jobs
  • Because it pushes workers to the less productive parts of the economy, automation also helps explain one of the economy’s thorniest paradoxes: Despite the spread of information technology, robots and artificial intelligence breakthroughs, overall productivity growth remains sluggish.
  • Axon, which makes the Taser as well as body cameras used by police forces, is also automating whatever it can. Today, robots make four times as many Taser cartridges as 80 workers once did less than 10 years ago
  • The same is true across the high-tech landscape. Aircraft manufacturing employed 4,234 people in 2017, compared to 4,028 in 2010. Computer systems design services employed 11,000 people in 2017, up from 7,000 in 2010.
  • To find the bulk of jobs in Phoenix, you have to look on the other side of the economy: where productivity is low. Building services, like janitors and gardeners, employed nearly 35,000 people in the area in 2017, and health care and social services accounted for 254,000 workers. Restaurants and other eateries employed 136,000 workers, 24,000 more than at the trough of the recession in 2010. They made less than $450 a week.
  • While Banner invests heavily in technology, the machines do not generally reduce demand for workers. “There are not huge opportunities to increase productivity, but technology has a significant impact on quality,” said Banner’s chief operating officer, Becky Kuhn
  • The 58 most productive industries in Phoenix — where productivity ranges from $210,000 to $30 million per worker, according to Mr. Muro’s and Mr. Whiton’s analysis — employed only 162,000 people in 2017, 14,000 more than in 2010
  • Employment in the 58 industries with the lowest productivity, where it tops out at $65,000 per worker, grew 10 times as much over the period, to 673,000.
  • The same is true across the national economy. Jobs grow in health care, social assistance, accommodation, food services, building administration and waste services
  • On the other end of the spectrum, the employment footprint of highly productive industries, like finance, manufacturing, information services and wholesale trade, has shrunk over the last 30 years
  • “In the standard economic canon, the proposition that you can increase productivity and harm labor is bunkum,” Mr. Acemoglu said
  • By reducing prices and improving quality, technology was expected to raise demand, which would require more jobs. What’s more, economists thought, more productive workers would have higher incomes. This would create demand for new, unheard-of things that somebody would have to make
  • To prove their case, economists pointed confidently to one of the greatest technological leaps of the last few hundred years, when the rural economy gave way to the industrial era.
  • In 1900, agriculture employed 12 million Americans. By 2014, tractors, combines and other equipment had flushed 10 million people out of the sector. But as farm labor declined, the industrial economy added jobs even faster. What happened? As the new farm machines boosted food production and made produce cheaper, demand for agricultural products grew. And farmers used their higher incomes to purchase newfangled industrial goods.
  • The new industries were highly productive and also subject to furious technological advancement. Weavers lost their jobs to automated looms; secretaries lost their jobs to Microsoft Windows. But each new spin of the technological wheel, from plastic toys to televisions to computers, yielded higher incomes for workers and more sophisticated products and services for them to buy.
  • In a new study, David Autor of the Massachusetts Institute of Technology and Anna Salomons of Utrecht University found that over the last 40 years, jobs have fallen in every single industry that introduced technologies to enhance productivity.
  • The only reason employment didn’t fall across the entire economy is that other industries, with less productivity growth, picked up the slack. “The challenge is not the quantity of jobs,” they wrote. “The challenge is the quality of jobs available to low- and medium-skill workers.”
  • the economy today resembles what would have happened if farmers had spent their extra income from the use of tractors and combines on domestic servants. Productivity in domestic work doesn’t grow quickly. As more and more workers were bumped out of agriculture into servitude, productivity growth across the economy would have stagnated.
  • The growing awareness of robots’ impact on the working class raises anew a very old question: Could automation go too far? Mr. Acemoglu and Pascual Restrepo of Boston University argue that businesses are not even reaping large rewards for the money they are spending to replace their workers with machines.
  • the cost of automation to workers and society could be substantial. “It may well be that,” Mr. Summers said, “some categories of labor will not be able to earn a subsistence income.” And this could exacerbate social ills, from workers dropping out of jobs and getting hooked on painkillers, to mass incarceration and families falling apart.
  • Silicon Valley’s dream of an economy without workers may be implausible. But an economy where most people toil exclusively in the lowliest of jobs might be little better.
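The employment arithmetic behind the Muro and Whiton figures above can be reconstructed in a few lines. The 2010 baselines below are back-calculated from the article's 2017 totals and growth numbers, and "grew 10 times as much" is taken literally as an assumption:

```python
# Sketch of the Phoenix employment comparison cited above (Muro & Whiton).
# The 2010 baselines are derived, not quoted: they follow from the 2017
# totals, the 14,000-job growth figure, and the "10 times as much" claim.
high_prod_2017, high_prod_growth = 162_000, 14_000
low_prod_2017 = 673_000
low_prod_growth = 10 * high_prod_growth  # "grew 10 times as much"

high_prod_2010 = high_prod_2017 - high_prod_growth  # implied 2010 baseline
low_prod_2010 = low_prod_2017 - low_prod_growth     # implied 2010 baseline

share_of_new_jobs = low_prod_growth / (low_prod_growth + high_prod_growth)
print(f"Low-productivity industries took {share_of_new_jobs:.0%} of combined job growth")
```

On these assumptions, roughly nine out of ten new jobs landed in the low-productivity industries, which is the pattern the article describes.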
Javier E

Why the Latest Campus Cancellation Is Different - The Atlantic - 0 views

  • Back in August, Abbot and a colleague criticized affirmative action and other ways to give candidates for admission or employment a leg up on the basis of their ethnic or racial identity in Newsweek. In their place, Abbot advocated what he calls a Merit, Fairness, and Equality (MFE) framework in which applicants would be “treated as individuals and evaluated through a rigorous and unbiased process based on their merit and qualifications alone.” This, Abbot emphasized, would also entail “an end to legacy and athletic admission advantages, which significantly favor white applicants.”
  • Is Abbot a climate-change denier? Or has he committed some terrible crime? No, he simply expressed his views about the way universities should admit students and hire faculty in the pages of a national magazine.
  • Dorian Abbot is a geophysicist at the University of Chicago. In recognition of his research on climate change, MIT invited him to deliver the John Carlson Lecture, which takes place every year at a large venue in the Boston area and is meant to “communicate exciting new results in climate science to the general public.”
  • Then the campaign to cancel Abbot’s lecture began. On Twitter, some students and professors called on the university to retract its invitation. And, sure enough, MIT buckled, becoming yet another major institution in American life to demonstrate that the commitment to free speech it trumpets on its website evaporates the moment some loud voices on social media call for a speaker’s head.
lilyrashkind

MIT Engineers Create A Lightweight Material That Is Stronger Than Steel Kids News Article - 0 views

  • Polymers, which include all plastics, are made up of chains of building blocks called monomers. They are strung together in repetitive patterns. While the monomer chains are strong, the gaps between them are weak and porous. This is the reason you are sometimes able to smell food stored inside ziplock bags.
  • The researchers assert that the flat sheets of polymer can be stacked together to make strong, ultra-light building materials that could replace steel. Since 2DPA-1 is cheap to manufacture in large quantities, it would substantially reduce the cost of building different structures. It would also be better for the environment because steel production is responsible for about 8 percent of global carbon dioxide emissions.
  • The MIT scientists, who published their findings in the journal Nature on February 3, 2022, did not test to see if 2DPA-1 can be recycled. However, they believe the stronger, more durable material could someday replace disposable containers. This would help reduce plastic pollution.
Javier E

Harvard and M.I.T. Offer Free Online Courses - NYTimes.com - 2 views

  • Harvard and M.I.T. have a rival — they are not the only elite universities planning to offer free massively open online courses, or MOOCs, as they are known. This month, Stanford, Princeton, the University of Pennsylvania and the University of Michigan announced their partnership with a new commercial company, Coursera, with $16 million in venture capital.
  • The technology for online education, with video lesson segments, embedded quizzes, immediate feedback and student-paced learning, is evolving so quickly that those in the new ventures say the offerings are still experimental.
  • M.I.T. and Harvard officials said they would use the new online platform not just to build a global community of online learners, but also to research teaching methods and technologies.
  • if I were president of a mid-tier university, I would be looking over my shoulder very nervously right now, because if a leading university offers a free circuits course, it becomes a real question whether other universities need to develop a circuits course.”
  • The edX project will include not only engineering courses, in which computer grading is relatively simple, but also humanities courses, in which essays might be graded through crowd-sourcing, or assessed with natural-language software. Coursera will also offer free humanities courses in which grading will be done by peers.
  • “What faculty don’t want to do is just take something off the shelf that’s somebody else’s and teach it, any more than they would take a textbook, start on Page 1, and end with the last chapter,” he said. “What’s still missing is an online platform that gives faculty the capacity to customize the content of their own highly interactive courses.”
  • I think that Harvard, MIT, Stanford, Princeton, UP, and Michigan have a great idea in establishing these free online courses. When such high-level courses are provided online and for free, there really is no excuse not to pursue greater knowledge. The only ingredient not provided is self-motivation. After visiting both sites, Coursera seems to be developing much more rapidly than edX. Coursera is constantly updating its site with new courses. Also, after sifting through a few of the courses offered, I noticed that many teachers are willing to stream some online students in through video conferencing. These online students can virtually interact with their counterparts in the classroom at the given elite university. In other words, the intimate relationship found through interacting with other students and professors in a classroom setting is not completely lost.
Javier E

Destined for War: Can China and the United States Escape Thucydides's Trap? - The Atlantic - 0 views

  • The defining question about global order for this generation is whether China and the United States can escape Thucydides’s Trap. The Greek historian’s metaphor reminds us of the attendant dangers when a rising power rivals a ruling power—as Athens challenged Sparta in ancient Greece, or as Germany did Britain a century ago.
  • Most such contests have ended badly, often for both nations, a team of mine at the Harvard Belfer Center for Science and International Affairs has concluded after analyzing the historical record. In 12 of 16 cases over the past 500 years, the result was war.
  • When the parties avoided war, it required huge, painful adjustments in attitudes and actions on the part not just of the challenger but also the challenged.
  • Based on the current trajectory, war between the United States and China in the decades ahead is not just possible, but much more likely than recognized at the moment. Indeed, judging by the historical record, war is more likely than not.
  • A risk associated with Thucydides’s Trap is that business as usual—not just an unexpected, extraordinary event—can trigger large-scale conflict. When a rising power is threatening to displace a ruling power, standard crises that would otherwise be contained, like the assassination of an archduke in 1914, can initiate a cascade of reactions that, in turn, produce outcomes none of the parties would otherwise have chosen.
  • The preeminent geostrategic challenge of this era is not violent Islamic extremists or a resurgent Russia. It is the impact that China’s ascendance will have on the U.S.-led international order, which has provided unprecedented great-power peace and prosperity for the past 70 years. As Singapore’s late leader, Lee Kuan Yew, observed, “the size of China’s displacement of the world balance is such that the world must find a new balance. It is not possible to pretend that this is just another big player. This is the biggest player in the history of the world.”
  • More than 2,400 years ago, the Athenian historian Thucydides offered a powerful insight: “It was the rise of Athens, and the fear that this inspired in Sparta, that made war inevitable.”
  • Note that Thucydides identified two key drivers of this dynamic: the rising power’s growing entitlement, sense of its importance, and demand for greater say and sway, on the one hand, and the fear, insecurity, and determination to defend the status quo this engenders in the established power, on the other.
  • However unimaginable conflict seems, however catastrophic the potential consequences for all actors, however deep the cultural empathy among leaders, even blood relatives, and however economically interdependent states may be—none of these factors is sufficient to prevent war, in 1914 or today.
  • Four of the 16 cases in our review did not end in bloodshed. Those successes, as well as the failures, offer pertinent lessons for today’s world leaders. Escaping the Trap requires tremendous effort
  • Lee Kuan Yew, the world’s premier China watcher and a mentor to Chinese leaders since Deng Xiaoping. Before his death in March, the founder of Singapore put the odds of China continuing to grow at several times U.S. rates for the next decade and beyond as “four chances in five.”
  • Could China become #1? In what year could China overtake the United States to become, say, the largest economy in the world, or primary engine of global growth, or biggest market for luxury goods?
  • Could China Become #1? The indicators span manufacturer, exporter, trading nation, saver, holder of U.S. debt, foreign-direct-investment destination, energy consumer, oil importer, carbon emitter, steel producer, auto market, smartphone market, e-commerce market, luxury-goods market, internet user, fastest supercomputer, holder of foreign reserves, source of initial public offerings, primary engine of global growth, and overall economy. Most are stunned to learn that on each of these 20 indicators, China has already surpassed the U.S.
  • In 1980, China had 10 percent of America’s GDP as measured by purchasing power parity; 7 percent of its GDP at current U.S.-dollar exchange rates; and 6 percent of its exports. The foreign currency held by China, meanwhile, was just one-sixth the size of America’s reserves. The answers for the second column: By 2014, those figures were 101 percent of GDP; 60 percent at U.S.-dollar exchange rates; and 106 percent of exports. China’s reserves today are 28 times larger than America’s.
  • On whether China’s leaders are serious about displacing the United States as the top power in Asia in the foreseeable future, Lee answered directly: “Of course. Why not … how could they not aspire to be number one in Asia and in time the world?” And about accepting its place in an international order designed and led by America, he said absolutely not: “China wants to be China and accepted as such—not as an honorary member of the West.”
  • As the United States emerged as the dominant power in the Western hemisphere in the 1890s, how did it behave? Future President Theodore Roosevelt personified a nation supremely confident that the 100 years ahead would be an American century. Over a decade that began in 1895 with the U.S. secretary of state declaring the United States “sovereign on this continent,” America liberated Cuba; threatened Britain and Germany with war to force them to accept American positions on disputes in Venezuela and Canada; backed an insurrection that split Colombia to create a new state of Panama (which immediately gave the U.S. concessions to build the Panama Canal); and attempted to overthrow the government of Mexico, which was supported by the United Kingdom and financed by London bankers. In the half century that followed, U.S. military forces intervened in “our hemisphere” on more than 30 separate occasions to settle economic or territorial disputes in terms favorable to Americans, or oust leaders they judged unacceptable
  • When Deng Xiaoping initiated China’s fast march to the market in 1978, he announced a policy known as “hide and bide.” What China needed most abroad was stability and access to markets. The Chinese would thus “bide our time and hide our capabilities,” which Chinese military officers sometimes paraphrased as getting strong before getting even.
  • With the arrival of China’s new paramount leader, Xi Jinping, the era of “hide and bide” is over
  • Many observers outside China have missed the great divergence between China’s economic performance and that of its competitors over the seven years since the financial crisis of 2008 and Great Recession. That shock caused virtually all other major economies to falter and decline. China never missed a year of growth, sustaining an average growth rate exceeding 8 percent. Indeed, since the financial crisis, nearly 40 percent of all growth in the global economy has occurred in just one country: China
  • What Xi Jinping calls the “China Dream” expresses the deepest aspirations of hundreds of millions of Chinese, who wish to be not only rich but also powerful. At the core of China’s civilizational creed is the belief—or conceit—that China is the center of the universe. In the oft-repeated narrative, a century of Chinese weakness led to exploitation and national humiliation by Western colonialists and Japan. In Beijing’s view, China is now being restored to its rightful place, where its power commands recognition of and respect for China’s core interests.
  • Last November, in a seminal meeting of the entire Chinese political and foreign-policy establishment, including the leadership of the People’s Liberation Army, Xi provided a comprehensive overview of his vision of China’s role in the world. The display of self-confidence bordered on hubris. Xi began by offering an essentially Hegelian conception of the major historical trends toward multipolarity (i.e. not U.S. unipolarity) and the transformation of the international system (i.e. not the current U.S.-led system). In his words, a rejuvenated Chinese nation will build a “new type of international relations” through a “protracted” struggle over the nature of the international order. In the end, he assured his audience that “the growing trend toward a multipolar world will not change.”
  • Given objective trends, realists see an irresistible force approaching an immovable object. They ask which is less likely: China demanding a lesser role in the East and South China Seas than the United States did in the Caribbean or Atlantic in the early 20th century, or the U.S. sharing with China the predominance in the Western Pacific that America has enjoyed since World War II?
  • At this point, the established script for discussion of policy challenges calls for a pivot to a new strategy (or at least slogan), with a short to-do list that promises peaceful and prosperous relations with China. Shoehorning this challenge into that template would demonstrate only one thing: a failure to understand the central point I’m trying to make
  • What strategists need most at the moment is not a new strategy, but a long pause for reflection. If the tectonic shift caused by China’s rise poses a challenge of genuinely Thucydidean proportions, declarations about “rebalancing,” or revitalizing “engage and hedge,” or presidential hopefuls’ calls for more “muscular” or “robust” variants of the same, amount to little more than aspirin treating cancer. Future historians will compare such assertions to the reveries of British, German, and Russian leaders as they sleepwalked into 1914
  • The rise of a 5,000-year-old civilization with 1.3 billion people is not a problem to be fixed. It is a condition—a chronic condition that will have to be managed over a generation
  • Success will require not just a new slogan, more frequent summits of presidents, and additional meetings of departmental working groups. Managing this relationship without war will demand sustained attention, week by week, at the highest level in both countries. It will entail a depth of mutual understanding not seen since the Henry Kissinger-Zhou Enlai conversations in the 1970s. Most significantly, it will mean more radical changes in attitudes and actions, by leaders and publics alike, than anyone has yet imagined.
Javier E

The Social States Of America - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • what our state lines might look like if we drew them based on who actually talks with each other, at least according to cell phone data gathered by MIT
manhefnawi

Distinctive brain pattern helps habits form | Big Think - 0 views

  • MIT neuroscientists have now found that certain neurons in the brain are responsible for marking the beginning and end of these chunked units of behavior. These neurons, located in a brain region highly involved in habit formation, fire at the outset of a learned routine, go quiet while it is carried out, then fire again once the routine has ended.
Javier E

Johan Rockström: Presenting a framework for preserving Earth's resilience | M... - 0 views

  • Sixty-seven percent of vertebrate wildlife species are projected to be extinct by 2020
  • if we avoid transgressing planetary boundaries we can maintain a semblance of the biosphere balance we enjoyed during the Holocene epoch of Earth history.
  • “The Holocene is the only equilibrium of the planet that we know for certain can support humanity as we know it,” he said. “We have no evidence to suggest that we could morally and ethically support 9.5 billion co-citizens with a minimum standard of good lives [outside of Holocene conditions].”
  • The use of renewable energy sources is doubling every 5.4 years; continuing that rate of growth is a key strategy to phase out the use of fossil fuels and achieve full decarbonization of the economy by 2050, according to Rockström.
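Rockström's doubling claim can be turned into a quick sanity check. The 2017 base year below is an assumption (roughly the date of the talk), not something stated in the excerpt:

```python
# Sanity check on the renewables trajectory described above.
# Assumption: take 2017 as the base year for the 5.4-year doubling time.
doubling_time = 5.4  # years per doubling of renewable energy use
base_year, target_year = 2017, 2050

doublings = (target_year - base_year) / doubling_time
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings by {target_year}, "
      f"about {growth_factor:.0f}x current renewable output")
```

On these assumptions, sustaining the trend means about six doublings, a factor of roughly 70, before mid-century, which conveys the scale of the buildout full decarbonization would require.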
Javier E

A Harvard Professor Doubles Down: If You Take Epstein's Money, Do It in Secret - The Ne... - 0 views

  • Mr. Lessig signed a letter in support of Mr. Ito, and then published a 3,500-word essay on the subject. He argued that in an ideal world, no institution should take money from people like Mr. Epstein, but that in reality, much of the money that props up universities and other elite institutions comes from troubling sources.
  • Mr. Lessig suggested that donors to places like the Massachusetts Institute of Technology could be organized in four buckets, ranging from “people like Tom Hanks or Taylor Swift — people who are wealthy and whose wealth comes from nothing but doing good” — to “entities and people whose wealth comes from clearly wrongful or harmful or immoral behavior.”
  • Mr. Lessig, who noted that he was a childhood victim of sexual abuse, also argued that the act of veiling Mr. Epstein’s contributions was good, because it avoided “whitewashing” his reputation.
  • “Everyone seems to treat it as if the anonymity and secrecy around Epstein’s gift are a measure of some kind of moral failing,” Mr. Lessig wrote. “I see it as exactly the opposite.”
Javier E

Coronavirus could overwhelm hospitals in small cities and rural areas, data shows - Was... - 0 views

  • If a health official wanted to know how many intensive-care beds there are in the United States, Jeremy Kahn would be the person to ask. The ICU physician and researcher at the University of Pittsburgh earns a living studying critical-care resources in U.S. hospitals.
  • Yet even Kahn can’t give a definitive answer. His best estimate is based on Medicare data gathered three years ago
  • “People are sort of in disbelief that even I don’t know how many ICU beds exist in each hospital in the United States,” he said, noting that reporting varies hospital to hospital, state to state. “And I’m sort of like, ‘Yep, the research community has been dealing with this problem for years.’ ”
  • But the pandemic has revealed a dearth of reliable data about the key parts of the nation’s health-care system now under assault. That leaves decision-makers operating in the dark
  • Given the limitations, The Washington Post assembled data to analyze the availability of the critical-care resources needed to treat severely ill patients who require extended hospitalization. The Post conducted a stress test of sorts on available resources, which revealed a patchwork of possible preparedness shortcomings in cities and towns where the full force of the virus has yet to hit and where people may not be following isolation and social distancing orders.
  • More than half of the nation’s population lives in areas that are less prepared than New York City, where in early April officials scrambled to add more ICU beds and find extra ventilators amid a surge of covid-19 patients.
  • To compare available resources across the country, The Post examined a year-long scenario in which the coronavirus would sicken 20 percent of U.S. adults, and about 20 percent of those infected would require hospitalization
  • Under that scenario, about 11 million adults would need hospitalization for nearly two weeks, and almost 2.5 million would require intensive care.
  • This level of hospitalization is considered by Harvard researchers to be a conservative outcome for the pandemic, while others have described it as severe.
  • about 76 million people, or 30 percent of the nation’s adult population, live in areas where the number of available ICU beds would not be enough to satisfy the demand of virus patients. The scenario for ventilator availability is even more dire: Nearly half of the adult population lives in regions where the demand would exceed the supply.
  • “We need to know where our weapons are. We need to coordinate all of that,” said Retsef Levi, a Massachusetts Institute of Technology professor leading a health-care data initiative called the COVID-19 Policy Alliance. “This is a war.”
  • Kahn likened the task of evaluating the current readiness of the U.S. health-care system to peering into a dark room.
  • By The Post’s analysis, the general Seattle region would need all of its available ICU beds — plus a 15 percent increase — to handle an outbreak in which 20 percent of the population is infected with the coronavirus and 20 percent of those people need hospitalization. But the demand for ICU beds could be lower because the curve of infections in Washington appears to be flattening, according to officials.
  • Bergamo, as the ground zero of the Italian outbreak, was beset by ICU bed and ventilator shortages. “We think Italy may be the most comparable area to the United States, at this point, for a variety of reasons,” Vice President Pence said April 1 in a CNN interview.
  • The MIT research group, the COVID-19 Policy Alliance, has mapped high-risk areas in the United States where sudden spikes could inundate hospitals as the surge in northern Italy did.
  • In their U.S. analysis, MIT researchers considered several risk factors, including elderly population, high blood pressure and obesity.
  • The takeaway, the researchers said, is that across the nation, “micro-geographies” of individual Zip codes or small towns have the potential to generate surges of covid-19 patients that could overwhelm even the most-prepared hospitals.
  • Levi said nursing home populations should be prioritized for virus testing across the country, because outbreaks in such close quarters can rapidly sicken dozens of people, who then flood into area hospitals.
  • “We’re outside of it, and we’re all looking through different keyholes and seeing different aspects of it,” he said. “But there’s no way to just open the door and turn on the lights, because of how fragmented the data are. And that is a really, really depressing thing at all times, let alone during a pandemic, that we don’t have an ability to look at these things.”
  • The Society of Critical Care Medicine estimates that there are nearly 29,000 critical-care specialized physicians like Johnson who are trained to work in ICUs in the United States. Yet about half of all acute-care hospitals have no specialists dedicated to their ICUs. Because of the demands of treating covid-19 patients, the lack of dedicated physicians “will be strongly felt” through a lack of high-quality care, the society said in a statement.
  • The society also projects that the nurses, respiratory therapists and physician assistants specially qualified to work with ICU patients may be in short supply as patient demand increases and the ranks of medical workers are thinned by illness and quarantine.
  • What has the hospital been doing as a prevention epicenter in the four years between the Ebola epidemic and the emergence of the coronavirus pandemic?
  • “Drilling and preparing for it,” said Jorge Salinas, an infectious-disease physician working on the effort. “You may be preparing and training for 10 years and nothing happens. But if you don’t do that, when these pandemics do occur, you will not be prepared.”
  • Salinas said the pandemic has exposed the long-standing flaws in the nation’s “individualistic” health-care system, where hospitals look out for themselves. Electronic health-monitoring systems vary hospital to hospital. Supply tallies are kept in-house and generally not shared. To counter this in Iowa, he said, all hospitals have begun sharing daily information with state officials.
  • “The name of the game is solidarity,” Salinas said. “If we try to be individualists, we will fail.”
delgadool

Election Day Voting in 2020 Took Longer in America's Poorest Neighborhoods - The New Yo... - 0 views

  • Casting a vote typically took longer in poorer, less white neighborhoods than it did in whiter and more affluent ones.
  • This analysis found that voters in the very poorest neighborhoods in the country typically took longer to vote, and they were also modestly more likely to experience voting times of an hour or more.
  • Most Election Day voters spent 20 minutes or less voting. But those in overwhelmingly nonwhite neighborhoods were more likely to experience the longest voting times.
  • In a 2014 report, a bipartisan commission on election administration concluded that no one should have to wait longer than 30 minutes to vote, finding that wait times longer than that are “an indication that something is amiss and that corrective measures should be deployed.”
  • The S.P.A.E. found that 14 percent of Election Day voters waited more than 30 minutes to vote, an increase from 2016.
Javier E

What History Tells Us About the Accelerating AI Revolution - CIO Journal. - WSJ - 0 views

  • What History Tells Us About the Coming AI Revolution, by Oxford professor Carl Benedikt Frey, is based on his 2019 book The Technology Trap.
  • a 2017 Pew Research survey found that three quarters of Americans expressed serious concerns about AI and automation, and just over a third believe that their children will be better off financially than they were.
  • “Many of the trends we see today, such as the disappearance of middle-income jobs, stagnant wages and growing inequality were also features of the Industrial Revolution,”
  • “We are at the brink of a technological revolution that promises not just to fundamentally alter the structure of our economy, but also to reshape the social fabric more broadly. History tells us anxiety tends to accompany rapid technological change, especially when technology takes the form of capital which threatens people’s jobs.” 
  • Over the past two centuries we’ve learned that there’s a significant time lag between the broad acceptance of major new transformative technologies and their long-term economic and productivity growth.
  • In their initial phase, transformative technologies require massive complementary investments, such as business process redesign, co-invention of new products and business models, and the re-skilling of the workforce.  The more transformative the technologies, the longer it takes them to reach the harvesting phase
  • The time lags between the investment and harvesting phases are typically quite long.
  • While James Watt’s steam engine ushered in the Industrial Revolution in the 1780s, “British factories were for the most part powered by water up until the 1840s.”
  • Similarly, productivity growth did not increase until 40 years after the introduction of electric power in the early 1880s.  
  • In their early stages, the extensive investments required to embrace a GPT like AI will generally reduce productivity growth.
  • “the short run consequences of rapid technological change can be devastating for working people, especially when technology takes the form of capital which substitutes for labor.
  • In the long run, the Industrial Revolution led to a rising standard of living, improved health, and many other benefits.  “Yet in the short run, the lives of working people got nastier, more brutish, and shorter. And what economists regard as ‘the short run’ was a lifetime, for some,”
  • A 2017 McKinsey study concluded that while a growing technology-based economy will create a significant number of new occupations, as has been the case in the past, “the transitions will be very challenging - matching or even exceeding the scale of shifts out of agriculture and manufacturing we have seen in the past.” 
  • The US and other industrial economies have seen a remarkable rise in the polarization of job opportunities and wage inequality by educational attainment, with the earnings of the most-educated increasing, and the earnings of the least-educated falling in real terms
  • Since the 1980s, the earnings of those with a four year college degree have risen by 40% to 60%, while the earnings of those with a high school education or less have fallen among men and barely changed among women.
  • When upskilling is lagging behind, entire social groups might end up being excluded from the growth engine.”
brookegoodman

MIT elects first black woman student body president in its 159-year history - CNN - 0 views

  • Danielle Geathers and running mate Yu Jing Chen won the student government election earlier this month.
  • "Although some people think it is just a figurehead role, figureheads can matter in terms of people seeing themselves in terms of representation," she said. "Seeing yourself at a college is kind of an important part of the admissions process."
  • She said that a lot of the work of the student government takes place in meetings with administrators, so she hopes to make the group more visible on campus.
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 0 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • are now using its existence as a pretext to dismiss accurate information
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
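  The prediction-driven learning described in the excerpt above can be illustrated, very loosely, with a toy next-character model. This is a hypothetical sketch in plain Python for intuition only; it bears no resemblance to OpenAI's actual training code, which adjusts neural-network weights rather than counting character pairs.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it in the text."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def predict_next(model, char):
    """Return the most frequently observed next character, or None if unseen."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

corpus = "the cat sat on the mat. the end."
model = train_bigram(corpus)
print(predict_next(model, "t"))  # 'h' -- "th" is the commonest pair after 't' here
```

  The sketch captures only the shape of the idea: the more text the model sees, the better its frequency estimates, and hence its predictions, become. A real language model does the analogous thing over billions of weighted parameters instead of a lookup of counts.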
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
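  The memorize-then-learn pivot Millière describes in the excerpt above can be caricatured as the contrast between a lookup table and a learned rule. This is an illustrative toy only, with hypothetical function names; the experiment in the excerpt involved a small transformer, not explicit code like this.

```python
# Training set: every addition problem with operands 0-4, stored as answers.
train = {(a, b): a + b for a in range(5) for b in range(5)}

def memorizer(a, b):
    """Pure memorization: can only answer problems it has literally seen."""
    return train.get((a, b))

def learned_rule(a, b):
    """The underlying concept of addition, once actually learned."""
    return a + b

print(memorizer(2, 2))     # 4: inside the training set
print(memorizer(7, 9))     # None: memorization breaks off the training set
print(learned_rule(7, 9))  # 16: the rule generalizes to unseen inputs
```

  The point of the analogy: memorization is cheaper, so a lazy learner tries it first, and only when its predictive power runs out does the harder, generalizing strategy pay off.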
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • . “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Steven Pinker's five-point plan to save Harvard from itself - 0 views

  • The fury was white-hot. Harvard is now the place where using the wrong pronoun is a hanging offense but calling for another Holocaust depends on context. Gay was excoriated not only by conservative politicians but by liberal alumni, donors, and faculty, by pundits across the spectrum, even by a White House spokesperson and by the second gentleman of the United States. Petitions demanding her resignation have circulated in Congress, X, and factions of the Harvard community, and at the time of this writing, a prediction market is posting 1.2:1 odds that she will be ousted by the end of the year.
  • I don’t believe that firing Gay is the appropriate response to the fiasco. It wasn’t just Gay who fumbled the genocide question but two other elite university presidents — Sally Kornbluth of MIT (my former employer) and Elizabeth Magill of the University of Pennsylvania, who resigned following her testimony — which suggests that the problem with Gay’s performance betrays a deeper problem in American universities.
  • Gay interpreted the question not at face value but as pertaining to whether Harvard students who had brandished slogans like “Globalize the intifada” and “From the river to the sea,” which many people interpret as tantamount to a call for genocide, could be prosecuted under Harvard’s policies. Though the slogans are simplistic and reprehensible, they are not calls for genocide in so many words. So even if a university could punish direct calls for genocide as some form of harassment, it might justifiably choose not to prosecute students for an interpretation of their words they did not intend.
  • Nor can a university with a commitment to academic freedom prohibit all calls for political violence. That would require it to punish, say, students who express support for the invasion of Gaza knowing that it must result in the deaths of thousands of civilians. Thus Gay was correct in saying that students’ political slogans are not punishable by Harvard’s rules on harassment and bullying unless they cross over into intimidation, personal threats, or direct incitement of violence.
  • Gay was correct yet again in replying to Stefanik’s insistent demand, “What action has been taken against students who are harassing Jews on campus?” by noting that no action can be taken until an investigation has been completed. Harvard should not mete out summary justice like the Queen of Hearts in “Alice in Wonderland”: Sentence first, verdict afterward.
  • The real problem with Gay’s testimony was that she could not clearly and credibly invoke those principles because they either have never been explicitly adopted by Harvard or they have been flagrantly flouted in the past (as Stefanik was quick to point out)
  • Harvard has persecuted scholars who said there are two sexes, or who signed an amicus brief taking the conservative side in a Supreme Court deliberation. It has retracted acceptances from students who were outed by jealous peers for having used racist trash talk on social media when they were teens. Harvard’s subzero FIRE rating reveals many other punishments of politically incorrect peccadillos.
  • Institutional neutrality. A university does not need a foreign policy, and it does not need to issue pronouncements on the controversies and events of the day. It is a forum for debate, not a protagonist in debates. When a university takes a public stand, it either puts words in the mouths of faculty and students who can speak for themselves or unfairly pits them against their own employer.
  • In the wake of this debacle, the natural defense mechanism of a modern university is to expand the category of forbidden speech to include antisemitism (and as night follows day, Islamophobia). Bad idea
  • Deplorable speech should be refuted, not criminalized. Outlawing hate speech would only result in students calling anything they didn’t want to hear “hate speech.” Even the apparent no-brainer of prohibiting calls for genocide would backfire. Trans activists would say that opponents of transgender women in women’s sports were advocating genocide, and Palestinian activists would use the ban to keep Israeli officials from speaking on campus.
  • For universities to have a leg to stand on when they try to stand on principle, they must embark on a long-term plan to undo the damage they have inflicted on themselves. This requires five commitments.
  • Free speech. Universities should adopt a clear and conspicuous policy on academic freedom. It might start with the First Amendment, which binds public universities and which has been refined over the decades with carefully justified exceptions.
  • So for the president of Harvard to suddenly come out as a born-again free-speech absolutist, disapproving of what genocidaires say but defending to the death their right to say it, struck onlookers as disingenuous or worse.
  • Since universities are institutions with a mission of research and education, they are also entitled to controls on speech that are necessary to fulfill that mission. These include standards of quality and relevance: You can’t teach anything you want at Harvard, just like you can’t publish anything you want in The Boston Globe. And it includes an environment conducive to learning.
  • The events of this autumn also show that university pronouncements are an invitation to rancor and distraction. Inevitably there will be constituencies who feel a statement is too strong, too weak, too late, or wrongheaded.
  • Nonviolence.
  • Universities should not indulge acts of vandalism, trespassing, and extortion. Free speech does not include a heckler’s veto, which blocks the speech of others. These goon tactics also violate the deepest value of a university, which is that opinions are advanced by reason and persuasion, not by force
  • Viewpoint diversity. Universities have become intellectual and political monocultures. Seventy-seven percent of the professors in Harvard’s Faculty of Arts and Sciences describe themselves as liberal, and fewer than 3 percent as conservative. Many university programs have been monopolized by extreme ideologies, such as the conspiracy theory that the world’s problems are the deliberate designs of a white heterosexual male colonialist oppressor class.
  • Vast regions in the landscape of ideas are no-go zones, and dissenting ideas are greeted with incomprehension, outrage, and censorship.
  • The entrenchment of dogma is a hazard of policies that hire and promote on the say-so of faculty backed by peer evaluations. Though intended to protect departments from outside interference, the policies can devolve into a network of like-minded cronies conferring prestige on each other. Universities should incentivize departments to diversify their ideologies, and they should find ways of opening up their programs to sanity checks from the world outside.
  • Disempowering DEI. Many of the assaults on academic freedom (not to mention common sense) come from a burgeoning bureaucracy that calls itself diversity, equity, and inclusion while enforcing a uniformity of opinion, a hierarchy of victim groups, and the exclusion of freethinkers. Often hastily appointed by deans as expiation for some gaffe or outrage, these officers stealthily implement policies that were never approved in faculty deliberations or by university leaders willing to take responsibility for them.
  • An infamous example is the freshman training sessions that terrify students with warnings of all the ways they can be racist (such as asking, “Where are you from?”). Another is the mandatory diversity statements for job applicants, which purge the next generation of scholars of anyone who isn’t a woke ideologue or a skilled liar. And since overt bigotry is in fact rare in elite universities, bureaucrats whose job depends on rooting out instances of it are incentivized to hone their Rorschach skills to discern ever-more-subtle forms of “systemic” or “implicit” bias.
  • Universities should stanch the flood of DEI officials, expose their policies to the light of day, and repeal the ones that cannot be publicly justified.
  • A fivefold way of free speech, institutional neutrality, nonviolence, viewpoint diversity, and DEI disempowerment will not be a quick fix for universities. But it’s necessary to reverse their tanking credibility and better than the alternatives of firing the coach or deepening the hole they have dug for themselves.
Javier E

Germany isn't turning its back on NATO. It only looks that way. - The Washington Post - 0 views

  • What some commentators abroad see as appeasement, cowardice and the triumph of economic interests over security concerns, many Germans see as a grown-up, sensible and conciliatory approach to foreign policy. (A recent poll found that 59 percent of Germans supported the decision not to send arms to Kyiv.) Germans view themselves as enlightened, having moved beyond power politics, the national interest and militarism.
  • The idea of deterrence, or of the military being an element of geopolitical power needed for strong diplomacy, is foreign to most German citizens.
  • They read the 20th century, and 1933-1945 in particular, as a lesson in the evils of geopolitics and militarism, and they internalized the post-1989 “end of history” narrative better than anyone else.
  • In a debate about the Russia-NATO standoff carried by my local German radio station, the dominant view was that de-escalation and diplomacy are what’s needed, with one listener commenting, “I am against weapons in general,” and another warning that nobody should talk about war since, “If you talk about something, it becomes possible.”
  • After the end of the Cold War, Germany spent decades insulated from the harsh world of power politics; most Germans believed that countries were converging toward a system that marginalized military power and favored economic power and legal proceedings. Now that great power competition and military conflicts are back, Germany does not know what to do.
  • Many decision-makers and voters in Germany remain deeply committed to the hope that all conflicts can be solved through dialogue under international law and international organizations such as the United Nations — as if all conflict resulted from misunderstandings instead of competing interests.
  • In a 2020 poll, only 24 percent of Germans said that war could sometimes be necessary to achieve justice, while over 51 percent said war is never necessary.
  • Nonetheless, an increasing number of Germans are beginning to argue that one might also draw a different lesson from history — such as that it is not a good idea to try to appease aggressors.