History Readings: group items tagged “technical”

Javier E

Zuckerberg's refusal to testify branded 'absolutely astonishing' | Technology | The Gua... - 0 views

  • Testifying alongside Wylie was Paul-Olivier Dehaye, the co-founder of personaldata.io, who has been fighting to force the social network to apply European data protection law. Dehaye revealed that Facebook repeatedly tried to argue it was exempt from fulfilling “subject access” requirements, which allow individuals to see the data that companies hold about them, because it would be too expensive to comply.
  • “They’re invoking exceptions … involving disproportionate effort,” Dehaye said. “They’re saying it’s too much effort to give me access to my data. I find that quite intriguing, because they’re making a technical and a business argument as to why I shouldn’t have access to this data.
  • “In the technical argument they’re shooting themselves in the foot, because they’re saying they’re so big the cost would be too large to provide me data.”
  • ...1 more annotation...
  • In effect, Dehaye said, Facebook told him it was too big to regulate. “They’re really arguing that they’re too big to comply with data protection law, the cost is too high. Which is mind-boggling, that they wouldn’t see the direction they’re going there. Do they really want to make this argument?”
runlai_jiang

Boko Haram's Violent Push Puts New Heat on Nigerian President - WSJ - 0 views

  • Boko Haram’s abduction of more than 100 Nigerian schoolgirls last week is sending political shock waves through a key U.S. counterterrorism ally in Africa as President Muhammadu Buhari weighs whether to seek re-election.
  • won 2015 elections on a campaign to defeat the Islamist insurgency and liberate more than 200 Chibok schoolgirls, whose 2014 abduction prompted a global outcry.
  • The kidnapping of 110 girls from the Dapchi Government Girls Science and Technical College in northeastern Yobe state has emboldened his critics, who draw parallels to the Chibok abductions and the potential political fallout in Africa’s most-populous nation and biggest economy.
  • ...10 more annotations...
  • None of the Dapchi girls has been found. Nigeria’s government on Tuesday said it had set up a probe led by senior security officials to investigate the abductions.
  • The U.S. State Department last week condemned the attack and emphasized its support for Nigeria’s efforts to counter terror groups.
  • “The abduction could have an equally detrimental effect on Buhari’s electoral fortunes as Chibok had on his predecessor.”
  • But revelations after the abductions that the government recently withdrew security forces stationed to protect the town, saying they were needed elsewhere, have deepened anger in Dapchi.
  • “The security situation is in a shambles—Boko Haram are calling the shots,” said Maria Urgbashi, who runs a food stall close to the Hilton hotel. Another trader, Umar Danjuma, agreed: “Buhari has done his best, but I don’t think his best is good enough to take Nigeria out of this present hardship.”
  • remains the front-runner if he is healthy and popular enough to secure his governing APC party’s nomination.
  • The APC—which on Monday opened its annual congress—is deeply divided, as is the opposition PDP
  • The APC has the political and economic advantages of incumbency, and the president isn’t seen as personally corrupt.
  • Buhari has governed more like a king than a leader at the center of a sophisticated political structure,”
  • The insurgency that Buhari has repeatedly claimed to have technically defeated appears to be stubbornly resilient. After losing hundreds of square miles of territory to government forces, the jihadists have increased attacks in the past year, sending more than 90 children strapped with bombs into public places.
anonymous

John Jacob Astor's Fortune Was Built on Opium - HISTORY - 0 views

  • Astor’s enormous fortune was made in part by sneaking opium into China against imperial orders. The resulting riches made him one of the world’s most powerful merchants—and also helped create the world’s first widespread opioid epidemic.
  • That damage took the form of drugs—namely, opium. Since there wasn’t much demand in China for western goods, England and the United States made up for it by providing something that was. They used the profits from opium to purchase tea, pottery and fabrics that they’d resell back home. This also allowed merchants to get around a big technical challenge: an international shortage of silver, the only currency the Chinese would take.
  • Opium was technically banned in China, but merchants like Astor found a way around the ban. Large ships containing gigantic hauls of opium met small vessels outside of legitimate ports and swiftly unloaded their illicit cargo. Bribery was common and officials who had taken bribes looked the other way instead of enforcing anti-opium laws.
  • ...3 more annotations...
  • By 1890, a full 10 percent of China’s population smoked opium. In a bid to curb opium use, imperial China banned producing or consuming the drug, even executing dealers and forcing users to wear heavy wooden collars and endure beatings.
  • Astor wasn’t the only American to make his fortune in part through opium smuggling: Warren Delano, Franklin Delano Roosevelt’s father, made millions engaging in what he called a “fair, honorable and legitimate” trade.
  • Astor’s reputation didn’t suffer from the trade—though it was illegal in China, Astor conducted his drug deals openly. But by participating in the opium trade in the early 1800s, he helped create a system that fueled addiction worldwide—and made millions while he was at it
Javier E

American and German Approaches to Energy-Climate Policy - 0 views

  • The challenges and concerns that have arisen in Germany should not be taken as indicators that the Energiewende is a failed policy, or, more specifically, used to dismiss the importance of renewable energy.
  • By 2022, it is expected that Germany will have 220 GW of total capacity, of which 90 GW will be from conventional sources and 130 GW from renewables, with wind and solar accounting for 90 percent of the added renewable power capacity.
  • German policymakers also point to robust investment in the country’s energy sector, job creation, a burst of renewable energy technology innovation and Germany’s status as a global leader in the renewable energy sector as positive outcomes of the Energiewende.
  • ...17 more annotations...
  • U.S. utility industry representatives expressed skepticism regarding the efficacy and viability of the Energiewende, reflected in the following issues and questions raised during the meeting:
  • Cost Impact on Households. Would rising household rates evidenced in Germany be acceptable in the United States?
  • Implications for the Economy and Industrial Competitiveness. How do the costs of renewable energy policy affect long-term economic growth and competitiveness?
  • Impact on utilities. Will traditional utilities be driven out of business? Or are new business models emerging?
  • Fairness and equity. Would a policy in which one sector (households) bears most of the costs be politically or socially viable in the United States? 
  • Technical barriers. How is Germany overcoming technical challenges in integrating large shares of variable renewable energy, including impacts on neighboring countries? 
  • we can see that Germany’s Energiewende provides several useful lessons for the U.S. as it thinks strategically about the future of its electricity industry.
  • “With the Renewable Energy Act that we created in 2000, we financed a learning curve that was expensive. But the good news is that we have learned in only 13 years to produce electricity with wind power and solar facilities at the same price as if we were to build new coal or gas power stations.”
  • In spite of high costs, and despite the realization that elements of the Energiewende need to be reworked, Germany has rolled out a sweeping and effective suite of policies and legislation successfully, supported by a remarkable political and social consensus.
  • Gaining a consensus on a clear policy direction is critically important and should precede and inform debates about which specific policy mechanisms to implement and how
  • Monitoring and course corrections are required, with solutions tailored to local conditions. Policymakers should be prepared not only to monitor continually the effectiveness of policy, but also to alter the policy as technology and market conditions change. Importantly, fine-tuning policy or market design should not be viewed as a failure.
  • Setting objectives and developing national policy are important. If a country can agree politically on fundamental objectives, designing and implementing effective policy mechanisms is easier
  • electricity consumption of the average American household is significantly greater than that of the average German family of four, which uses about 3,500 kWh/year, while the U.S. average is 10,800 kWh/year,
  • making a U.S. ratepayer much more sensitive to price increases.
  • despite the Energiewende’s costs, German households and politicians remain ideologically committed to the goal of emissions reduction and highly tolerant of the associated costs
  • The fact that alarm over climate change and its impacts has not penetrated American politics or society in the same way may be the most significant cultural difference between the two countries and may explain American disbelief that Germans could remain supportive of an increasingly costly policy.
  • Germany has demonstrated that high levels of renewable energy penetration are possible, with limited to no impact on reliability and system stability
Javier E

Coronavirus has not suspended politics - it has revealed the nature of power ... - 0 views

  • We keep hearing that this is a war. Is it really? What helps to give the current crisis its wartime feel is the apparent absence of normal political argument.
  • this is not the suspension of politics. It is the stripping away of one layer of political life to reveal something more raw underneath
  • In a democracy we tend to think of politics as a contest between different parties for our support. We focus on the who and the what of political life: who is after our votes, what they are offering us, who stands to benefit. We see elections as the way to settle these arguments
  • ...23 more annotations...
  • But the bigger questions in any democracy are always about the how: how will governments exercise the extraordinary powers we give them? And how will we respond when they do?
  • These are the questions that have always preoccupied political theorists. But now they are not so theoretical.
  • As the current crisis shows, the primary fact that underpins political existence is that some people get to tell others what to do.
  • At the heart of all modern politics is a trade-off between personal liberty and collective choice. This is the Faustian bargain identified by the philosopher Thomas Hobbes in the middle of the 17th century, when the country was being torn apart by a real civil war.
  • As Hobbes knew, to exercise political rule is to have the power of life and death over citizens. The only reason we would possibly give anyone that power is because we believe it is the price we pay for our collective safety. But it also means that we are entrusting life-and-death decisions to people we cannot ultimately control.
  • The primary risk is that those on the receiving end refuse to do what they are told. At that point, there are only two choices. Either people are forced to obey, using the coercive powers the state has at its disposal. Or politics breaks down altogether, which Hobbes argued was the outcome we should fear most of all.
  • Autocratic regimes such as China also find it hard to face up to crises until they have to – and, unlike democracies, they can suppress the bad news for longer if it suits them. But when action becomes unavoidable, they can go further. The Chinese lockdown succeeded in containing the disease through ruthless pre-emption.
  • The rawness of these choices is usually obscured by the democratic imperative to seek consensus. That has not gone away. The government is doing all it can to dress up its decisions in the language of commonsense advice.
  • But as the experience of other European countries shows, as the crisis deepens the stark realities become clearer
  • This crisis has revealed some other hard truths. National governments really matter, and it really matters which one you happen to find yourself under.
  • In a democracy, we have the luxury of waiting for the next election to punish political leaders for their mistakes. But that is scant consolation when matters of basic survival are at stake. Anyway, it’s not much of a punishment, relatively speaking. They might lose their jobs, though few politicians wind up destitute. We might lose our lives.
  • for now, we are at the mercy of our national leaders. That is something else Hobbes warned about: there is no avoiding the element of arbitrariness at the heart of all politics. It is the arbitrariness of individual political judgment.
  • Under a lockdown, democracies reveal what they have in common with other political regimes: here too politics is ultimately about power and order. But we are also getting to see some of the fundamental differences. It is not that democracies are nicer, kinder, gentler places. They may try to be, but in the end that doesn’t last. Democracies do, though, find it harder to make the really tough choices.
  • We wait until we have no choice and then we adapt. That means democracies are always going to start off behind the curve of a disease like this one, though some are better at playing catch-up than others.
  • Though the pandemic is a global phenomenon, and is being experienced similarly in many different places, the impact of the disease is greatly shaped by decisions taken by individual governments.
  • Some democracies have managed to adapt faster: in South Korea the disease is being tamed by extensive tracing and widespread surveillance of possible carriers. But in that case, the regime had recent experience to draw on in its handling of the Mers outbreak of 2015, which also shaped the collective memory of its citizens
  • It is easier to adapt when you have adapted already. It is much harder when you are making it up as you go along
  • In recent years, it has sometimes appeared that global politics is simply a choice between rival forms of technocracy
  • In China, it is a government of engineers backed up by a one-party state. In the west, it is the rule of economists and central bankers, operating within the constraints of a democratic system
  • This creates the impression that the real choices are technical judgments about how to run vast, complex economic and social systems.
  • But in the last few weeks another reality has pushed through. The ultimate judgments are about how to use coercive power.
  • These aren’t simply technical questions. Some arbitrariness is unavoidable. And the contest in the exercise of that power between democratic adaptability and autocratic ruthlessness will shape all of our futures.
  • our political world is still one Hobbes would recognise
andrespardo

Coronavirus mask guidance is endangering US health workers, experts say | US news | The... - 0 views

  • Coronavirus mask guidance is endangering US health workers, experts say
  • With crucial protective gear in short supply, federal authorities are saying health workers can wear lower-grade surgical masks while treating Covid-19 patients – but growing evidence suggests the practice is putting workers in jeopardy.
  • But scholars, not-for-profit leaders and former regulators in the specialized field of occupational safety say relying on surgical masks – which are considerably less protective than N95 respirators – is almost certainly fueling illness among frontline health workers, who probably make up about 11% of all known Covid-19 cases.
  • ...14 more annotations...
  • The allowance for surgical masks made more sense when scientists initially thought the virus was spread by large droplets. But a growing body of research shows that it is spread by minuscule viral particles that can linger in the air as long as 16 hours.
  • A properly fitted N95 respirator will block 95% of tiny air particles – down to 0.3 micron in diameter, which are the hardest to catch – from reaching the wearer’s face. But surgical masks, designed to protect patients from a surgeon’s respiratory droplets, aren’t effective at blocking particles smaller than 100 microns, according to the mask maker 3M. A Covid-19 particle is smaller than 0.1 micron, according to South Korean researchers, and can pass through a surgical mask.
  • said Katie Scott, an RN at the hospital and vice-president of the Michigan Nurses Association. Employees who otherwise treat Covid-19 patients receive surgical masks.
  • A 2013 Chinese study in the American Journal of Respiratory and Critical Care Medicine found that twice as many health workers (17%) contracted a respiratory illness if they wore only a surgical mask while treating sick patients, compared with 7% of those who continuously used an N95.
  • Earlier this month, the national Teamsters Union reported that 64% of its healthcare worker membership – which includes people working in nursing homes, hospitals and other medical facilities – could not get N95 masks.
  • The CDC’s recent advice on surgical masks contrasts with another CDC web page that says surgical masks do “NOT provide the wearer with a reliable level of protection from inhaling smaller airborne particles and is not considered respiratory protection”.
  • That matches CDC protocol, but leaves nurses like Scott – who has read the research on surgical masks versus N95s – feeling exposed.
  • At Michigan Medicine, employees are not allowed to bring in their own protective equipment, according to a complaint the nurses’ union filed with the Michigan Occupational Safety and Hazard Administration. Scott said friends and family have mailed her personal protective equipment (PPE), including N95 masks. It sits at home while she cares for patients.
  • “To think I’m going to work and am leaving this mask at home on my kitchen table, because the employer won’t let me wear it,”
  • News reports from Kentucky to Florida to California have documented nurses facing retaliation or pressure to step down when they’ve brought their own N95 respirators.
  • In New York, the center of the US’s outbreak, nurses across the state report receiving surgical masks, not N95s, to wear when treating Covid-19 patients, according to a court affidavit submitted by Lisa Baum, the lead occupational health and safety representative for the New York State Nurses Association (NYSNA).
  • White House to invoke the Defense Production Act, a Korean war-era law that allows the federal government, in an emergency, to direct private business in the production and distribution of goods.
  • provide health care workers with protective equipment, including N95 masks, when they interact with patients suspected to have Covid-19.
  • “Nurses are not afraid to care for our patients if we have the right protections,” said Bonnie Castillo, the executive director of National Nurses United, “but we’re not martyrs sacrificing our lives because our government and our employers didn’t do their job.”
Javier E

An Exit Interview With Richard Posner, Judicial Provocateur - The New York Times - 0 views

  • He called his approach to judging pragmatic. His critics called it lawless.
  • “I pay very little attention to legal rules, statutes, constitutional provisions,” Judge Posner said. “A case is just a dispute. The first thing you do is ask yourself — forget about the law — what is a sensible resolution of this dispute?”
  • The next thing, he said, was to see if a recent Supreme Court precedent or some other legal obstacle stood in the way of ruling in favor of that sensible resolution. “And the answer is that’s actually rarely the case,” he said. “When you have a Supreme Court case or something similar, they’re often extremely easy to get around.”
  • ...9 more annotations...
  • I asked him about his critics, and he said they fell into two camps.
  • The immediate reason for his retirement was less abstract, he said. He had become concerned with the plight of litigants who represented themselves in civil cases, often filing handwritten appeals. Their grievances were real, he said, but the legal system was treating them impatiently, dismissing their cases over technical matters.
  • “A lot of the people who say that are sincere,” he said. “That’s their conception of law. That’s fine.”
  • Some, he said, simply have a different view of the proper role of the judge. “There is a very strong formalist tradition in the law,” he said, summarizing it as: “Judges are simply applying rules, and the rules come from somewhere else, like the Constitution, and the Constitution is sacred. And statutes, unless they’re unconstitutional, are sacred also.”
  • He said he had less sympathy for the second camp. “There are others who are just, you know, reactionary beasts,” he said. “They’re reactionary beasts because they want to manipulate the statutes and the Constitution in their own way.”
  • “low level of intelligence,” he said. “I gradually began to realize that this wasn’t right, what we were doing.”
  • Judge Posner said he hoped to work with groups concerned with prisoners’ rights, with a law school clinic and with law firms, to bring attention and aid to people too poor to afford lawyers.
  • In one of his final opinions, Judge Posner, writing for a three-judge panel, reinstated a lawsuit from a prisoner, Michael Davis, that had been dismissed on technical grounds.
  • “The basic thing is that most judges regard these people as kind of trash not worth the time of a federal judge,” he said.
clairemann

Here's Why Fears of Post-Election Chaos Are Overblown | Time - 0 views

  • In anxious tones, they ask about all of the election-related lawsuits, ballot deadlines, Electoral College technicalities and state-level hijinks. “People are so nervous, because they think this guy will do anything to stay in power,” he says.
  • Just 22% of Americans believe the election will be “free and fair,” according to a September Yahoo News/YouGov poll, compared with 46% who say it won’t be.
  • The President has sown doubt with groundless talk of a “rigged” election and repeated refusals to commit to a peaceful transfer of power. The COVID-19 pandemic has transformed voting procedures, while the charged political climate has focused attention on the mechanics of an electoral system that’s shaky, underfunded and under intense strain. It would be naive to predict that nothing will go wrong.
  • ...14 more annotations...
  • There are worst-case scenarios, and the President’s conduct has made them less unthinkable than usual. But the chances of their coming to pass are remote. Benjamin Ginsberg, who represented the GOP candidate in the 2000 recount, cautions against hysteria. “The panic seems to me to be way overblown,” he says.
  • What exactly are the worst-case scenarios? They start with the absence of a clear outcome on election night. Many states will be dealing with a massive increase in mail and absentee ballots, which take longer to process than in-person votes: they have to be removed from their envelopes, flattened for tabulation and checked for signatures and other technical requirements before they can be counted.
  • Three states loom largest in this concern: Michigan, Wisconsin and Pennsylvania. All three are key battlegrounds that have made a rapid and politically fraught push to expand voting by mail this year.
  • Other quirks, like a “naked ballot”–a legitimate ballot that a voter has failed to enclose in the required security envelope–may cause further uncertainty; a Pennsylvania court ruled this year that such ballots would not be counted in that state, which Trump won by just 44,000 votes. It all could add up to a presidential race that’s too close to call for days or weeks.
  • Current polls do not show a particularly tight race in those states, nor nationwide. And the polls have been far more stable, with far fewer undecided voters, than they were in 2016. Faster-counting states like Florida and Arizona, which have demonstrated the ability to rapidly tabulate large volumes of mail ballots, could well decide the election, rendering any uncertainty in the Rust Belt irrelevant.
  • The election’s outcome is unclear after days or weeks, and Trump is muddying the waters–lobbing lawsuits, disputing the count, accusing his opponents of cheating and convincing large swaths of the electorate that something untoward is going on behind the scenes.
  • Even if this happens, experts stress that Trump does not have the power to circumvent the nation’s labyrinthine election procedures by tweet. Elections are administered by state and local officials in thousands of jurisdictions, most of whom are experienced professionals with records of integrity.
  • There are well-tested processes in place for dealing with irregularities, challenges and contests. A candidate can’t demand a recount, for example, unless the tally is within a certain margin, which varies by state.
  • “While people may make claims to powers and make threats about what they may or may not do, the reality is that the candidates don’t have the power to determine the outcome of the election. It’s really important that voters understand that while a lot about our system is complicated, this isn’t a free-for-all.”
  • There’s a legal process to get there. The oft-invoked Bush v. Gore, the Supreme Court case that resolved the 2000 standoff, was decided narrowly, specific to a particular situation in a particular place, notes Joshua Geltzer, executive director of the Institute for Constitutional Advocacy and Protection at Georgetown Law. “These things Trump is saying–toss all the ballots, end the counting–those are not legal arguments,” he says.
  • Some fear a scenario in which, after weeks of uncertainty, the time comes for states to name electors to the Electoral College, and Republican legislators try to appoint their own rosters, overruling their state’s voters and forcing courts or Congress to resolve the matter.
  • “It’s unthinkably undemocratic to hold a popular vote for President and then nullify it if you don’t like the result,” says Adav Noti, chief of staff at the nonpartisan Campaign Legal Center. While the possibility can’t be entirely dismissed given Republicans’ fealty to Trump, judges would likely take a dim view of such an effort, not to mention the political storm that would ensue.
  • The past few years have convinced many Americans to expect the unlikely, haunted by failures of imagination past. But when it comes to post-election mayhem, people’s imaginations may be getting the better of them.
  • “But by amplifying it as if it’s realistic, you create a very real problem of people not having faith in the system by which we choose our leaders. And that’s really harmful.”
mariedhorne

6 North Carolina precincts to remain open later after polling places report minor techn... - 0 views

  • Six North Carolina voting precincts will remain open later than intended on Election Day after each polling site experienced "interruptions in voting," the North Carolina State Board of Elections announced Tuesday evening. 
  • More than 4.5 million people in North Carolina had already voted prior to Election Day -- including 3.6 million people who early voted and an additional 929,000 who voted absentee, the Charlotte Observer reported.
  • As of early Tuesday morning, just over 62% of the state’s 7.3 million registered voters had cast their ballots, according to the state elections board -- including 1.7 million Democrats, almost 1.5 million Republicans and nearly 1.4 million unaffiliated people.
Javier E

Why 'Ditch the algorithm' is the future of political protest | A-levels | The Guardian - 0 views

  • Our life chances – if we get a visa, whether our welfare claims are flagged as fraudulent, or whether we’re designated at risk of reoffending – are becoming tightly bound up with algorithmic outputs. Could the A-level scandal be a turning point for how we think of algorithms – and if so, what durable change might it spark?
  • Resistance to algorithms has often focused on issues such as data protection and privacy. The young people protesting against Ofqual’s algorithm were challenging something different. They weren’t focused on how their data might be used in the future, but on how data had been actively used to change their futures
  • In the future, challenging algorithmic injustices will mean attending to how people’s choices in education, health, criminal justice, immigration and other fields are all diminished by a calculation that pays no attention to our individual personhood.
  • ...8 more annotations...
  • The Ofqual algorithm was the technical embodiment of a deeply political idea: that a person is only as good as their circumstances dictate. The metric took no account of how hard a person had worked, while its appeal system sought to deny individual redress, and only the “ranking” of students remained from the centres’ inputs
  • The A-level scandal made algorithms an object of direct resistance and exposed what many already know to be the case: that this type of decision-making involves far more than a series of computational steps
  • In their designs and assumptions, algorithms shape the world in which they’re used. To decide whether to include or exclude a data input, or to weight one feature over another are not merely technical questions – they’re also political propositions about what a society can and should be like.
  • In this case, Ofqual’s model decided it’s not possible that good teaching, hard work and inspiration can make a difference to a young person’s life and their grades.
  • The politics of the algorithm were visible for all to see. Many decisions – from what constitutes a “small” subject entry to whether a cohort’s prior attainment should nudge down the distribution curve – had profound and arbitrary effects on real lives
  • Grappling openly and transparently with difficult questions, such as how to achieve fairness, is precisely what characterises ethical decision-making in a society. Instead, Ofqual responded with non-disclosure agreements, offering no public insight into what it was doing as it tested competing models.
  • Algorithms offer governments the allure of definitive solutions and the promise of reducing intractable decisions to simplified outputs.
  • This logic runs counter to democratic politics, which express the contingency of the world and the deliberative nature of collective decision-making.
Javier E

Opinion | Skeptics Say, 'Do Your Own Research.' It's Not That Simple. - The New York Times - 0 views

  • On internet forums and social media platforms, people arguing about hotly contested topics like vaccines, climate change and voter fraud sometimes bolster their point or challenge their interlocutors by slipping in the acronym “D.Y.O.R.”
  • The slogan, which appeared in conspiracy theory circles in the 1990s, has grown in popularity over the past decade as conflicts over the reliability of expert judgment have become more pronounced.
  • It promotes an individualistic, freethinking approach to understanding the world: Don’t be gullible — go and find out for yourself what the truth is.
  • ...17 more annotations...
  • Isn’t it always a good idea to gather more information before making up your mind about a complex topic?
  • investigate topics on their own, instinctively skeptical of expert opinion, is often misguided
  • As psychological studies have repeatedly shown, when it comes to technical and complex issues like climate change and vaccine efficacy, novices who do their own research often end up becoming more misled than informed
  • Consider what can happen when people begin to learn about a topic. They may start out appropriately humble, but they can quickly become unreasonably confident after just a small amount of exposure to the subject. Researchers have called this phenomenon the beginner’s bubble.
  • In a 2018 study, for example, one of us (Professor Dunning) and the psychologist Carmen Sanchez asked people to try their hand at diagnosing certain diseases.
  • The study suggested that people place far too much credence in the initial bits of information they encounter when learning something. “A little learning,” as the poet Alexander Pope wrote, “is a dangerous thing.”
  • Research also shows that people learning about topics are vulnerable to hubris
  • Anecdotally, you can see the beginner’s bubble at work outside the laboratory too. Consider do-it-yourself projects gone wrong
  • when novices perceive themselves as having developed expertise about topics such as finance and geography, they will frequently claim that they know about nonexistent financial instruments (like “prerated stocks”) and made-up places (like Cashmere, Ore.) when asked about such things.
  • Likewise, a 2018 study of attitudes about vaccine policy found that when people ascribe authority to themselves about vaccines, they tend to view their own ideas as better than ideas from rival sources and as equal to those of doctors and scientists who have focused on the issue.
  • There should be no shame in identifying a consensus of independent experts and deferring to what they collectively report.
  • As individuals, our skills at adequately vetting information are spotty. You can be expert at telling reliable cardiologists from quacks without knowing how to separate serious authorities from pretenders on economic policy.
  • For D.Y.O.R. enthusiasts, one lesson to take away from all of this might be: Don’t do your own research, because you are probably not competent to do it.
  • Is that our message? Not necessarily. For one thing, that is precisely the kind of advice that advocates of D.Y.O.R. are primed to reject
  • appealing to the superiority of experts can trigger distrust.
  • Instead, our message, in part, is that it’s not enough for experts to have credentials, knowledge and lots of facts. They must show that they are trustworthy and listen seriously to objections from alternative perspectives.
  • If you are going to do your own research, the research you should do first is on how best to do your own research.
criscimagnael

The Taliban Have Staffing Issues. They Are Looking for Help in Pakistan. - The New York... - 0 views

  • Then, after Kabul fell to the Taliban last August, Khyal Mohammad Ghayoor received a call from a stranger who identified himself only by the dual honorifics, Hajji Sahib, which roughly translates to a distinguished man who has made a pilgrimage to Mecca. The man told Mr. Ghayoor he was needed back in Afghanistan, not as a baker but as a police chief.
  • “I am very excited to be back in a free and liberated Afghanistan,” he said.
  • A similar mass exodus of Afghanistan’s professional class occurred in the 1980s and 1990s, when the Soviets withdrew and the Taliban wrested control from the warlords who filled the leadership vacuum.
  • ...15 more annotations...
  • To help fill the gaps, Taliban officials are reaching into Pakistan.
  • Now, the Taliban are privately recruiting them to return and work in the new government.
  • It is unclear how many former fighters have returned from Pakistan, but there have already been several high-profile appointments, including Mr. Ghayoor.
  • The new hires are walking into a mounting catastrophe. Hunger is rampant. Many teachers and other public sector employees have not been paid in months. The millions of dollars in aid that helped prop up the previous government have vanished, billions in state assets are frozen and economic sanctions have led to a near collapse of the country’s banking system.
  • “Running insurgency and state are two different things,” said Noor Khan, 40, an accountant who fled Kabul for Islamabad in early September, among hundreds of other Afghan professionals hoping for asylum in Europe.
  • Five months after their takeover of Afghanistan, the Taliban are grappling with the challenges of governance. Leaders promised to retain civil servants and prioritize ethnic diversity for top government roles, but instead have filled positions at all management levels with soldiers and theologians. Other government employees have fled or refused to work, leaving widespread vacancies in the fragile state.
  • Mr. Ghayoor, the baker turned police chief, said that Kabul changed markedly in the two decades that he was away. As part of his duties, he tries to instill order at a busy produce market in Kabul as vendors tout fruit and vegetables, and taxi drivers call out stops, looking for fares.
  • Sirajuddin Haqqani, head of the militant Haqqani network and labeled a terrorist by the F.B.I., was appointed acting minister of the interior, overseeing police, intelligence and other security forces.
  • “They have no experience to run the departments,” said Basir Jan, a company employee. “They sit in the offices with guns and abuse the employees in the departments by calling them ‘corrupt’ and ‘facilitators of the invaders.’”
  • Taliban leaders blame the United States for the collapsing economy. But some analysts say that even if the United States unfreezes Afghanistan’s state assets and lifts sanctions, the Finance Ministry does not have the technical know-how to revive the country’s broken banking system.
  • “Their response to the catastrophic economic situation is ‘It’s not our fault, the internationals are holding the money back.’ But the reality is that they don’t have the capacity for this kind of day-to-day technical operation,”
  • “Foreigners intentionally evacuated Afghans, most importantly, the educated and professional ones, to weaken the Islamic Emirates and undermine our administration,” Mr. Hashimi said.
  • “We are in touch with some Afghans in different parts of the world and are encouraging them to return to Afghanistan because we desperately need their help and expertise to help their people and government,”
  • Then as now, the Taliban preferred filling the government ranks with jihadis and loyalists. But this time, some civil servants have also stopped showing up for work, several of them said in interviews, either because they are not being paid, or because they do not want to taint their pending asylum cases in the United States or Europe by working for the Taliban.
  • Mr. Ghayoor said in December that neither he nor any other member of the Kabul police force had been paid in months. Nevertheless, he said he decided to sell his bakery in Quetta, a city in southwestern Pakistan, and move his extended family, including nine children, to Kabul.
Javier E

Climate Reparations Are Officially Happening - The Atlantic - 0 views

  • Today, on the opening day of COP28, the United Nations climate summit in Dubai, the host country pushed through a decision that wasn’t expected to happen until the last possible minute of the two-week gathering: the creation and structure of the “loss and damage” fund, which will source money from developed countries to help pay for climate damages in developing ones. For the first time, the world has a system in place for climate reparations.
  • Nearly every country on Earth has now adopted the fund, though the text is not technically final until the end of the conference, officially slated for December 12.
  • “We have delivered history today—the first time a decision has been adopted on day one of any COP,”
  • ...12 more annotations...
  • Over much opposition from developing countries, the U.S. has insisted that the fund (technically named the Climate Impact and Response Fund) will be housed at the World Bank, where the U.S. holds a majority stake; every World Bank president has been a U.S. citizen. The U.S. also insisted that contributing to the fund not be obligatory. Sue Biniaz, the deputy special envoy for climate at the State Department, said earlier this year that she “violently opposes” arguments that developed countries have a legal obligation under the UN framework to pay into the fund.
  • The text agreed upon in Dubai on Thursday appears to strike a delicate balance: The fund will indeed be housed at the World Bank, at least for four years, but it will be run according to direction provided at the UN climate gatherings each year, and managed by a board where developed nations are designated fewer than half the seats.
  • That board’s decisions will supersede those of the World Bank “where appropriate.” Small island nations, which are threatened by extinction because of sea-level rise, will have dedicated seats. Countries that are not members of the World Bank will still be able to access the fund.
  • the U.S. remains adamant that the fund does not amount to compensation for past emissions, and it rejects any whiff of suggestions that it is liable for other countries’ climate damages.
  • Other donations may continue to trickle in. But the sum is paltry considering researchers recently concluded that 55 climate-vulnerable countries have incurred $525 billion in climate-related losses from 2000 to 2019, depriving them of 20 percent of the wealth they would otherwise have
  • Several countries immediately announced their intended contribution to the fund. The United Arab Emirates and Germany each said they would give $100 million. The U.K. pledged more than $50 million, and Japan committed to $10 million. The U.S. said it would provide $17.5 million, a small number given its responsibility for the largest historical share of global emissions.
  • Total commitments came in on the order of hundreds of millions, far shy of an earlier goal of $100 billion a year.
  • Even the name “loss and damage,” with its implication of both harm and culpability, has been contentious among delegates
  • Still, it’s a big change in how climate catastrophe is treated by developed nations. For the first time, the countries most responsible for climate change are collectively, formally claiming some of that responsibility
  • One crucial unresolved variable is whether countries such as China and Saudi Arabia—still not treated as “developed” nations under the original UN climate framework—will acknowledge their now-outsize role in worsening climate change by contributing to the fund.
  • Another big question now will be whether the U.S. can get Congress to agree to payments to the fund, something congressional Republicans are likely to oppose.
  • Influence by oil and gas industry interests—arguably the entities truly responsible for driving climate change—now delays even public funding of global climate initiatives, he said. “The fossil-fuel industry has successfully convinced the world that loss and damage is something the taxpayer should pay for.” And yet, Whitehouse told me that the industry lobbies against efforts to use public funding this way, swaying Congress and therefore hobbling the U.S.’s ability to uphold even its meager contributions to international climate funding.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book. [A toy sketch of this next-word prediction loop appears after these annotations.]
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
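The “looked under the AI’s hood” step is usually done with probes: small classifiers trained to read a property of interest (here, the state of a board square) directly out of the network’s internal activations. The sketch below is not Li’s code — the activations are synthetic and the probe is a plain logistic regression — but it shows the basic recipe under those assumptions.

```python
# Toy probing example: can a simple classifier recover a "board" fact from
# hidden activations? The activations here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, hidden_dim = 2000, 64

# Pretend hidden states: one direction in activation space encodes whether a
# given square is occupied (the label); everything else is noise.
encoding_direction = rng.normal(size=hidden_dim)
labels = rng.integers(0, 2, size=n_samples)
activations = rng.normal(size=(n_samples, hidden_dim)) + np.outer(labels - 0.5, encoding_direction) * 2.0

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy on held-out activations:", probe.score(X_test, y_test))
# High held-out accuracy means the information is present (and easily readable)
# in the activations -- the kind of evidence used to argue for a "world model."
```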
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
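A toy contrast makes clear why memorization eventually stops paying off (this is not the experiment Millière describes, just an illustration): a learner that only memorizes its training examples has no predictive power on unseen inputs, while one that has captured the rule generalizes.

```python
# Memorization versus learning the rule. All data and "learners" here are
# invented for illustration.
training_pairs = {(1, 1): 2, (2, 2): 4, (3, 5): 8}

def memorizer(a, b):
    # Answers only if this exact example appeared in training.
    return training_pairs.get((a, b))  # None on anything unseen

def rule_learner(a, b):
    # Stand-in for a model that has actually captured the concept of addition.
    return a + b

for a, b in [(2, 2), (7, 6)]:
    print(f"{a}+{b}: memorizer={memorizer(a, b)}, rule={rule_learner(a, b)}")
# The memorizer collapses off the training set (7+6 -> None), which is the
# pressure that pushes a network toward learning how to add.
```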
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Opinion | Colleges Should Be More Than Just Vocational Schools - The New York Times - 0 views

  • Between 2013 and 2016, across the United States, 651 foreign language programs were closed, while majors in classics, the arts and religion have frequently been eliminated or, at larger schools, shrunk. The trend extends from small private schools like Marymount to the Ivy League and major public universities, and shows no sign of stopping.
  • The steady disinvestment in the liberal arts risks turning America’s universities into vocational schools narrowly focused on professional training. Increasingly, they have robust programs in subjects like business, nursing and computer science but less and less funding for and focus on departments of history, literature, philosophy, mathematics and theology.
  • America’s higher education system was founded on the liberal arts and the widespread understanding that mass access to art, culture, language and science were essential if America was to thrive. But a bipartisan coalition of politicians and university administrators is now hard at work attacking it — and its essential role in public life — by slashing funding, cutting back on tenure protections, ending faculty governance and imposing narrow ideological limits on what can and can’t be taught.
  • For decades — and particularly since the 2008 recession — politicians in both parties have mounted a strident campaign against government funding for the liberal arts. They express a growing disdain for any courses not explicitly tailored to the job market and outright contempt for the role the liberal arts-focused university has played in American society.
  • Former Gov. Scott Walker’s assault on higher education in Wisconsin formed the bedrock of many later conservative attacks. His work severely undermined a state university system that was once globally admired. Mr. Walker reportedly attempted to cut phrases like “the search for truth” and “public service” — as well as a call to improve “the human condition” — from the University of Wisconsin’s official mission statement
  • But blue states also regularly cut higher education funding, sometimes with similar rationales. In 2016, Matt Bevin, the Republican governor of Kentucky at the time, suggested that students majoring in the humanities shouldn’t receive state funding. The current secretary of education, Miguel Cardona, a Democrat, seems to barely disagree. “Every student should have access to an education that aligns with industry demands and evolves to meet the demands of tomorrow’s global work force,” he wrote in December.
  • Federal funding reflects those priorities. The National Endowment for the Humanities’ budget in 2022 was just $180 million. The National Science Foundation’s budget was about 50 times greater, having nearly doubled within two decades.
  • What were students meant to think? As the cost of higher education rose, substantially outpacing inflation since 1990, students followed funding — and what politicians repeatedly said about employability — into fields like business and computer science. Even majors in mathematics were hit by the focus on employability.
  • Universities took note and began culling. One recent study showed that history faculty across 28 Midwestern universities had dropped by almost 30 percent in roughly the past decade. Classics programs, including the only one at a historically Black college, were often simply eliminated.
  • this is a grim and narrow view of the purpose of higher education, merely as a tool to train workers as replaceable cogs in America’s economic machine, to generate raw material for its largest companies.
  • Higher education, with broad study in the liberal arts, is meant to create not merely good workers but good citizens
  • Citizens with knowledge of their history and culture are better equipped to lead and participate in a democratic society; learning in many different forms of knowledge teaches the humility necessary to accept other points of view in a pluralistic and increasingly globalized society.
  • In 1947, a presidential commission bemoaned an education system where a student “may have gained technical or professional training” while being “only incidentally, if at all, made ready for performing his duties as a man, a parent and a citizen.” The report recommended funding to give as many Americans as possible the sort of education that would “give to the student the values, attitudes, knowledge and skills that will equip him to live rightly and well in a free society,” which is to say the liberal arts as traditionally understood. The funding followed.
  • The report is true today, too
  • the American higher education system is returning to what it once was: liberal arts finishing schools for the wealthy and privileged, and vocational training for the rest.
  • Reversing this decline requires a concerted effort by both government and educational actors
  • renewed funding for the liberal arts — and especially the humanities — would support beleaguered departments and show students that this study is valuable and valued.
  • At the university level, instituting general education requirements would guarantee that even students whose majors have nothing to do with the humanities emerged from college equipped to think deeply and critically across disciplines.
  • Liberal arts professors must also be willing to leave their crumbling ivory towers and the parochial debates about their own career path, in order to engage directly in public life
Javier E

E-Notes: Nightmares of an I.R. Professor - FPRI - 0 views

  • the British, during their late Victorian heyday, believed theirs was the exceptional Land of Hope and Glory, a vanguard of progress and model for all nations.[3] Can it be—O scary thought—that the same faith in Special Providence that inspires energy, ingenuity, resilience, and civic virtue in a nation, may also tempt a people into complacency, arrogance, self-indulgence, and civic vice?
  • what Americans believe about their past is always a powerful influence on their present behavior and future prospects. No wonder we have “culture wars” in which the representation of history is a principal stake.
  • my study of European international relations naturally inclined me to think about foreign policy in terms of Realpolitik, balance of power, geography, contingency, tragedy, irony, folly, unintended consequences, and systemic interaction—all of which are foreign if not repugnant to Americans.
  • Times were certainly very good in the decade after the 1991 Soviet collapse ended the fifty year emergency that began with Pearl Harbor. So if one accepts my definition of a conservative as “someone who knows things could be worse than they are-period,” then conservatism was never more apt
  • the “third age” neoconservatives ensconced at The Weekly Standard, Commentary, and various think tanks thought Promised Land, Crusader State decidedly inconvenient. They wanted Americans to believe that the United States has always possessed the mission and duty to redeem the whole world by exertion as well as example, and that any American who shirks from that betrays the Founders themselves.[13] They were loudly decrying cuts in defense spending as unilateral disarmament, likening U.S. policies to Britain’s lethargy in the 1930s, and warning of new existential threats on the horizon.
  • what national assets must the United States husband, augment if possible, and take care not to squander? My list was as follows: (1) a strong economy susceptible only to mild recession; (2) robust armed forces boasting technical superiority and high morale designed for winning wars; (3) presidential leadership that is prudent, patriotic, and persuasive; (4) a bipartisan, internationalist consensus in Congress; (5) sturdy regional alliances; (6) engagement to promote balance of power in Europe, East Asia, and the Middle East; (7) strong Pan-American ties to secure our southern border.
  • The shock of the 9/11 attacks and the imperative duty to prevent their repetition caused the Bush administration to launch two wars for regime change that eventuated in costly, bloody occupations belatedly devoted to democratizing the whole Middle East. Thus did the United States squander in only five years all seven of the precious assets listed in my 1999 speech.
  • When the other shoe dropped—not another Al Qaeda attack but the 2008 sub-prime mortgage collapse—Americans wrestled anew with an inconvenient truth. Foreign enemies cannot harm the United States more than Americans harm themselves, over and over again, through strategic malpractice and financial malfeasance.
  • Unfortunately, in an era of interdependent globalization vexed by failed states, rogue regimes, ethnic cleansing, sectarian violence, famines, epidemics, transnational terrorism, and what William S. Lind dubbed asymmetrical “Fourth Generation Warfare,” the answer to questions about humanitarian or strategic interventions abroad can’t be “just say no!” For however often Americans rediscover how institutionally, culturally, and temperamentally ill-equipped they are to do nation-building, the United States will likely remain what I (and now Robert Merry) dubbed a Crusader State.
  • the urgent tasks for civilian and military planners are those of the penitent sinner called to confess, repent, and amend his ways. The tasks include refining procedures to coordinate planning for national security so that bureaucratic and interest-group rivalries do not produce “worst of both worlds” outcomes.[22] They include interpreting past counter-insurgencies and postwar occupations in light of their historical particularities lest facile overemphasis on their social scientific commonalities yield “one size fits all” field manuals
  • they include persuading politicians to cease playing the demagogue on national security and citizens to cease imagining every intervention a “crusade” or a “quagmire”
Javier E

The Smartphone Have-Nots - NYTimes.com - 0 views

  • Much of what we consider the American way of life is rooted in the period of remarkably broad, shared economic growth, from around 1900 to about 1978. Back then, each generation of Americans did better than the one that preceded it. Even those who lived through the Depression made up what was lost. By the 1950s, America had entered an era that economists call the Great Compression, in which workers — through unions and Social Security, among other factors — captured a solid share of the economy’s growth.
  • there’s a lot of disagreement about what actually happened during these years. Was it a golden age in which the U.S. government guided an economy toward fairness? Or was it a period defined by high taxes (until the early ’60s, the top marginal tax rate was 90 percent) and bureaucratic meddling?
  • the Great Compression gave way to a Great Divergence. Since 1979, according to the nonpartisan Congressional Budget Office, the bottom 80 percent of American families had their share of the country’s income fall, while the top 20 percent had modest gains. Of course, the top 1 percent — and, more so, the top 0.1 percent — has seen income rise stratospherically. That tiny elite takes in nearly a quarter of the nation’s income and controls nearly half its wealth.
  • The standard explanation of this unhinging, repeated in graduate-school classrooms and in advice to politicians, is technological change.
  • This explanation, known as skill-biased technical change, is so common that economists just call it S.B.T.C. They use it to explain why everyone from the extremely rich to the just-kind-of rich are doing so much better than everyone else.
  • For all their disagreements, Autor and Mishel are allies of sorts. Both are Democrats who have advised President Barack Obama, and both agree that rampant inequality can undermine democracy and economic growth by fostering despair among workers and corruption among the wealthy
  • The change came around 1978, Mishel said, when politicians from both parties began to think of America as a nation of consumers, not of workers.
  • each administration and Congress have made choices — expanding trade, deregulating finance and weakening welfare — that helped the rich and hurt everyone else. Inequality didn’t just happen, Mishel argued. The government created it.
  • David Autor, one of the country’s most celebrated labor economists, took the stage, fumbled for his own PowerPoint presentation and then explained that there was plenty of evidence showing that technological change explained a great deal about the rise of income inequality. Computers, Autor says, are fundamentally different. Conveyor belts and massive steel furnaces made blue-collar workers comparatively wealthier and hurt more highly skilled craftspeople, like blacksmiths and master carpenters, whose talents were disrupted by mass production. The computer revolution, however, displaced millions of workers from clerical and production occupations, forcing them to compete in lower-paying jobs in the retail, fast-food and home health sectors. Meanwhile, computers and the Internet disproportionately helped people like doctors, engineers and bankers in information-intensive jobs. Inequality was merely a side effect of the digital revolution, Autor said; it didn’t begin and end in Washington.
  • Computers and the Internet, Mishel argued, are just new examples on the continuum and cannot explain a development like extreme inequality, which is so recent. So what happened?
  • Levy suggested seeing how inequality has played out in other countries
  • In Germany, the average worker might make less than an American, but the government has established an impressive apprenticeship system to keep blue-collar workers’ skills competitive.
  • For decades, the Finnish government has offered free education all the way through college. It may have led to high taxes, but many believe it also turned a fairly poor fishing economy into a high-income, technological nation.
  • On the other hand, Greece, Spain and Portugal have so thoroughly protected their workers that they are increasingly unable to compete
  • Inequality has risen almost everywhere, which, Levy says, means that Autor is right that inequality is not just a result of American-government decisions. But the fact that inequality has risen unusually quickly in the United States suggests that government does have an impact
  • Still, economists certainly cannot tell us which policy is the right one. What do we value more: growth or fairness? That’s a value judgment. And for better or worse, it’s up to us.
Javier E

Life After Oil and Gas - NYTimes.com - 0 views

  • To what extent will we really “need” fossil fuel in the years to come? To what extent is it a choice?
  • Thirteen countries got more than 30 percent of their electricity from renewable energy in 2011, according to the Paris-based International Energy Agency, and many are aiming still higher.
  • Could we? Should we?
  • the United States could halve by 2030 the oil used in cars and trucks compared with 2005 levels by improving the efficiency of gasoline-powered vehicles and by relying more on cars that use alternative power sources, like electric batteries and biofuels.
  • New York State — not windy like the Great Plains, nor sunny like Arizona — could easily produce the power it needs from wind, solar and water power by 2030
  • “You could power America with renewables from a technical and economic standpoint. The biggest obstacles are social and political — what you need is the will to do it.”
  • “There is plenty of room for wind and solar to grow and they are becoming more competitive, but these are still variable resources — the sun doesn’t always shine and the wind doesn’t always blow,” said Alex Klein, the research director of IHS Emerging Energy Research, a consulting firm on renewable energy. “An industrial economy needs a reliable power source, so we think fossil fuel will be an important foundation of our energy mix for the next few decades.”
  • improving the energy efficiency of homes, vehicles and industry was an easier short-term strategy. He noted that the 19.5 million residents of New York State consume as much energy as the 800 million in sub-Saharan Africa (excluding South Africa)
  • a rapid expansion of renewable power would be complicated and costly. Using large amounts of renewable energy often requires modifying national power grids, and renewable energy is still generally more expensive than using fossil fuels
  • Promoting wind and solar would mean higher electricity costs for consumers and industry.
  • many of the European countries that have led the way in adopting renewables had little fossil fuel of their own, so electricity costs were already high. Others had strong environmental movements that made it politically acceptable to endure higher prices
  • countries could often get 25 percent of their electricity from renewable sources like wind and solar without much modification to their grids. A few states, like Iowa and South Dakota, get nearly that much of their electricity from renewable power (in both states, wind), while others use little at all.
  • America is rich in renewable resources and (unlike Europe) has the empty space to create wind and solar plants. New York State has plenty of wind and sun to do the job, they found. Their blueprint for powering the state with clean energy calls for 10 percent land-based wind, 40 percent offshore wind, 20 percent solar power plants and 18 percent solar panels on rooftops
  • the substantial costs of enacting the scheme could be recouped in under two decades, particularly if the societal cost of pollution and carbon emissions were factored in
Javier E

The Year in Hacking, by the Numbers - NYTimes.com - 0 views

  • there are now only two types of companies left in the United States: those that have been hacked and those that don’t know they’ve been hacked.
  • an annual Verizon report, which counted 621 confirmed data breaches last year, and more than 47,000 reported “security incidents.”
  • “The report shows that no matter the size of the organization — large, small, government agencies, banks, restaurants, retailers — people are stealing data from a range of different organizations and it’s a problem everyone has to deal with.”
  • Three quarters of successful breaches were done by profit-minded criminals for financial gain. But the second most common type of breach was a state-affiliated attack “aimed at stealing intellectual property — such as classified information, trade secrets and technical resources — to further national and economic interests.”
  • In 76 percent of data breaches, weak or stolen user names and passwords were a cause. In 40 percent of cases, Verizon said the attackers installed malicious software on the victim’s systems; 35 percent of cases involved “physical attacks” in which the attackers did physical harm
  • In 29 percent of breaches, the attackers leveraged social tactics, such as spear phishing, in which a tailored e-mail to the victim purports to come from a friend or business contact. The e-mails contain malicious links or attachments that, when clicked, give the attacker a foothold in the victim’s computer network. Verizon said it witnessed four times as many “social engineering” attacks that used this method in 2012 as it did in 2011
grayton downing

Syria's Confirmation of Airstrike May Undercut Israel's Strategy of Silence - NYTimes.com - 0 views

  • Syria and Israel are technically at war
  • People are on edge and keep asking if we know anything about what may develop.”