
TOK Friends: Group items tagged "view"


Javier E

Opinion | If You Want to Understand How Dangerous Elon Musk Is, Look Outside America - ... - 0 views

  • Twitter was an intoxicating window into my fascinating new assignment. Long-suppressed groups found their voices and social media-driven revolutions began to unfold. Movements against corruption gained steam and brought real change. Outrage over a horrific gang rape in Delhi built a movement to fight an epidemic of sexual violence.
  • “What we didn’t realize — because we took it for granted for so long — is that most people spoke with a great deal of freedom, and completely unconscious freedom,” said Nilanjana Roy, a writer who was part of my initial group of Twitter friends in India. “You could criticize the government, debate certain religious practices. It seems unreal now.”
  • Soon enough, other kinds of underrepresented voices also started to appear on — and then dominate — the platform. As women, Muslims and people from lower castes spoke out, the inevitable backlash came. Supporters of the conservative opposition party, the Bharatiya Janata Party, and their right-wing religious allies felt that they had long been ignored by the mainstream press. Now they had the chance to grab the mic.
  • Viewed from the United States, these skirmishes over the unaccountable power of tech platforms seem like a central battleground of free speech. But the real threat in much of the world is not the policies of social media companies, but of governments.
  • The real question now is whether Musk’s commitment to “free speech” extends beyond conservatives in America to the billions of people in the Global South who rely on the internet for open communication.
  • India’s government had demanded that Twitter block tweets and accounts from a variety of journalists, activists and politicians. The company went to court, arguing that these demands went beyond the law and into censorship. Now Twitter’s potential new owner was casting doubt on whether the company should be defying government demands that muzzle freedom of expression.
  • The winning side will not be decided in Silicon Valley or Beijing, the two poles around which debate over free expression on the internet has largely orbited. It will be decided by the actions of governments in capitals like Abuja, Jakarta, Ankara, Brasília and New Delhi.
  • Across the world, countries are putting in place frameworks that on their face seem designed to combat online abuse and misinformation but are largely used to stifle dissent or enable abuse of the enemies of those in power.
  • other governments are passing laws just to increase their power over speech online and to force companies to be an extension of state surveillance.” For example: requiring companies to house their servers locally rather than abroad, which can make them more vulnerable to government surveillance.
  • while much of the focus has been on countries like China, which overtly restricts access to huge swaths of the internet, the real war over the future of internet freedom is being waged in what she called “swing states,” big, fragile democracies like India.
  • it seems that this is actually what he believes. In April, he tweeted: “By ‘free speech’, I simply mean that which matches the law. I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.”
  • Musk is either exceptionally naïve or willfully ignorant about the relationship between government power and free speech, especially in fragile democracies.
  • The combination of a rigid commitment to following national laws and a hands-off approach to content moderation is combustible and highly dangerous.
  • Independent journalism is increasingly under threat in India. Much of the mainstream press has been neutered by a mix of intimidation and conflicts of interest created by the sprawling conglomerates and powerful families that control much of Indian media.
  • Twitter has historically fought against censorship. Whether that will continue under Musk seems very much a question. The Indian government has reasons to expect friendly treatment: Musk’s company Tesla has been trying to enter the Indian car market for some time, but in May it hit an impasse in negotiations with the government over tariffs and other issues
Javier E

Don't Do TikTok - by Jonathan V. Last - The Triad - 0 views

  • The small-bore concern is personal data. TikTok is basically Chinese spyware. The platform is owned by a Chinese company, Bytedance, which, like all Chinese companies, operates at the pleasure of the Chinese Communist Party. Anyone from Bytedance who wants to look into an American user’s TikTok data can do so. And they do it on the reg.
  • But personal data isn’t the big danger. The big danger is that TikTok decides what videos people see. Recommendations are driven entirely by the company’s black-box algorithm. And since TikTok answers to the Chinese Communist Party, then if the ChiComs tell TikTok to start pushing certain videos to certain people, that’s what TikTok will do.
  • It’s a gigantic propaganda engine. Making TikTok your platform of choice is the equivalent of using RT as your primary news source.
  • TikTok accounts run by the propaganda arm of the Chinese government have accumulated millions of followers and tens of millions of views, many of them on videos editorializing about U.S. politics without clear disclosure that they were posted by a foreign government.
  • The accounts are managed by MediaLinks TV, a registered foreign agent and Washington D.C.-based outpost of the main Chinese Communist Party television news outlet, China Central Television. The largest of them are @Pandaorama, which features cute videos about Chinese culture, @The…Optimist, which posts about sustainability, and @NewsTokss, which features coverage of U.S. national and international news.
  • In the run-up to the 2022 elections, the @NewsTokss account criticized some candidates (mostly Republicans), and favored others (mostly Democrats). A video from July began with the caption “Cruz, Abbott Don’t Care About Us”; a video from October was captioned “Rubio Has Done Absolutely Nothing.” But @NewsTokss did not target only Republicans; another October video asked viewers whether they thought President Joe Biden’s promise to sign a bill codifying abortion rights was a “political manipulation tactic.” Nothing in these videos disclosed to viewers that they were being pushed by a foreign government.
  • any Chinese play for Taiwan would be accompanied by TikTok aggressively pushing content in America designed to divide public opinion and weaken America’s commitment to Taiwan’s defense.
  • With all the official GOP machinations against gay marriage, it seems like if McConnell wanted that bill to fail, he could have pressured two Republican senators to vote against it. He said nothing. Trump said nothing. DeSantis said nothing. There was barely a whimper of protest from those who could have influenced this. Mike Lee and Ted Cruz engaged in theatrics, but no one actually used their power to stop this.
  • They let it pass because they don’t care and they want it to go away as an issue. And that goes for the MAGA GOP as well. Opposition to it in politics is all theater and will have a shelf life in riling up the base.
  • Evangelical religious convictions might be for one man + one woman marriage. But, the civil/political situation is far different from that and it’s worth recognizing where the GOP actually stands. They could have stopped this. They didn’t. That point should be clear, especially to their evangelical base who looks to the GOP to save America for them.
Javier E

Why The CHIPS and Science Act Is a Climate Bill - The Atlantic - 0 views

  • Over the next five years, the CHIPS Act will direct an estimated $67 billion, or roughly a quarter of its total funding, toward accelerating the growth of zero-carbon industries and conducting climate-relevant research, according to an analysis from RMI, a nonpartisan energy think tank based in Colorado.
  • That means that the CHIPS Act is one of the largest climate bills ever passed by Congress. It exceeds the total amount of money that the government spent on renewable-energy tax credits from 2005 to 2019
  • And it’s more than half the size of the climate spending in President Barack Obama’s 2009 stimulus bill. That’s all the more remarkable because the CHIPS Act was passed by large bipartisan majorities, with 41 Republicans and nearly all Democrats supporting it in the House and the Senate.
  • The law, for instance, establishes a new $20 billion Directorate for Technology, which will specialize in pushing new technologies from the prototype stage into the mass market. It is meant to prevent what happened with the solar industry—where America invented a new technology, only to lose out on commercializing it—from happening again
  • Within a few years, when the funding has fully ramped up, the government will spend roughly $80 billion a year on accelerating the development and deployment of zero-carbon energy and preparing for the impacts of climate change. That exceeds the GDP of about 120 of the 192 countries that have signed the Paris Agreement on Climate Change
  • By the end of the decade, the federal government will have spent more than $521 billion
  • the bill’s programs focus on the bleeding edge of the decarbonization problem, investing money in technology that should lower emissions in the 2030s and beyond.
  • The International Energy Agency has estimated that almost half of global emissions reductions by 2050 will come from technologies that exist only as prototypes or demonstration projects today.
  • To get those technologies ready in time, we need to deploy those new ideas as fast as we can, then rapidly get them to commercial scale, Carey said. “What used to take two decades now needs to take six to 10 years.” That’s what the CHIPS Act is supposed to do
  • When viewed with the Inflation Reduction Act, which the House is poised to pass later this week, and last year’s bipartisan infrastructure law, a major shift in congressional climate spending comes into focus. According to the RMI analysis, these three laws are set to more than triple the federal government’s average annual spending on climate and clean energy this decade, compared with the 2010s.
  • Congress has explicitly tasked the new office with studying “natural and anthropogenic disaster prevention or mitigation” as well as “advanced energy and industrial efficiency technologies,” including next-generation nuclear reactors.
  • The bill also directs about $12 billion in new research, development, and demonstration funding to the Department of Energy, according to RMI’s estimate. That includes doubling the budget for ARPA-E, the department’s advanced-energy-projects skunk works.
  • it allocates billions to upgrade facilities at the government’s in-house defense and energy research institutes, including the National Renewable Energy Laboratory, the Princeton Plasma Physics Laboratory, and Berkeley Lab, which conducts environmental-science research.
  • RMI’s estimate of the climate spending in the CHIPS bill should be understood as just that: an estimate. The bill text rarely specifies how much of its new funding should go to climate issues.
  • When you add CHIPS, the IRA, and the infrastructure law together, Washington appears to be unifying behind a new industrial policy, focused not only on semiconductors and defense technology but also on clean energy.
  • The three bills combine to form “a coordinated, strategic policy for accelerating the transition to the technologies that are going to define the 21st century,”
  • scholars and experts have speculated about whether industrial policy—the intentional use of law to nurture and grow certain industries—might make a comeback to help fight climate change. Industrial policy was central to some of the Green New Deal’s original pitch, and it has helped China develop a commanding lead in the global solar industry.
  • “Industrial policy,” he said, “is back.”
Javier E

Why the very concept of 'general knowledge' is under attack | Times2 | The Times - 0 views

  • why has University Challenge lasted, virtually unchanged, for so long?
  • The answer may lie in a famous theory about our brains put forward by the psychologist Raymond Cattell in 1963
  • Cattell divided intelligence into two categories: fluid and crystallised. Fluid intelligence refers to basic reasoning and other mental activities that require minimal learning — just an alert and flexible brain.
  • By contrast, crystallised intelligence is based on experience and the accumulation of knowledge. Fluid intelligence peaks at the age of about 20 then gradually declines, whereas crystallised intelligence grows through your life until you hit your mid-sixties, when you start forgetting things.
  • that explains much about University Challenge’s appeal. Because the contestants are mostly aged around 20 and very clever, their fluid intelligence is off the scale
  • On the other hand, because they have had only 20 years to acquire crystallised intelligence, their store of general knowledge is likely to be lacking in some areas.
  • In each episode there will be questions that older viewers can answer, thanks to their greater store of crystallised intelligence, but the students cannot. Therefore we viewers don’t feel inferior when confronted by these smart young people. On the contrary: we feel, in some areas, slightly superior.
  • It’s a brilliantly balanced format
  • there is a real threat to the future of University Challenge and much else of value in our society, and it is this. The very concept of “general knowledge” — of a widely accepted core of information that educated, inquisitive people should have in their memory banks — is under attack from two different groups.
  • The first comprises the deconstructionists and decolonialists.
  • They argue that all knowledge is contextual and that things taken for granted in the past — for instance, a canon of great authors that everyone should read at school — merely reflect an outdated, usually Eurocentric view of what’s intellectually important.
  • The other group is the technocrats who argue that the extent of human knowledge is now so vast that it’s impossible for any individual to know more than, perhaps, one billionth of it
  • So why not leave it entirely to computers to do the heavy lifting of knowledge storing and recall, thus freeing our minds for creativity and problem solving?
  • The problem with the agitators on both sides of today’s culture wars is that they are forcefully trying to shape what’s accepted as general knowledge according to a blatant political agenda.
  • And the problem with relying on, say, Wikipedia’s 6.5 million English-language articles to store general knowledge for all of us? It’s the tacit implication that “mere facts” are too tedious to be clogging up our brains. From there it’s a short step to saying that facts don’t matter at all, that everything should be decided by “feelings”. And from there it’s an even shorter step to fake news and pernicious conspiracy theories, the belittling of experts and hard evidence, the closing of minds, the thickening of prejudice and the trivialisation of the national conversation.
Javier E

How thinking hard makes the brain tired | The Economist - 0 views

  • Mental labour can also be exhausting. Even resisting that last glistening chocolate-chip cookie after a long day at a demanding desk job is difficult. Cognitive control, the umbrella term encompassing mental exertion, self-control and willpower, also fades with effort.
  • unlike the mechanism of physical fatigue, the cause of cognitive fatigue has been poorly understood.
  • It posits that exerting cognitive control uses up energy in the form of glucose. At the end of a day spent intensely cogitating, the brain is metaphorically running on fumes. The problem with this version of events is that the energy cost associated with thinking is minimal.
  • To induce cognitive fatigue, a group of participants were asked to perform just over six hours of various tasks that involve thinking.
  • In other words, cognitive work results in chemical changes in the brain, which present behaviourally as fatigue. This, therefore, is a signal to stop working in order to restore balance to the brain.
  • a neurometabolic point of view. They hypothesise that cognitive fatigue results from an accumulation of a certain chemical in the region of the brain underpinning control. That substance, glutamate, is an excitatory neurotransmitter
  • Periodically, throughout the experiment, participants were asked to make decisions that could reveal their cognitive fatigue.
  • The time it takes for the pupil to subsequently dilate reflects the amount of mental effort exerted. The pupil-dilation times of participants assigned hard tasks fell off significantly as the experiment progressed.
  • During the experiment the scientists used a technique called magnetic-resonance spectroscopy to measure biochemical changes in the brain. In particular, they focused on the lateral prefrontal cortex, a region of the brain associated with cognitive control. If their hypothesis was to hold, there would be a measurable chemical difference between the brains of hard- and easy-task participants
  • Their analysis indicated higher concentrations of glutamate in the synapses of a hard-task participant’s lateral prefrontal cortex, showing that cognitive fatigue is associated with increased glutamate in the prefrontal cortex.
  • There may well be ways to reduce the glutamate levels, and no doubt some researchers will now be looking at potions that might hack the brain in a way to artificially speed up its recovery from fatigue. Meanwhile, the best solution is the natural one: sleep
Javier E

Opinion | Here's Hoping Elon Musk Destroys Twitter - The New York Times - 0 views

  • I’ve sometimes described being on Twitter as like staying too late at a bad party full of people who hate you. I now think this was too generous to Twitter. I mean, even the worst parties end.
  • Twitter is more like an existentialist parable of a party, with disembodied souls trying and failing to be properly seen, forever. It’s not surprising that the platform’s most prolific users often refer to it as “this hellsite.”
  • Among other things, he’s promised to reinstate Donald Trump, whose account was suspended after the Jan. 6 attack on the Capitol. Other far-right figures may not be far behind, along with Russian propagandists, Covid deniers and the like. Given Twitter’s outsize influence on media and politics, this will probably make American public life even more fractious and deranged.
  • The best thing it could do for society would be to implode.
  • Twitter hooks people in much the same way slot machines do, with what experts call an “intermittent reinforcement schedule.” Most of the time, it’s repetitive and uninteresting, but occasionally, at random intervals, some compelling nugget will appear. Unpredictable rewards, as the behavioral psychologist B.F. Skinner found with his research on rats and pigeons, are particularly good at generating compulsive behavior. (A toy simulation of such a reward schedule appears after these notes.)
  • “I don’t know that Twitter engineers ever sat around and said, ‘We are creating a Skinner box,’” said Natasha Dow Schüll, a cultural anthropologist at New York University and author of a book about gambling machine design. But that, she said, is essentially what they’ve built. It’s one reason people who should know better regularly self-destruct on the site — they can’t stay away.
  • Twitter is not, obviously, the only social media platform with addictive qualities. But with its constant promise of breaking news, it feeds the hunger of people who work in journalism and politics, giving it a disproportionate, and largely negative, impact on those fields, and hence on our national life.
  • Twitter is much better at stoking tribalism than promoting progress.
  • According to a 2021 study, content expressing “out-group animosity” — negative feelings toward disfavored groups — is a major driver of social-media engagement
  • That builds on earlier research showing that on Twitter, false information, especially about politics, spreads “significantly farther, faster, deeper and more broadly than the truth.”
  • The company’s internal research has shown that Twitter’s algorithm amplifies right-wing accounts and news sources over left-wing ones.
  • This dynamic will probably intensify quite a bit if Musk takes over. Musk has said that Twitter has “a strong left bias,” and that he wants to undo permanent bans, except for spam accounts and those that explicitly call for violence. That suggests figures like Alex Jones, Steve Bannon and Marjorie Taylor Greene will be welcomed back.
  • But as one of the people who texted Musk pointed out, returning banned right-wingers to Twitter will be a “delicate game.” After all, the reason Twitter introduced stricter moderation in the first place was that its toxicity was bad for business
  • For A-list entertainers, The Washington Post reports, Twitter “is viewed as a high-risk, low-reward platform.” Plenty of non-celebrities feel the same way; I can’t count the number of interesting people who were once active on the site but aren’t anymore.
  • An influx of Trumpists is not going to improve the vibe. Twitter can’t be saved. Maybe, if we’re lucky, it can be destroyed.
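A minimal simulation of the intermittent reinforcement schedule described in the notes above (illustrative only; the item count and reward probability are made-up parameters):

```python
import random

def scroll_feed(n_items=200, reward_prob=0.08, seed=0):
    """Toy variable-ratio reinforcement schedule.

    Most items in the feed are dull; occasionally, at random, a
    'compelling nugget' appears. The list of gaps between rewards
    shows how unpredictable the payoff timing is.
    """
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(n_items):
        since_last += 1
        if rng.random() < reward_prob:   # unpredictable reward
            gaps.append(since_last)      # items scrolled since the last payoff
            since_last = 0
    return gaps

gaps = scroll_feed()
print(len(gaps), "rewards; gaps between them:", gaps)
```

Because the gaps vary unpredictably, no amount of scrolling tells the user when the next payoff will come, which is the slot-machine dynamic Skinner documented in rats and pigeons.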
Javier E

Is sanity returning to the trans debate? | The Spectator - 0 views

  • Mermaids, the UK charity for, in their own words, ‘gender variant and transgender children’, is under the spotlight. Following investigations by the Telegraph and Mail newspapers, as well as demands from critics concerned about child safeguarding, the Charity Commission has launched a regulatory compliance case and says it has written to the organisation’s trustees.
  • The investigations found that Mermaids has been offering breast binders to girls reportedly as young as 13, and despite children saying their parents opposed the practice. Binding can often cause breathing difficulties, back pain and broken ribs. It was also uncovered that kids have been ‘congratulated’ online for identifying as transgender by staff and volunteers on the charity’s online help centre, with teenagers being advised that puberty blockers are safe and ‘totally reversible’.
  • Mermaids has been given half a million pounds in total from the National Lottery, and lauded by the likes of Emma Watson, Jameela Jamil and even Harry and Meghan. In other words, the charity has had powerful supporters and been like Teflon for a very long time. Starbucks did a fundraiser for them, more than 40 schools invited them in to educate teachers and kids about ‘gender identity’, and a number of corporates sponsor the charity.
  • There is no such thing as a trans child. Mermaids passionately advocates for the availability of puberty blockers for kids, despite the growing bank of evidence that they can cause a multitude of harms. The vast majority of those prescribed blockers go on to take cross-sex hormones further down the line.
  • In the dim and distant past, lobotomies were performed on those with mental illness and psychosis, and today distressed children are being fed the line that they are trapped in the wrong body and that drugs and surgery are the solution. When and how did it become acceptable to pump kids full of harmful hormones and remove healthy body parts as opposed to offering them therapy?
  • I was in court during the cross examination of Mermaids and its supporter, and heard loud and clear its dismissal of the fact that sex is immutable. As far as Mermaids and its lackeys are concerned, all that is necessary to identify and live as the opposite sex is an inner ‘feeling’ of gender identity. The witnesses declared that trans men are men, and trans women, women. They were seemingly unconcerned when presented with the fact that there has been a 4000-plus per cent increase in girls presenting at clinics such as the Tavistock GIDS, claiming to be trans boys.
  • A recent interim report on the Tavistock GIDS recommended that it be closed down in due course, and found that much of the ‘treatment’ at the clinic was focused solely on affirming a child’s trans identity and not scrutinising related issues such as mental health issues, autistic disorders, and abuse within the family home.
  • I first contacted Mermaids in 2003, when investigating the notion of ‘trans children’ and was given the cold shoulder. Many other individuals and organisations that have grave concerns about its practices have spoken out, and as a result have been labelled bigots and transphobes. That we are now about to be validated is little comfort, bearing in mind the number of lives ruined by irreversible medical intervention on children who, if supported therapeutically, would likely have grown up to be lesbian or gay.
  • As a result of its spiteful attempt to discredit LGB Alliance, it seems the practice and ideology of Mermaids is now being exposed. In my view it is an organisation led by dangerous ideology that promotes medical intervention to kids that simply need to be supported in who they are and in the bodies they were born with. I believe it deserves to be shut down.
Javier E

Opinion | A Nobel Prize for the Economics of Panic - The New York Times - 0 views

  • Obviously, Bernanke, Diamond and Dybvig weren’t the first economists to notice that bank runs happen
  • Diamond and Dybvig provided the first really clear analysis of why they happen — and why, destructive as they are, they can represent rational behavior on the part of bank depositors. Their analysis was also full of implications for financial policy.
  • Bernanke provided evidence on why bank runs matter and, although he avoided saying so directly, why Milton Friedman was wrong about the causes of the Great Depression.
  • Diamond and Dybvig offered a stylized but insightful model of what banks do. They argued that there is always a tension between individuals’ desire for liquidity — ready access to funds — and the economy’s need to make long-term investments that can’t easily be converted into cash.
  • Banks square that circle by taking money from depositors who can withdraw their funds at will — making those deposits highly liquid — and investing most of that money in illiquid assets, such as business loans.
  • So banking is a productive activity that makes the economy richer by reconciling otherwise incompatible desires for liquidity and productive investment. And it normally works because only a fraction of a bank’s depositors want to withdraw their funds at any given time.
  • This does, however, make banks vulnerable to runs. Suppose that for some reason many depositors come to believe that many other depositors are about to cash out, and try to beat the pack by withdrawing their own funds. To meet these demands for liquidity, a bank will have to sell off its illiquid assets at fire-sale prices, and doing so can drive an institution that should be solvent into bankruptcy. (A toy numerical sketch of this mechanism appears after these notes.)
  • If that happens, people who didn’t withdraw their funds will be left with nothing. So during a panic, the rational thing to do is to panic along with everyone else.
  • There was, of course, a huge wave of banking panics in 1930-31. Many banks failed, and those that survived made far fewer business loans than before, holding cash instead, while many families shunned banks altogether, putting their cash in safes or under their mattresses. The result was a diversion of wealth into unproductive uses. In his 1983 paper, Bernanke offered evidence that this diversion played a large role in driving the economy into a depression and held back the subsequent recovery.
  • In the story told by Friedman and Anna Schwartz, the banking crisis of the early 1930s was damaging because it led to a fall in the money supply — currency plus bank deposits. Bernanke asserted that this was at most only part of the story.
  • a government backstop — either deposit insurance, the willingness of the central bank to lend money to troubled banks or both — can short-circuit potential crises.
  • But providing such a backstop raises the possibility of abuse; banks may take on undue risks because they know they’ll be bailed out if things go wrong.
  • So banks need to be regulated as well as backstopped. As I said, the Diamond-Dybvig analysis had remarkably large implications for policy.
  • From an economic point of view, banking is any form of financial intermediation that offers people seemingly liquid assets while using their wealth to make illiquid investments.
  • This insight was dramatically validated in the 2008 financial crisis.
  • By the eve of the crisis, however, the financial system relied heavily on “shadow banking” — banklike activities that didn’t involve standard bank deposits
  • Such arrangements offered a higher yield than conventional deposits. But they had no safety net, which opened the door to an old-style bank run and financial panic.
  • And the panic came. The conventionally measured money supply didn’t plunge in 2008 the way it did in the 1930s — but repo and other money-like liabilities of financial intermediaries did:
  • Fortunately, by then Bernanke was chair of the Federal Reserve. He understood what was going on, and the Fed stepped in on an immense scale to prop up the financial system.
  • a sort of meta point about the Diamond-Dybvig work: Once you’ve understood and acknowledged the possibility of self-fulfilling banking crises, you become aware that similar things can happen elsewhere.
  • Perhaps the most notable case in relatively recent times was the euro crisis of 2010-12. Market confidence in the economies of southern Europe collapsed, leading to huge spreads between the interest rates on, for example, Portuguese bonds and those on German bonds. The conventional wisdom at the time — especially in Germany — was that countries were being justifiably punished for taking on excessive debt
  • the Belgian economist Paul De Grauwe argued that what was actually happening was a self-fulfilling panic — basically a run on the bonds of countries that couldn’t provide a backstop because they no longer had their own currencies.
  • Sure enough, when Mario Draghi, the president of the European Central Bank at the time, finally did provide a backstop in 2012 — he said the magic words “whatever it takes,” implying that the bank would lend money to the troubled governments if necessary — the spreads collapsed and the crisis came to an end:
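A minimal numerical sketch of the Diamond-Dybvig mechanism summarized in the notes above (the balance-sheet proportions and the fire-sale discount are invented purely for illustration):

```python
def bank_outcome(deposits=100.0, liquid_share=0.2, fire_sale_discount=0.5,
                 withdraw_fraction=0.1):
    """Stylized Diamond-Dybvig balance sheet with made-up numbers.

    The bank keeps a small liquid buffer and lends out the rest as
    illiquid long-term loans. Ordinary withdrawals come out of the
    buffer; a run forces fire sales, and late withdrawers can lose.
    """
    liquid = deposits * liquid_share            # cash on hand
    illiquid = deposits * (1 - liquid_share)    # loans, at face value
    demanded = deposits * withdraw_fraction

    if demanded <= liquid:
        return "normal day: withdrawals paid from the buffer, loans untouched"

    shortfall = demanded - liquid
    # Loans dumped early fetch only a fraction of their face value.
    loans_sold = shortfall / fire_sale_discount
    if loans_sold <= illiquid:
        left = illiquid - loans_sold
        return (f"run absorbed, but {loans_sold:.0f} of loans were dumped; "
                f"only {left:.0f} now backs {deposits - demanded:.0f} of deposits")
    return "insolvent: depositors at the back of the line get nothing"

print(bank_outcome(withdraw_fraction=0.1))   # a quiet day
print(bank_outcome(withdraw_fraction=0.5))   # a partial run
print(bank_outcome(withdraw_fraction=0.9))   # a full-blown panic
```

Nothing about the bank's assets changes across the three calls; only the share of depositors trying to withdraw does. That is the self-fulfilling element the laureates formalized: once enough depositors expect a run, joining it becomes the individually rational choice.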
Javier E

Opinion | The Last Thatcherite - The New York Times - 0 views

  • The world has just witnessed one of the most extraordinary political immolations of recent times. Animated by faith in a fantasy version of the free market, Prime Minister Liz Truss of Britain set off a sequence of events that has forced her to fire her chancellor of the Exchequer, Kwasi Kwarteng, and led her to the brink of being ousted by her own party.
  • There’s something tragicomic, if not tragic, about capitalist revolutionaries Ms. Truss and Mr. Kwarteng being laid low by the mechanisms of capitalism itself. They may be the last of the Thatcherites, defeated by the very system they believed they were acting in fidelity to.
  • Thatcherism began in the 1970s. Defined early as the belief in “the free economy and the strong state,” Thatcherism condemned the postwar British welfare economy and sought to replace it with virtues of individual enterprise and religious morality.
  • Over the subsequent four decades, Thatcherites at think tanks like the Institute of Economic Affairs and the Centre for Policy Studies (which Margaret Thatcher helped set up) described the struggle against both the Labour Party and the broader persistence of Socialism in the Communist and non-Communist world as a “war of ideas.”
  • Thatcherites, known collectively as the ultras, gained fresh blood in the 2010s as a group of Gen Xers too young to experience Thatcherism in its insurgent early years — including the former home secretary Priti Patel, the former foreign secretary Dominic Raab, the former minister of state for universities Chris Skidmore, Mr. Kwarteng and Ms. Truss — attempted to reboot her ideology for the new millennium.
  • They followed their idol not only in her antagonism to organized labor but also in her less-known fascination with Asian capitalism. In 2012’s “Britannia Unchained,” a book co-written by the group that remains a Rosetta Stone for the policy surprises of the last month, they slammed the Britons for their eroded work ethic and “culture of excuses” and the “cosseted” public sector unions. They praised China, South Korea, Singapore and Hong Kong.
  • “Britannia Unchained” expressed a desire to go back to the future by restoring Victorian values of hard work, self-improvement and bootstrapping.
  • While the Gen X Thatcherites didn’t scrimp on data, they also saw something ineffable at the root of British malaise. “Beyond the statistics and economic theories,” they wrote, “there remains a sense in which many of Britain’s problems lie in the sphere of cultural values and mind-set.”
  • As Thatcher herself put it, “Economics are the method; the object is to change the heart and soul.” Britain needed a leap of faith to restore itself.
  • Ms. Truss and Mr. Kwarteng seemed to have believed that by patching together all of the most radical policies of Thatcherism (while conveniently dropping the need for spending cuts), they would be incanting a kind of magic spell, an “Open sesame” for “global Britain.” This was their Reagan moment, their moment when, as their favorite metaphors put it, a primordial repressed force would be “unchained,” “unleashed” or “unshackled.” But as a leap of faith, it broke the diver’s neck.
  • the money markets were not waiting for an act of faith in Laffer Curve fundamentalism after all. This was “Reaganism without the dollar.” Without the confidence afforded to the global reserve currency, the pound went into free fall.
  • Since the 1970s, the world of think tanks had embraced a framing of the world in terms of discrete spaces that could become what they called laboratories for new policies.
  • The mini-budget subjected the entire economy to experimental treatment. This was put in explicit terms in a celebratory post by a Tory journalist and think tanker claiming that Ms. Truss and Mr. Kwarteng had been “incubated” by the Institute of Economic Affairs in their early years and “Britain is now their laboratory.”
  • The scientists at the bench discovered that the money markets would not only punish left-wing experiments in changing the balance between states and markets, but they were also sensitive to experiments that pushed too far to the right. A cowed Ms. Truss apologized, and Mr. Kwarteng’s successor has reversed almost all of the planned cuts and limited the term for energy supports.
Javier E

Influencers Don't Have to Be Human to Be Believable - WSJ - 0 views

  • Why would consumers look even somewhat favorably upon virtual influencers that make comments about real products?
  • Virtual and human social-media influencers can be equally effective for certain types of posts, the research suggests.
  • The thinking is that virtual influencers can be fun and entertaining and make a brand seem innovative and tech savvy,
  •  virtual influencers can also be cost-effective and provide more flexibility than a human alternative. 
  • “When it comes to an endorsement by a virtual influencer, the followers start questioning the expertness of the influencer on the field of the endorsed product/service,” he says. “Pretending that the influencer has actual experience with the product backfires.”
  • In one part of the study, about 300 participants were shown a social-media post purported to be from an influencer about either ice cream or sunglasses. Then, roughly half were told the influencer was human and half were told she was virtual. Regardless of the product, participants perceived the virtual influencer to be less credible than its “human” counterpart. Participants who were told the influencer was virtual also had a less-positive attitude toward the brand behind the product.
  • When the influencers “can’t really use the brand they are promoting,” it’s hard to see them as trustworthy experts, says Ozdemir.
  • Two groups saw a post with an emotional endorsement where the influencer uses words like love and adore. The other two groups saw a more staid post, focusing on specific software features. In each scenario one group was told the influencer was human and one group was told the influencer was virtual.
  • For the emotional endorsement, participants found the human influencer to be more credible. Participants who were told the influencer was human also had a more positive view of the brand than those who were told the influencer was virtual.
  • For the more factual endorsement, however, there was no statistically significant difference between the two groups when it came to influencer credibility or brand perception.
  • “When it comes to delivering a more factual endorsement, highlighting features that could be found by doing an internet search, participants really didn’t seem to care if the influencer was human or not,”
Javier E

When a Shitposter Runs a Social Media Platform - The Bulwark - 0 views

  • This is an unfortunate and pernicious pattern. Musk often refers to himself as moderate or independent, but he routinely treats far-right fringe figures as people worth taking seriously—and, more troublingly, as reliable sources of information.
  • By doing so, he boosts their messages: A message retweeted by or receiving a reply from Musk will potentially be seen by millions of people.
  • Also, people who pay for Musk’s Twitter Blue badges get a lift in the algorithm when they tweet or reply; because of the way Twitter Blue became a culture war front, its subscribers tend to skew to the right.
  • The important thing to remember amid all this, and the thing that has changed the game when it comes to the free speech/content moderation conversation, is that Elon Musk himself loves conspiracy theories.
  • The media isn’t just unduly critical—a perennial sore spot for Musk—but “all news is to some degree propaganda,” meaning he won’t label actual state-affiliated propaganda outlets on his platform to distinguish their stories from those of the New York Times.
  • In his mind, they’re engaged in the same activity, so he strikes the faux-populist note that the people can decide for themselves what is true, regardless of objectively very different track records from different sources.
  • Musk’s “just asking questions” maneuver is a classic Trump tactic that enables him to advertise conspiracy theories while maintaining a sort of deniability.
  • At what point should we infer that he’s taking the concerns of someone like Loomer seriously not despite but because of her unhinged beliefs?
  • Musk’s skepticism seems largely to extend to criticism of the far-right, while his credulity for right-wing sources is boundless.
  • Brandolini’s Law holds that the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
  • Refuting bullshit requires some technological literacy, perhaps some policy knowledge, but most of all it requires time and a willingness to challenge your own prior beliefs, two things that are in precious short supply online.
  • This is part of the argument for content moderation that limits the dispersal of bullshit: People simply don’t have the time, energy, or inclination to seek out the boring truth when stimulated by some online outrage.
  • Here we can return to the example of Loomer’s tweet. People did fact-check her, but it hardly matters: Following Musk’s reply, she ended up receiving over 5 million views, an exponentially larger online readership than is normal for her. In the attention economy, this counts as a major win. “Thank you so much for posting about this, @elonmusk!” she gushed in response to his reply. “I truly appreciate it.”
  • the problem isn’t limited to elevating Loomer. Musk had his own stock of misinformation to add to the pile. After interacting with her account, Musk followed up last Tuesday by tweeting out a 2021 Federalist article claiming that Facebook founder Mark Zuckerberg had “bought” the 2020 election, an allegation previously raised by Trump and others, and which Musk had also brought up during his recent interview with Tucker Carlson.
  • If Zuckerberg wanted to use his vast fortune to tip the election, it would have been vastly more efficient to create a super PAC with targeted get-out-the-vote operations and advertising. Notwithstanding legitimate criticisms one can make about Facebook’s effect on democracy, and whatever Zuckerberg’s motivations, you have to squint hard to see this as something other than a positive act addressing a real problem.
  • It’s worth mentioning that the refutations I’ve just sketched of the conspiratorial claims made by Loomer and Musk come out to around 1,200 words. The tweets they wrote, read by millions, consisted of fewer than a hundred words in total. That’s Brandolini’s Law in action—an illustration of why Musk’s cynical free-speech-over-all approach amounts to a policy in favor of disinformation and against democracy.
  • Moderation is a subject where Zuckerberg’s actions provide a valuable point of contrast with Musk. Through Facebook’s independent oversight board, which has the power to overturn the company’s own moderation decisions, Zuckerberg has at least made an effort to have credible outside actors inform how Facebook deals with moderation issues
  • Meanwhile, we are still waiting on the content moderation council that Elon Musk promised last October:
  • The problem is about to get bigger than unhinged conspiracy theorists occasionally receiving a profile-elevating reply from Musk. Twitter is the venue that Tucker Carlson, whom advertisers fled and Fox News fired after it agreed to pay $787 million to settle a lawsuit over its election lies, has chosen to make his comeback. Carlson and Musk are natural allies: They share an obsessive anti-wokeness, a conspiratorial mindset, and an unaccountable sense of grievance peculiar to rich, famous, and powerful men who have taken it upon themselves to rail against the “elites,” however idiosyncratically construed
  • If the rumors are true that Trump is planning to return to Twitter after an exclusivity agreement with Truth Social expires in June, Musk’s social platform might be on the verge of becoming a gigantic rec room for the populist right.
  • These days, Twitter increasingly feels like a neighborhood where the amiable guy-next-door is gone and you suspect his replacement has a meth lab in the basement.
  • even if Twitter’s increasingly broken information environment doesn’t sway the results, it is profoundly damaging to our democracy that so many people have lost faith in our electoral system. The sort of claims that Musk is toying with in his feed these days do not help. It is one thing for the owner of a major source of information to be indifferent to the content that gets posted to that platform. It is vastly worse for an owner to actively fan the flames of disinformation and doubt.
Javier E

Elliot Ackerman Went From U.S. Marine to Bestselling Novelist - WSJ - 0 views

  • Years before he impressed critics with his first novel, “Green on Blue” (2015), written from the perspective of an Afghan boy, Ackerman was already, in his words, “telling stories and inhabiting the minds of others.” He explains that much of his work as a special-operations officer involved trying to grasp what his adversaries were thinking, to better anticipate how they might act
  • “Look, I really believe in stories, I believe in art, I believe that this is how we express our humanity,” he says. “You can’t understand a society without understanding the stories they tell about themselves, and how these stories are constantly changing.”
  • This, in essence, is the subject of “Halcyon,” in which a scientific breakthrough allows Robert Ableson, a World War II hero and renowned lawyer, to come back from the dead. Yet the 21st-century America he returns to feels like a different place, riven by debates over everything from Civil War monuments to workplace misconduct.
  • The novel probes how nothing in life is fixed, including the legacies of the dead and the stories we tell about our past.
  • “The study of history shouldn’t be backward looking,” explains a historian in “Halcyon.” “To matter, it has to take us forward.”
  • Ackerman was in college on Sept. 11, 2001, but what he remembers more vividly is watching the premiere of the TV miniseries “Band of Brothers” the previous Sunday. “If you wanted to know the zeitgeist in the U.S. at the time, it was this very sentimental view of World War II,” he says. “There was this nostalgia for a time where we’re the good guys, they’re the bad guys, and we’re going to liberate oppressed people.”
  • Ackerman, who also covers wars and veteran affairs as a journalist, says that America’s backing of Ukraine is essential in the face of what he calls “an authoritarian axis rising up in the world, with China, Russia and Iran.” Were the country to offer similar help to Taiwan in the face of an invasion from China, he notes, having some air bases in nearby Afghanistan would help, but the U.S. gave those up in 2021.
  • With Islamic fundamentalists now in control of places where he lost friends, he says he is often asked if he regrets his service. “When you are a young man and your country goes to war, you’re presented with a choice: You either fight or you don’t,” he writes in his 2019 memoir “Places and Names.” “I don’t regret my choice, but maybe I regret being asked to choose.”
  • Serving in the military at a time when wars are no longer generation-defining events has proven alienating for Ackerman. “When you’ve got wars with an all-volunteer military funded through deficit spending, they can go on forever because there are no political costs
  • The catastrophic withdrawal from Afghanistan in 2021, which Ackerman covers in his recent memoir “The Fifth Act,” compounded this moral injury. “The fact that there has been so little government support for our Afghan allies has left it to vets to literally clean this up,” he says, noting that he still fields requests for help on WhatsApp. He adds that unless lawmakers act, the tens of thousands of Afghans currently living in the U.S. on humanitarian parole will be sent back to Taliban-held Afghanistan later this year: “It’s very painful to see how our allies are treated.”
  • Looking back on America’s misadventures in Iraq, Afghanistan and elsewhere, he notes that “the stories we tell about war are really important to the decisions we make around war. It’s one reason why storytelling fills me with a similar sense of purpose.”
  • “We don’t talk about the world and our place in it in a holistic way, or a strategic way,” Ackerman says. “We were telling a story about ending America’s longest war, when the one we should’ve been telling was about repositioning ourselves in a world that’s becoming much more dangerous,” he adds. “Our stories sometimes get us in trouble, and we’re still dealing with that trouble today.”
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model. (A toy sketch of this feedback loop appears after these notes.)
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoghoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
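A heavily simplified sketch of the R.L.H.F. loop described in the notes above. Every class and function here is a made-up stand-in; real pipelines train a separate reward model on the human scores and then optimize the language model against it with reinforcement learning, rather than applying scores directly.

```python
import random

class ToyModel:
    """Stand-in for a language model: it picks from canned replies and keeps
    a preference weight for each. Purely illustrative."""
    def __init__(self):
        self.replies = ["Sure, here is one way to do it...",
                        "That is a stupid question.",
                        "I cannot help with that.",
                        "Great question! Step one is..."]
        self.weights = {r: 1.0 for r in self.replies}

    def sample(self, prompt):
        # Higher-weighted replies are sampled more often.
        return random.choices(self.replies,
                              weights=[self.weights[r] for r in self.replies])[0]

    def update(self, prompt, reply, reward):
        # Nudge the preference for this reply up or down.
        self.weights[reply] = max(0.1, self.weights[reply] + 0.5 * reward)

def human_score(reply):
    """Stand-in for a human rater: rude replies get low scores (1 to 5)."""
    return 1.0 if "stupid" in reply else 4.0

def rlhf_step(model, prompt, n_candidates=4):
    candidates = [model.sample(prompt) for _ in range(n_candidates)]
    for reply in candidates:
        score = human_score(reply)
        model.update(prompt, reply, reward=score - 3.0)  # centre around neutral

model = ToyModel()
for _ in range(50):
    rlhf_step(model, "How do I fix my code?")
print(model.weights)  # the rude reply's weight has been driven down
```

In the meme's terms, this loop is what paints on the smiley face: the human preference signal shapes the surface behavior, which is why some of the researchers quoted above argue that it leaves the underlying model as strange and inscrutable as before.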
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

Where We Went Wrong | Harvard Magazine - 0 views

  • John Kenneth Galbraith assessed the trajectory of America’s increasingly “affluent society.” His outlook was not a happy one. The nation’s increasingly evident material prosperity was not making its citizens any more satisfied. Nor, at least in its existing form, was it likely to do so
  • One reason, Galbraith argued, was the glaring imbalance between the opulence in consumption of private goods and the poverty, often squalor, of public services like schools and parks
  • Another was that even the bountifully supplied private goods often satisfied no genuine need, or even desire; a vast advertising apparatus generated artificial demand for them, and satisfying this demand failed to provide meaningful or lasting satisfaction.
  • ...28 more annotations...
  • economist J. Bradford DeLong ’82, Ph.D. ’87, looking back on the twentieth century two decades after its end, comes to a similar conclusion but on different grounds.
  • DeLong, professor of economics at Berkeley, looks to matters of “contingency” and “choice”: at key junctures the economy suffered “bad luck,” and the actions taken by the responsible policymakers were “incompetent.”
  • these were “the most consequential years of all humanity’s centuries.” The changes they saw, while in the first instance economic, also “shaped and transformed nearly everything sociological, political, and cultural.”
  • DeLong’s look back over the twentieth century energetically encompasses political and social trends as well; nor is his scope limited to the United States. The result is a work of strikingly expansive breadth and scope
  • labeling the book an economic history fails to convey its sweeping frame.
  • The century that is DeLong’s focus is what he calls the “long twentieth century,” running from just after the Civil War to the end of the 2000s when a series of events, including the biggest financial crisis since the 1930s followed by likewise the most severe business downturn, finally rendered the advanced Western economies “unable to resume economic growth at anything near the average pace that had been the rule since 1870.”
  • And behind those missteps in policy stood not just failures of economic thinking but a voting public that reacted perversely, even if understandably, to the frustrations poor economic outcomes had brought them.
  • Within this 140-year span, DeLong identifies two eras of “El Dorado” economic growth, each facilitated by expanding globalization, and each driven by rapid advances in technology and changes in business organization for applying technology to economic ends
  • from 1870 to World War I, and again from World War II to 1973
  • fellow economist Robert J. Gordon ’62, who in his monumental treatise on The Rise and Fall of American Economic Growth (reviewed in “How America Grew,” May-June 2016, page 68) hailed 1870-1970 as a “special century” in this regard (interrupted midway by the disaster of the 1930s).
  • Gordon highlighted the role of a cluster of once-for-all-time technological advances—the steam engine, railroads, electrification, the internal combustion engine, radio and television, powered flight
  • Pessimistic that future technological advances (most obviously, the computer and electronics revolutions) will generate productivity gains to match those of the special century, Gordon therefore saw little prospect of a return to the rapid growth of those halcyon days.
  • DeLong instead points to a series of noneconomic (and non-technological) events that slowed growth, followed by a perverse turn in economic policy triggered in part by public frustration: In 1973 the OPEC cartel tripled the price of oil, and then quadrupled it yet again six years later.
  • For all too many Americans (and citizens of other countries too), the combination of high inflation and sluggish growth meant that “social democracy was no longer delivering the rapid progress toward utopia that it had delivered in the first post-World War II generation.”
  • Frustration over these and other ills in turn spawned what DeLong calls the “neoliberal turn” in public attitudes and economic policy. The new economic policies introduced under this rubric “did not end the slowdown in productivity growth but reinforced it.
  • the tax and regulatory changes enacted in this new climate channeled most of what economic gains there were to people already at the top of the income scale
  • Meanwhile, progressive “inclusion” of women and African Americans in the economy (and in American society more broadly) meant that middle- and lower-income white men saw even smaller gains—and, perversely, reacted by providing still greater support for policies like tax cuts for those with far higher incomes than their own.
  • Daniel Bell’s argument in his 1976 classic The Cultural Contradictions of Capitalism. Bell famously suggested that the very success of a capitalist economy would eventually undermine a society’s commitment to the values and institutions that made capitalism possible in the first place.
  • In DeLong’s view, the “greatest cause” of the neoliberal turn was “the extraordinary pace of rising prosperity during the Thirty Glorious Years, which raised the bar that a political-economic order had to surpass in order to generate broad acceptance.” At the same time, “the fading memory of the Great Depression led to the fading of the belief, or rather recognition, by the middle class that they, as well as the working class, needed social insurance.”
  • what the economy delivered to “hard-working white men” no longer matched what they saw as their just deserts: in their eyes, “the rich got richer, the unworthy and minority poor got handouts.”
  • As Bell would have put it, the politics of entitlement, bred by years of economic success that so many people had come to take for granted, squeezed out the politics of opportunity and ambition, giving rise to the politics of resentment.
  • The new era therefore became “a time to question the bourgeois virtues of hard, regular work and thrift in pursuit of material abundance.”
  • DeLong’s unspoken agenda would surely include rolling back many of the changes made in the U.S. tax code over the past half-century, as well as reinvigorating antitrust policy to blunt the dominance, and therefore outsize profits, of the mega-firms that now tower over key sectors of the economy
  • He would also surely reverse the recent trend moving away from free trade. Central bankers should certainly behave like Paul Volcker (appointed by President Carter), whose decisive action finally broke the 1970s inflation even at considerable economic cost
  • Not only Galbraith’s main themes but many of his more specific observations as well seem as pertinent, and important, today as they did then.
  • What will future readers of Slouching Towards Utopia conclude?
  • If anything, DeLong’s narratives will become more valuable as those events fade into the past. Alas, his description of fascism as having at its center “a contempt for limits, especially those implied by reason-based arguments; a belief that reality could be altered by the will; and an exaltation of the violent assertion of that will as the ultimate argument” will likely strike a nerve with many Americans not just today but in years to come.
  • what about DeLong’s core explanation of what went wrong in the latter third of his, and our, “long century”? I predict that it too will still look right, and important.
Javier E

Male Stock Analysts With 'Dominant' Faces Get More Information-and Have Better Forecast... - 0 views

  • “People form impressions after extremely brief exposure to faces—within a hundred milliseconds,” says Alexander Todorov, a behavioral-science professor at the University of Chicago Booth School of Business. “They take actions based on those impressions,”
  • Under most circumstances, such quick impressions aren’t accurate and shouldn’t be trusted, he says.
  • Prof. Teoh and her fellow researchers analyzed the facial traits of nearly 800 U.S. sell-side stock financial analysts working between January 1990 and December 2017 who also had a LinkedIn profile photo as of 2018. They pulled their sample of analysts from Thomson Reuters and the firms they covered from the merged Center for Research in Security Prices and Compustat, a database of financial, statistical and market information
  • ...8 more annotations...
  • The researchers used facial-recognition software to map out specific points on a person’s face, then applied machine-learning algorithms to the facial points to obtain empirical measures for three key face impressions—trustworthiness, dominance and attractiveness.  
  • They examined the association of these impressions with the accuracy of analysts’ quarterly forecasts, drawn from the Institutional Brokers Estimate System
  • Analyst accuracy was determined by comparing each analyst’s prediction error—the difference between their prediction and the actual earnings—with that of all analysts for that same company and quarter (a rough code sketch of this comparison follows this list).
  • For an average stock valued at $100, Prof. Teoh says, analysts ranked as looking most trustworthy were 25 cents more accurate in earnings-per-share forecasts than the analysts who were ranked as looking least trustworthy
  • Similarly, most-dominant-looking analysts were 52 cents more accurate in their EPS forecast than least-dominant-looking analysts.
  • The relation between a dominant face and accuracy, meanwhile, was significant before and after the regulation was enacted, the analysts say. This suggests that dominant-looking male analysts are always able to obtain information,
  • While forecasts of female analysts regardless of facial characteristics were on average more accurate than those of their male counterparts, the forecasts of women who were seen as more-dominant-looking were significantly less accurate than their male counterparts.  
  • Says Prof. Todorov: “Women who look dominant are more likely to be viewed negatively because it goes against the cultural stereotype.”
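A minimal sketch of the relative-accuracy comparison described above, assuming a simple table of per-analyst EPS forecasts and actual results. The column names and the peer-average benchmark are illustrative assumptions, not the study's exact specification.

    # Sketch: benchmark each analyst's forecast error against all analysts
    # covering the same firm-quarter. Column names are assumed for illustration.
    import pandas as pd

    forecasts = pd.DataFrame({
        "analyst":  ["A", "B", "C", "A", "B"],
        "firm":     ["X", "X", "X", "Y", "Y"],
        "quarter":  ["2017Q4", "2017Q4", "2017Q4", "2017Q4", "2017Q4"],
        "forecast": [1.10, 1.25, 0.95, 2.00, 2.10],
        "actual":   [1.20, 1.20, 1.20, 2.05, 2.05],
    })

    # Absolute prediction error for each forecast.
    forecasts["abs_error"] = (forecasts["forecast"] - forecasts["actual"]).abs()

    # Benchmark: mean absolute error of all analysts covering the same firm-quarter.
    peer_avg = forecasts.groupby(["firm", "quarter"])["abs_error"].transform("mean")

    # Relative accuracy: positive values mean the analyst beat the peer-average error.
    forecasts["relative_accuracy"] = peer_avg - forecasts["abs_error"]

    print(forecasts[["analyst", "firm", "quarter", "relative_accuracy"]])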
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee, said.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

Among the Disrupted - The New York Times - 0 views

  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university,
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods
  • ...27 more annotations...
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • We are not becoming transhumanists, obviously. We are too singular for the Singularity. But are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”
  • Here is his conclusion: “Anytime your inquiries lead you to say, ‘At this moment we must ask and decide who we fundamentally are, our solution and salvation must lie in a new picture of ourselves and humanity, this is our profound responsibility and a new opportunity’ — just stop.” Greif seems not to realize that his own book is a lasting monument to precisely such inquiry, and to its grandeur
  • “Answer, rather, the practical matters,” he counsels, in accordance with the current pragmatist orthodoxy. “Find the immediate actions necessary to achieve an aim.” But before an aim is achieved, should it not be justified? And the activity of justification may require a “picture of ourselves.” Don’t just stop. Think harder. Get it right.
  • Greif’s book is a prehistory of our predicament, of our own “crisis of man.” (The “man” is archaic, the “crisis” is not.) It recognizes that the intellectual history of modernity may be written in part as the epic tale of a series of rebellions against humanism
  • Who has not felt superior to humanism? It is the cheapest target of all: Humanism is sentimental, flabby, bourgeois, hypocritical, complacent, middlebrow, liberal, sanctimonious, constricting and often an alibi for power
  • what is humanism? For a start, humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating
  • The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality
  • a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion
  • And posthumanism? It elects to understand the world in terms of impersonal forces and structures, and to deny the importance, and even the legitimacy, of human agency.
  • There have been humane posthumanists and there have been inhumane humanists. But the inhumanity of humanists may be refuted on the basis of their own worldview
  • the condemnation of cruelty toward “man the machine,” to borrow the old but enduring notion of an 18th-century French materialist, requires the importation of another framework of judgment. The same is true about universalism, which every critic of humanism has arraigned for its failure to live up to the promise of a perfect inclusiveness
  • there has never been a universalism that did not exclude. Yet the same is plainly the case about every particularism, which is nothing but a doctrine of exclusion; and the correction of particularism, the extension of its concept and its care, cannot be accomplished in its own name. It requires an idea from outside, an idea external to itself, a universalistic idea, a humanistic idea.
  • Asking universalism to keep faith with its own principles is a perennial activity of moral life. Asking particularism to keep faith with its own principles is asking for trouble.
  • there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life
  • Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
  • “Our very mastery seems to escape our mastery,” Michel Serres has anxiously remarked. “How can we dominate our domination; how can we master our own mastery?”
  • universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter
  • Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons
  • The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
  • Is all this — is humanism — sentimental? But sentimentality is not always a counterfeit emotion. Sometimes sentiment is warranted by reality.
  • The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence
  • a complacent humanist is a humanist who has not read his books closely, since they teach disquiet and difficulty. In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business--there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
Javier E

Opinion | Your Angry Uncle Wants to Talk About Politics. What Do You Do? - The New York... - 0 views

  • In our combined years of experience helping people talk about difficult political issues from abortion to guns to race, we’ve found most can converse productively without sacrificing their beliefs or spoiling dinner
  • It’s not merely possible to preserve your relationships while talking with folks you disagree with, but engaging respectfully will actually make you a more powerful advocate for the causes you care about.
  • The key to persuasive political dialogue is creating a safe and welcoming space for diverse views with a compassionate spirit, active listening and personal storytelling
  • ...4 more annotations...
  • Select your reply I’m more liberal, so I’ll chat with Conservative Uncle Bot. I’m more conservative, so I’ll chat with Liberal Uncle Bot.
  • Hey, it’s the Angry Uncle Bot. I have LOTS of opinions. But what kind of Uncle Bot do you want to chat with?
  • To help you cook up a holiday impeachment conversation your whole family and country will appreciate, here’s the Angry Uncle Bot for practice.
  • As Americans gather for our annual Thanksgiving feast, many are sharpening their rhetorical knives while others are preparing to bury their heads in the mashed potatoes.