
Home / History Readings / Group items matching "internet" in title, tags, annotations or url


Opinion | Elon Musk Is Buying Twitter. Shudder. - The New York Times

  • Carnegie and the steel barons elected Republican lawmakers and presidents committed to protecting their companies’ profits by levying high tariffs on foreign competitors. Mr. Musk’s companies, and his fortune, were built with billions of dollars’ worth of subsidies for his electric-car company, Tesla, and billions more in NASA contracts to ferry American astronauts into space, launch satellites and provide high-speed internet services tethered to his fleet of some 3,000 satellites.
  • What makes Mr. Musk particularly powerful and potentially more dangerous than the industrial-era moguls is his ability to promote his businesses and political notions with a tweet
  • The likely consequences of Mr. Musk’s Twitter ownership will be political as well as economic disruption.
  • It is not unreasonable to expect that a Musk-owned and controlled Twitter will, in the name of free speech, allow disinformation and misinformation to be tweeted ad infinitum so long as it discredits his political opponents and celebrates and enriches himself and his allies.
  • Mr. Musk is correct that “free speech” must be honored and protected. But is it not time that we, as a people and a nation, engage in a wide-ranging, inclusive public debate on when and how free speech creates “a clear and present danger” — as Justice Oliver Wendell Holmes Jr. wrote a century ago — and whether we need government to find a way, through law or regulation or persuasion, to prevent this from happening?
  • Elon Musk is a product of his — and our — times. Rather than debate or deride his influence, we must recognize that he is not the self-made genius businessman he plays in the media. Instead, his success was prompted and paid for by taxpayer money and abetted by government officials who have allowed him and other billionaire businessmen to exercise more and more control over our economy and our politics.

Opinion | The Crypto Collapse and the End of Magical Thinking - The New York Times

  • I have come to view cryptocurrencies not simply as exotic assets but as a manifestation of a magical thinking that had come to infect part of the generation who grew up in the aftermath of the Great Recession — and American capitalism, more broadly.
  • magical thinking is the assumption that favored conditions will continue on forever without regard for history.
  • It is the minimizing of constraints and trade-offs in favor of techno-utopianism and the exclusive emphasis on positive outcomes and novelty.
  • It is the conflation of virtue with commerce.
  • Where did this ideology come from? An exceptional period of low interest rates and excess liquidity provided the fertile soil for fantastical dreams to flourish.
  • Cryptocurrency is the most ideal vessel of these impulses. A speculative asset with a tenuous underlying predetermined value provides a blank slate that meaning can be imposed onto
  • Anger after the 2008 global financial crisis created a receptivity to radical economic solutions, and disappointment with traditional politics displaced social ambitions onto the world of commerce.
  • The hothouse of Covid’s peaks turbocharged all these impulses as we sat bored in front of screens, fueled by seemingly free money.
  • The unwinding of magical thinking will dominate this decade in painful but ultimately restorative ways — and that unwinding will be most painful to the generation conditioned to believe these fantasies.
  • Pervasive consumer-facing technology allowed individuals to believe that the latest platform company or arrogant tech entrepreneur could change everything.
  • illusory and ridiculous promises share a common anti-establishment sentiment fueled by a technology that most of us never understood. Who needs governments, banks, the traditional internet or homespun wisdom when we can operate above and beyond?
  • Mainstream financial markets came to manifest these same tendencies, as magical thinking pervaded the wider investor class. During a period of declining and zero interest rates, mistakes and mediocrities were obscured or forgiven, while speculative assets with low probabilities of far-off success inflated in value enormously.
  • For an extended period, many investors bought the equivalent of lottery tickets. And many won.
  • The real economy could not escape infection. Companies flourished by inflating their scope and ambition to feed the desire for magical thinking.
  • Most broadly, many corporations have come to embrace broader social missions in response to the desire of younger investors and employees to use their capital and employment as instruments for social change.
  • Another manifestation of magical thinking is believing that the best hope for progress on our greatest challenges — climate change, racial injustice and economic inequality — are corporations and individual investment and consumption choices, rather than political mobilization and our communities.
  • Every business problem, I am told, can be solved in radically new and effective ways by applying artificial intelligence to ever-increasing amounts of data with a dash of design thinking. Many graduates coming of age in this period of financial giddiness and widening corporate ambition have been taught to chase these glittery objects with their human and financial capital instead of investing in sustainable paths — a habit that will be harder to instill at later ages.
  • The fundamentals of business have not changed merely because of new technologies or low interest rates. The way to prosper is still by solving problems in new ways that sustainably deliver value to employees, capital providers and customers.
  • What comes next? Hopefully, a revitalization of that great American tradition of pragmatism will follow. Speculative assets without any economic function should be worth nothing. Existing institutions, flawed as they are, should be improved upon rather than being displaced
  • Corporations are valuable socially because they solve problems and generate wealth. But they should not be trusted as arbiters of progress and should be balanced by a state that mediates political questions
  • Trade-offs are everywhere and inescapable. Navigating these trade-offs, rather than ignoring them, is the recipe for a good life.

Opinion | The Alt-Right Manipulated My Comic. Then A.I. Claimed It. - The New York Times

  • Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then used its nonprofit research arm to create A.I. generators first via Stable Diffusion and then commercialized in a new model called DreamStudio.
  • What makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication.
  • Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood.
  • Greg Rutkowski, a hugely popular concept artist, has been used in a prompt for Stable Diffusion upward of 100,000 times. Now, his name is no longer attached to just his own work, but it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourself that looked similar but that was uncanny, twisted in a way to which we didn’t consent.
  • In theory, everyone is at risk for their work or image to become a vulgarity with A.I., but I suspect those who will be the most hurt are those who are already facing the consequences of improving technology, namely members of marginalized groups.
  • In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
  • I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable; the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape. I see a monster forming.

Influencers Don't Have to Be Human to Be Believable - WSJ

  • Virtual and human social-media influencers can be equally effective for certain types of posts, the research suggests.
  • Why would consumers look even somewhat favorably upon virtual influencers that make comments about real products?
  • The thinking is that virtual influencers can be fun and entertaining and make a brand seem innovative and tech savvy,
  • Virtual influencers can also be cost-effective and provide more flexibility than a human alternative.
  • Two groups saw a post with an emotional endorsement where the influencer uses words like love and adore. The other two groups saw a more staid post, focusing on specific software features. In each scenario one group was told the influencer was human and one group was told the influencer was virtual.
  • In one part of the study, about 300 participants were shown a social-media post purported to be from an influencer about either ice cream or sunglasses. Then, roughly half were told the influencer was human and half were told she was virtual. Regardless of the product, participants perceived the virtual influencer to be less credible than its “human” counterpart. Participants who were told the influencer was virtual also had a less-positive attitude toward the brand behind the product.
  • When the influencers “can’t really use the brand they are promoting,” it’s hard to see them as trustworthy experts, says Ozdemir.
  • “When it comes to an endorsement by a virtual influencer, the followers start questioning the expertness of the influencer on the field of the endorsed product/service,” he says. “Pretending that the influencer has actual experience with the product backfires.”
  • For the emotional endorsement, participants found the human influencer to be more credible. Participants who were told the influencer was human also had a more positive view of the brand than those who were told the influencer was virtual.
  • For the more factual endorsement, however, there was no statistically significant difference between the two groups when it came to influencer credibility or brand perception.
  • “When it comes to delivering a more factual endorsement, highlighting features that could be found by doing an internet search, participants really didn’t seem to care if the influencer was human or not,”

'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's d...

  • “It feels like a gold rush,” says Timnit Gebru. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”
  • something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power.
  • As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images
  • The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.
  • What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”
  • one particularly howling irony: the fact that an industry brimming with people who espouse liberal, self-consciously progressive opinions so often seems to push the world in the opposite direction.
  • Gebru began to specialise in cutting-edge AI, pioneering a system that showed how data about particular neighbourhoods’ patterns of car ownership highlighted differences bound up with ethnicity, crime figures, voting behaviour and income levels. In retrospect, this kind of work might look like the bedrock of techniques that could blur into automated surveillance and law enforcement, but Gebru admits that “none of those bells went off in my head … that connection of issues of technology with diversity and oppression came later”.
  • The next year, Gebru made a point of counting other black attenders at the same event. She found that, among 8,500 delegates, there were only six people of colour. In response, she put up a Facebook post that now seems prescient: “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”
  • When Gebru arrived, Google employees were loudly opposing the company’s role in Project Maven, which used AI to analyse surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, staff took part in a huge walkout over claims of systemic racism, sexual harassment and gender inequality. Gebru says she was aware of “a lot of tolerance of harassment and all sorts of toxic behaviour”.
  • She and her colleagues prided themselves on how diverse their small operation was, as well as the things they brought to the company’s attention, which included issues to do with Google’s ownership of YouTube
  • A colleague from Morocco raised the alarm about a popular YouTube channel in that country called Chouf TV, “which was basically operated by the government’s intelligence arm and they were using it to harass journalists and dissidents. YouTube had done nothing about it.” (Google says that it “would need to review the content to understand whether it violates our policies. But, in general, our harassment policies strictly prohibit content that threatens individuals.”)
  • in 2020, Gebru, Mitchell and two colleagues wrote the paper that would lead to Gebru’s departure. It was titled On the Dangers of Stochastic Parrots. Its key contention was about AI centred on so-called large language models: the kind of systems – such as OpenAI’s ChatGPT and Google’s newly launched PaLM 2 – that, crudely speaking, feast on vast amounts of data to perform sophisticated tasks and generate content.
  • Gebru and her co-authors had an even graver concern: that trawling the online world risks reproducing its worst aspects, from hate speech to points of view that exclude marginalised people and places. “In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further reifying inequality,” they wrote.
  • When the paper was submitted for internal review, Gebru was quickly contacted by one of Google’s vice-presidents. At first, she says, non-specific objections were expressed, such as that she and her colleagues had been too “negative” about AI. Then, Google asked Gebru either to withdraw the paper, or remove her and her colleagues’ names from it.
  • After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”
  • Running alongside this is a quest to push beyond the tendency of the tech industry and the media to focus attention on worries about AI taking over the planet and wiping out humanity while questions about what the technology does, and who it benefits and damages, remain unheard.
  • “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can aggregate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”

AI is already writing books, websites and online recipes - The Washington Post

  • Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic. From product reviews to recipes to blog posts and press releases, human authorship of online material is on track to become the exception rather than the norm.
  • Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
  • What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
  • As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face
  • “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
  • a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
  • “If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.
  • “In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,”
  • the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated.
  • The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
  • Ingenio, the San Francisco-based online publisher behind sites such as horoscope.com and astrology.com, is among those embracing automated content. While its flagship horoscopes are still human-written, the company has used OpenAI’s GPT language models to launch new sites such as sunsigns.com, which focuses on celebrities’ birth signs, and dreamdiary.com, which interprets highly specific dreams.
  • Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers
  • In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
  • It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images. For instance, a search for the American artist Edward Hopper turned up an AI image in the style of Hopper, rather than his actual art, as the first result.
  • Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
  • Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
  • Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flack in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
  • But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
  • BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
  • it’s finding traction in the murkier worlds of online clickbait and affiliate marketing, where success is less about reputation and more about gaming the big tech platforms’ algorithms.
  • That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait
  • In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
  • “Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.”
  • The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews
  • So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
  • Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
  • The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,”
  • Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content. But it’s hoping to retain those who want the assurance of a human touch, while it also trains some of its writers to become more productive by employing AI tools themselves.
  • He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
  • Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
  • For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing.“My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,”
  • It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
  • Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
  • AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site.
  • “Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,”

Ex-ByteDance Executive Accuses TikTok Parent Company of 'Lawlessness' - The New York Times

  • A former executive at ByteDance, the Chinese company that owns TikTok, has accused the technology giant of a “culture of lawlessness,” including stealing content from rival platforms Snapchat and Instagram in its early years, and called the company a “useful propaganda tool for the Chinese Communist Party.”
  • The claims were part of a wrongful dismissal suit filed on Friday by Yintao Yu, who was the head of engineering for ByteDance’s U.S. operations from August 2017 to November 2018. The complaint, filed in San Francisco Superior Court, says Mr. Yu was fired because he raised concerns about a “worldwide scheme” to steal and profit from other companies’ intellectual property.
  • Among the most striking claims in Mr. Yu’s lawsuit is that ByteDance’s offices in Beijing had a special unit of Chinese Communist Party members sometimes referred to as the Committee, which monitored the company’s apps, “guided how the company advanced core Communist values” and possessed a “death switch” that could turn off the Chinese apps entirely.
  • The video app, which is used by more than 150 million Americans, has become hugely popular for memes and entertainment. But lawmakers and U.S. officials are concerned that the app is passing sensitive information about Americans to Beijing.
  • In his complaint, Mr. Yu, 36, said that as TikTok sought to attract users in its early days, ByteDance engineers copied videos and posts from Snapchat and Instagram without permission and then posted them to the app. He also claimed that ByteDance “systematically created fabricated users” — essentially an army of bots — to boost engagement numbers, a practice that Mr. Yu said he flagged to his superiors.
  • Mr. Yu says he raised these concerns with Zhu Wenjia, who was in charge of the TikTok algorithm, but that Mr. Zhu was “dismissive” and remarked that it was “not a big deal.”
  • he also witnessed engineers for Douyin, the Chinese version of TikTok, tweak the algorithm to elevate content that expressed hatred for Japan.
  • he said that the promotion of anti-Japanese sentiments, which would make it more prominent for users, was done without hesitation.
  • “There was no debate,” he said. “They just did it.”
  • The lawsuit also accused ByteDance engineers working on Chinese apps of demoting content that expressed support for pro-democracy protests in Hong Kong, while making more prominent criticisms of the protests.
  • the lawsuit says the founder of ByteDance, Zhang Yiming, facilitated bribes to Lu Wei, a senior government official charged with internet regulation. Chinese media at the time covered the trial of Lu Wei, who was charged in 2018 and subsequently convicted of bribery, but there was no mention of who had paid the bribes.
  • Mr. Yu, who was born and raised in China and now lives in San Francisco, said in the interview that during his time with the company, American user data on TikTok was stored in the United States. But engineers in China had access to it, he said.
  • The geographic location of servers is “irrelevant,” he said, because engineers could be a continent away but still have access. During his tenure at the company, he said, certain engineers had “backdoor” access to user data.

Opinion | Zeynep Tufekci: I Was Wrong About the Power of Protest - The New York Times

  • Why did we think big protests reliably brought about social change? Because it seemed to be the case in the past
  • In the past, a truly big march was the culmination of long-term organizing, an exclamation mark at the end of a sentence, indicating prior planning and strength.
  • they didn’t just manage to hold a protest; lacking easier ways to organize, they ended up having to build organizational capacity, which then helped navigate what came after.
  • although today’s big protests look the same as those in the past, the different mechanisms that produce them — in particular, the internet and lately, especially, social media — help determine whether governments or other authorities will see them as a genuine threat or just something that can be dismissed like a focus group.
  • My optimism about the power of our protest had been colored by my inability to recognize that the rules of the game had changed with the changing environment.
  • Being on the right side of history doesn’t insulate one from weak analyses or the temptation to conflate what we collectively hoped to be true with an examination of how things really were.

'Conflict' Review: How Wars Are Fought and Won - WSJ - 0 views

  • “Conflict” brings together one of America’s top military thinkers and Britain’s pre-eminent military historian to examine the evolution of warfare since 1945. Retired Gen. David Petraeus, who co-authored the U.S. Army’s field manual on counterinsurgency warfare and oversaw the troop surge in Iraq in 2007, brings a professional eye to politico-military strategy. Andrew Roberts, who has been writing on military leadership since the early 1990s, offers an “arc of history” approach to the subject of mass destruction.
  • The pair’s ambitious goals: to provide some context to the tapestry of modern conflict and a glimpse of wars to come.
  • The book begins with the early struggles of the postwar era. China’s brutal civil war, the authors observe, demonstrated “that guerrilla warfare undertaken according to Maoist military principles by smaller forces could ultimately be successful against a Western-backed government.”
  • the authors argue that the first job of a strategic leader is to get the big ideas right. Those who have succeeded include Gerald Templer, who became Britain’s high commissioner for Malaya in 1952 and whose reference to winning “the hearts and minds of the people” “remains the most succinct explanation for how to win a counter-insurgency.”
  • By contrast, the nationalist forces in China, the French in Algeria and the Americans in Vietnam got the big ideas wrong and paid a steep price.
  • On the 2021 collapse of Afghanistan’s government troops, who had been so expensively trained and equipped under Presidents Bush, Obama, Trump and Biden, Mr. Petraeus remarks that “the troops were brave enough—the 66,000 dead Afghan soldiers killed during the war attest to that. But they fought for an often corrupt and incompetent government that never gained the trust and confidence of local communities, which had historically determined the balance of power within Afghanistan.”
  • Russia’s invasion of Ukraine in 2022 serves as the book’s case study on how badly Goliath can stumble against David
  • Elon Musk’s control of the Starlink satellite internet system, they note, gave him a unique veto power over Ukrainian operations in Crimea. “With individual tycoons such as Elon Musk, Mark Zuckerberg and Jeff Bezos wielding such extraordinary power,” the authors tell us, “wars of the future will have to take their influence into account.”
  • The final chapter teases out the contours of future conflicts. Artificial intelligence, strategic mineral monopolies and “hybrid wars”—where weapons include deepfake disinformation, political manipulation, proxy forces and cyberattacks—cap an incisive look at the next phase of warfare. “Hybrid warfare particularly appeals to China and Russia, since they are much more able to control the information their populaces receive than are their Western adversaries,”
  • And with the line between limited and total wars growing fuzzier every year, the combatant of the next war might be a woman sitting at a drone desk, a computer geek hacking into a power grid or a robotics designer refining directed-energy weapons systems.
  • “Conflict” is, in some ways, an extension of Mr. Roberts’s thesis in “The Storm of War” (2009)—that dictatorships tend to crack under the stress of a sustained war against popular democracies. While autocracies enjoy some advantages at war’s outset—they are nimble and can achieve true strategic surprise, for instance—if the sucker punch doesn’t end the fight quickly, democracies, shocked into action, may bring to bear more motivated, more efficient and often larger forces to turn the tide.
  • Both men see modern military history as a succession of partnerships created to counter violent challenges from nationalists, terrorists and dictators.

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.

Apocalypse When? Global Warming's Endless Scroll - The New York Times - 0 views

  • the climate crisis is outpacing our emotional capacity to describe it
  • I can’t say precisely when the end began, just that in the past several years, “the end of the world” stopped referring to a future cataclysmic event and started to describe our present situation
  • Across the ironized hellscape of the internet, we began “tweeting through the apocalypse” and blogging the Golden Globes ceremony “during the end times” and streaming “Emily in Paris” “at the end of the world.”
  • global warming represents the collapse of such complex systems at such an extreme scale that it overrides our emotional capacity
  • it is darkly inverted on the Instagram account @afffirmations, where new-age positive thinking buckles under the weight of generational despair, and serene stock photography collides with mantras like “I am not climate change psychosis” and “Humanity is not doomed.”
  • Often the features of our dystopia are itemized, as if we are briskly touring the concentric circles of hell — rising inequality, declining democracy, unending pandemic, the financial system optimistically described as “late” capitalism — until we have reached the inferno’s toasty center, which is the destruction of the Earth through man-made global warming.
  • This creates its own perverse flavor of climate denial: We acknowledge the science but do not truly accept it, at least not enough to urgently act.
  • This paralysis itself is almost too horrible to contemplate. As global warming cooks the Earth, it melts our brains, fries our nerves and explodes the narratives that we like to tell about humankind — even the apocalyptic ones.
  • This “end of the world” does not resemble the ends of religious prophecies or disaster films, in which the human experiment culminates in dramatic final spectacles
  • Instead we persist in an oxymoronic state, inhabiting an end that has already begun but may never actually end.

Opinion | The Reactionary Futurism of Marc Andreessen - The New York Times - 0 views

  • “I consider Mark and Elon to be role models to children in their embrace of fighting,” Andreessen writes.
  • Modern American society, at least in the big cities, is turning on law enforcement and tolerating crime, so you need combat skills to protect your loved ones. We are also fat and depressed, and learning to fight might help on both counts. In conclusion, “if it was good enough for Heracles and Theseus, it’s good enough for us.”
  • what caught my eye was the veneration of the virile aggression of the Greeks, the call to rediscover the ways of the ancients. A list of things that were good enough for the Greeks but not good enough for us would run long: Slavery, pederasty and bloodletting come to mind
  • This is what connects figures as disparate as Jordan Peterson and J.D. Vance and Peter Thiel and Donald Trump. These are the ideas that unite both the mainstream and the weirder figures of the so-called postliberal right, from Patrick Deneen to the writer Bronze Age Pervert.
  • I think the Republican Party’s collapse into incoherence reflects the fact that much of the modern right is reactionary, not conservative
  • As Paul Valéry, the French poet, once said, “Ancient Greece is the most beautiful invention of the modern age.” To treat Andreessen’s essay as an argument misses the point. It’s a vibe. And the vibe is reactionary.
  • It’s a coalition obsessed with where we went wrong: the weakness, the political correctness, the liberalism, the trigger warnings, the smug elites. It’s a coalition that believes we were once hard and have become soft; worse, we have come to lionize softness and punish hardness.
  • The story of the reactionary follows a template across time and place. It “begins with a happy, well-ordered state where people who know their place live in harmony and submit to tradition and their God,” Mark Lilla writes in his 2016 book, “The Shipwrecked Mind: On Political Reaction.”
  • He continues: Then alien ideas promoted by intellectuals — writers, journalists, professors — challenge this harmony, and the will to maintain order weakens at the top. (The betrayal of elites is the linchpin of every reactionary story.) A false consciousness soon descends on the society as a whole as it willingly, even joyfully, heads for destruction. Only those who have preserved memories of the old ways see what is happening. Whether the society reverses direction or rushes to its doom depends entirely on their resistance.
  • The Silicon Valley cohort Andreessen belongs to has added a bit to this formula. In their story, the old way that is being lost is the appetite for risk and inequality and dominance that drives technology forward and betters human life. What the muscled ancients knew and what today’s flabby whingers have forgotten is that man must cultivate the strength and will to master nature, and other men, for the technological frontier to give way
  • Now Andreessen has distilled the whole ideology to a procession of stark bullet points in his latest missive, the buzzy, bizarre “Techno-Optimist Manifesto.”
  • it’s the pairing of the reactionary’s sodden take on modern society with the futurist’s starry imagining of the bright tomorrow. So call it what it is: reactionary futurism
  • Andreessen’s argument is simple: Technology is good. Very good. Those who stand in its way are bad.
  • “The Enemy.” The list is long, ranging from “anti-greatness” to “statism” to “corruption” to “the ivory tower” to “cartels” to “bureaucracy” to “socialism” to “abstract theories” to anyone “disconnected from the real world … playing God with everyone else’s lives”
  • So who is it, exactly, who extinguishes the dancing star within the human soul?
  • Our present society has been subjected to a mass demoralization campaign for six decades — against technology and against life — under varying names like “existential risk,” “sustainability,” “E.S.G.,” “sustainable development goals,” “social responsibility,” “stakeholder capitalism,” “precautionary principle,” “trust and safety,” “tech ethics,” “risk management,” “degrowth,” “the limits of growth.”
  • The enemy, in other words, is anything or anyone who might seek to yoke technology to social goals or structures
  • For years, I’ve been arguing for politics to take technology more seriously, to see new inventions as no less necessary than social insurance and tax policy in bringing about a worthier world. Too often, we debate only how to divvy up what we already have. We have lost the habit of imagining what we could have; we are too timid in deploying the coordinated genius and muscle of society
  • I’ve been digging into the history of where and when we lost faith in technology and, more broadly, growth. At the core of that story is an inability to manage, admit or even see when technologies or policies go awry
  • The turn toward a less-is-more politics came in the 1970s, when the consequences of reckless growth became unignorable
  • Did we, in some cases, overcorrect? Absolutely. But the only reason we can even debate whether we overcorrected is because we corrected: The Clean Air Act and the Clean Water Act and a slew of other bills and regulations did exactly what they promised.
  • It is telling that Andreessen groups sustainability and degrowth into the same bucket of antagonists
  • Degrowth is largely, though not wholly, skeptical of technological solutions to our problems
  • But the politics of sustainability — as evidenced in legislation like the Inflation Reduction Act — have settled into another place entirely: a commitment to solving our hardest environmental problems by driving technology forward, by investing and deploying clean energy infrastructure at a scale unlike anything the government has done since the 1950s.
  • Andreessen focuses at some length on the nuclear future he believes we’ve been denied — but curiously ignores the stunning advances in solar and wind and battery power that public policy has delivered.
  • He yearns for a kind of person, not just a kind of technology. “We believe in ambition, aggression, persistence, relentlessness — strength,” he writes, italics included. “We believe in merit and achievement. We believe in bravery, in courage.”
  • There are ways in which these virtues have become undervalued, in which the left, in particular, has a dysfunctional relationship with individual achievement and entrepreneurial élan.
  • Andreessen’s ideas trace an odd, meme-based philosophy that has flourished in some corners of the internet known as effective accelerationism
  • “Effective accelerationism aims to follow the ‘will of the universe’: leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe,”
  • “E/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism.” OK!
  • Take Andreessen’s naming of trust and safety teams as among his enemies.
  • That, in a way, is my core disagreement with Andreessen. Reactionary futurism is accelerationist in affect but decelerationist in practice
  • How has that worked out? A new analysis by Similarweb found that traffic to twitter.com fell in the United States by 19 percent from September 2022 to September 2023 and traffic on mobile devices fell by almost 18 percent. Indications are that advertising revenue on the platform is collapsing.
  • Andreessen spends much of his manifesto venerating the version of markets that you hear in the first few weeks of Econ 101, before the professor begins complicating the picture with all those annoying market failures
  • Throughout his essay, Andreessen is at pains to attack those who might slow the development of artificial intelligence in the name of safety, but nothing would do more to freeze progress in A.I. than a disaster caused by its reckless deployment
  • It is hard to read Andreessen’s manifesto, with its chopped-up paragraphs and its blunt jabs of thought delivered for maximum engagement and polarization, and not feel that Andreessen now reflects the medium in which he has made his home: X. He doesn’t just write in the way the medium rewards. He increasingly seems to think in its house style, too.
  • One reason I left Twitter long ago is that I noticed that it was a kind of machine for destroying trust. It binds you to the like-minded but cuts you from those with whom you have even modest disagreements
  • There is a reason that Twitter’s rise was conducive to politics of revolution and reaction rather than of liberalism and conservatism. If you are there too often, seeing the side of humanity it serves up, it is easy to come to think that everything must be burned down.
  • Musk purchased Twitter (in an acquisition that Andreessen Horowitz helped finance) and gutted its trust and safety teams. The result has been a profusion of chaos, disinformation and division on his platform
  • Treating so much of society with such withering contempt will not speed up a better future. It will turn people against the politics and policies of growth, just as it did before. Trust is the most essential technology of all.

The New AI Panic - The Atlantic - 0 views

  • export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI and is in discussions now to expand the controls. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
  • If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.
  • The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.
  • Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By their determination, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT
  • The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs.
  • Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.
  • Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.
  • Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefit
  • AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.
  • For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous.
  • Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.
  • The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing
  • That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging,
  • “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.
  • The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.
  • The decision would represent an escalation against China, further destabilizing a fractured relationship
  • Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.
  • “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.
  • What’s frequently left out of considerations as well is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development.
  • Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development,
  • Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications, and was able to continue selling to China. ByteDance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery — deals that are worth roughly $5 billion, according to the Financial Times.
  • In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.
  • there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation
  • “People overestimate how much this is in the interest of these companies,”
  • “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.

AI could end independent UK news, Mail owner warns - 0 views

  • Artificial intelligence could destroy independent news organisations in Britain and potentially is an “existential threat to democracy”, the executive chairman of DMGT has warned.
  • “They have basically taken all our data, without permission and without even a consideration of the consequences. They are using it to train their models and to start producing content. They’re commercialising it,
  • AI had the potential to destroy independent news organisations “by ripping off all our content and then repurposing it to people … without any responsibility for the efficacy of that content”
  • there are huge consequences to this technology. And it’s not just the danger of ripping our industry apart, but also ripping other industries apart, all the creative industries. How many jobs are going to be lost? What’s the damage to the economy going to be if these rapacious organisations can continue to operate without any legal ramifications?
  • The danger is that these huge platforms end up in an arms race with each other. They’re like elephants fighting and then everybody else is like mice that get stamped on without them even realising the consequences of their actions.”
  • The risk was that the internet had become an echo chamber of stories produced by special interest groups and rogue states, he said.
  • Rothermere revealed that DMGT had experimented with using AI to help journalists to publish stories faster, but that it then took longer “to check the accuracy of what it comes up with” than it would have done to write the article.

What's Left for Tech? - Freddie deBoer - 0 views

  • I gave a talk to a class at Northeastern University earlier this month, concerning technology, journalism, and the cultural professions. The students were bright and inquisitive, though they also reflected the current dynamic in higher ed overall - three quarters of the students who showed up were women, and the men who were there almost all sat moodily in the back and didn’t engage at all while their female peers took notes and asked questions. I know there’s a lot of criticism of the “crisis for boys” narrative, but it’s often hard not to believe in it.
  • we’re actually living in a period of serious technological stagnation - that despite our vague assumption that we’re entitled to constant remarkable scientific progress, humanity has been living with real and valuable but decidedly small-scale technological growth for the past 50 or 60 or 70 years, after a hundred or so years of incredible growth from 1860ish to 1960ish, give or take a decade or two on either side
  • I will recommend Robert J. Gordon’s The Rise & Fall of American Growth for an exhaustive academic (and primarily economic) argument to this effect. Gordon persuasively demonstrates that from the mid-19th to mid-20th century, humanity leveraged several unique advancements that had remarkably outsized consequences for how we live and changed our basic existence in a way that never happened before and hasn’t since. Principal among these advances were the process of refining fossil fuels and using them to power all manner of devices and vehicles, the ability to harness electricity and use it to safely provide energy to homes (which practically speaking required the first development), and a revolution in medicine that came from the confluence of long-overdue acceptance of germ theory and basic hygienic principles, the discovery and refinement of antibiotics, and the modernization of vaccines.
  • The complication that Gordon and other internet-skeptical researchers like Ha-Joon Chang have introduced is to question just how meaningful those digital technologies have been for a) economic growth and b) the daily experience of human life. It can be hard for people who stare at their phones all day to consider the possibility that digital technology just isn’t that important. But ask yourself: if you were forced to live either without your iPhone or without indoor plumbing, could you really choose the latter?
  • Certainly the improvements in medical care in the past half-century feel very important to me as someone living now, and one saved life has immensely emotional and practical importance for many people. What’s more, advances in communication sciences and computer technology genuinely have been revolutionary; going from the Apple II to the iPhone in 30 years is remarkable.
  • we can always debate what constitutes major or revolutionary change
  • The question is, who in 2023 ever says to themselves “smartphone cameras just aren’t good enough”?
  • continued improvements in worldwide mortality in the past 75 years have been a matter of spreading existing treatments and practices to the developing world, rather than the result of new science.
  • When you got your first smartphone, and you thought about what the future would hold, were your first thoughts about more durable casing? I doubt it. I know mine weren’t.
  • Why is Apple going so hard on TITANIUM? Well, where else does smartphone development have to go?
  • The elephant in the room, obviously, is AI.
  • The processors will get faster. They’ll add more RAM. They’ll generally have more power. But for what? To run what? To do what? To run the games that we were once told would replace our PlayStation and Xbox games, but didn’t?
  • Smartphone development has been a good object lesson in the reality that cool ideas aren’t always practical or worthwhile
  • And as impressive as some new development in medicine has been, there’s no question that in simple terms of reducing preventable deaths, the advances seen from 1900 to 1950 dwarf those seen since.
  • We developed this technology for typewriters and terminals and desktops, it Just Works, and there’s no reason to try and “disrupt” it
  • Instead of one device to rule them all, we developed a norm of syncing across devices and cloud storage, which works well. (I always thought it was pretty funny, and very cynical, how Apple went from calling the iPhone an everything device to later marketing the iPad and iWatch.) In other words, we developed a software solution rather than a hardware one
  • I will always give it up to Google Maps and portable GPS technology; that’s genuinely life-altering, probably the best argument for smartphones as a transformative technology. But let me ask you, honestly: do you still go out looking for apps, with the assumption that you’re going to find something that really changes your life in a significant way?
  • some people are big VR partisans. I’m deeply skeptical. The brutal failures of Meta’s new “metaverse” are just one new example of a decades-long resistance to the technology among consumers
  • maybe I just don’t want VR to become popular, given the potential ugly social consequences. If you thought we had an incel problem now….
  • There was, in those breathless early days, a lot of talk about how people simply wouldn’t own laptops anymore, how your phone would do everything. But it turns out that, for one thing, the keyboard remains an input device of unparalleled convenience and versatility.
  • It’s not artificial intelligence. It thinks nothing like a human thinks. There is no reason whatsoever to believe that it has evolved sentience or consciousness. There is nothing at present that these systems can do that human beings simply can’t. But they can potentially do some things in the world of bits faster and cheaper than human beings, and that might have some meaningful consequences. But there is no reasonable, responsible claim to be made that these systems are imminent threats to conventional human life as currently lived, whether for good or for bad. IMO.
  • Let’s mutually agree to consider immediate plausible human technological progress outside of AI or “AI.” What’s coming? What’s plausible?
  • The most consequential will be our efforts to address climate change, and we have the potential to radically change how we generate electricity, although electrifying heating and transportation are going to be harder than many seem to think, while solar and wind power have greater ecological costs than people want to admit. But, yes, that’s potentially very very meaningful
  • It’s another example of how technological growth will still leave us with continuity rather than with meaningful change.
  • I kept thinking was, privatizing space… to do what? A manned Mars mission might happen in my lifetime, which is cool. But a Mars colony is a distant dream
  • This is why I say we live in the Big Normal, the Big Boring, the Forever Now. We are tragic people: we were born just too late to experience the greatest flowering of human development the world has ever seen. We do, however, enjoy the rather hefty consolation prize that we get to live with the affordances of that period, such as not dying of smallpox.
  • I think we all need to learn to appreciate what we have now, in the world as it exists, at the time in which we actually live. Frankly, I don’t think we have any other choice.

BOOM: Google Loses Antitrust Case - BIG by Matt Stoller

  • It’s a long and winding road for Epic. The firm lost the Apple case, which is on appeal, but got the Google case to a jury, along with several other plaintiffs. Nearly every other firm challenging Google gradually dropped out of the case, getting special deals from the search giant in return for abandoning their claims. But Sweeney was righteous, and believed that Google helped ruin the internet. He didn’t ask for money or a special deal, instead seeking to have Judge James Donato force Google to make good on its “broken promise,” which he characterized as “an open, competitive Android ecosystem for all users and industry participants.”
  • Specifically, Sweeney asked for the right for firms to have their own app stores, and the ability to use their own billing systems. Basically, he wants to crush Google’s control over the Android phone system. And I suspect he just did. You can read the verdict here.
  • Google is likely to be in trouble now, because it is facing multiple antitrust cases, and these kinds of decisions have a bandwagon effect. The precedent is set: in every case going forward, the firm will be presumed guilty, since a jury found Google has violated antitrust laws. Judges are cautious, and are generally afraid of being the first to make a precedent-setting decision. Now they won’t have to. In fact, judges and juries will now have to find a reason to rule for Google. If, say, Judge Amit Mehta in D.C., facing a very similar fact pattern, chooses to let Google off the hook, well, he’ll look pretty bad.
  • There are a few important take-aways. First, this one didn’t come from the government, it was a private case by a video game maker that sued Google over its terms for getting access to the Google Play app store for Android, decided not by a fancy judge with an Ivy League degree but by a jury of ordinary people in San Francisco. In other words, private litigation, the ‘ambulance-chasing’ lawyers, are vital parts of our justice system.
  • Second, juries matter, even if they are riskier for everyone involved. It’s kind of like a mini poll, and the culture is ahead of the cautious legal profession. This quick decision is a sharp contrast with the 6-month delay to an opinion in the search case that Judge Mehta sought in the D.C. trial.
  • Third, tying claims, which is a specific antitrust violation, are good law. Tying means forcing someone to buy an unrelated product in order to access the actual product they want to buy. The specific legal claim here was about how Google forced firms relying on its Google Play app store to also use its Google Play billing service, which charges an inflated price of 30% of the price of an app. Tying is pervasive throughout the economy, so you can expect more suits along these lines.
  • And finally, big tech is not above the law. This loss isn’t just the first antitrust failure for Google, it’s the first antitrust loss for any big tech firm. I hear a lot from skeptics that the fix is in, that the powerful will always win, that justice in our system is a mirage. But that just isn’t true. A jury of our peers just made that clear.

Opinion | How AI is transforming education at the University of Mississippi - The Washi...

  • Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.
  • Caleb Jackson, a 22-year-old junior at Ole Miss studying part time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.
  • What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.
  • “Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
  • Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography likely doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.
  • the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.
  • Figuring out exactly what ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.
  • ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be so long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.
  • The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.
  • This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counter-argument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.
  • These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own
  • But — here comes a human-generated counterargument — they may also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.
  • Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That’s how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted them to “get away from the high-school paper and go further, to write something larger like a thesis.”
  • maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.
  • All this invites the most important question there is: What is learning for?
  • Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.
  • If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can’t tell you how to overcome.
  • more compelling is the idea, probably shared by most college professors, that learning isn’t only instrumental after all — that it has intrinsic value and that it is the end rather than merely a means to one.
  • The more steps along the way that are skipped, the shorter the journey becomes, and the less we will take in as we travel.
  • This glummest of outlooks suggests that AI will stunt personal growth even if it doesn’t harm professional prospects.
  • While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.
  • One approach is to alter standards for grading, so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.
  • Hopefully, these things are also the most valuable because they are what make us human.
  • Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”
  • Dominic Tovar agrees. Let AI take charge of everything, and, “They’re not so much tools at that point. They’re just replacing you.”
  • Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.
  • Claire Mischker, lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”
  • The central theme of the course was empathy.

Science fiction's curious ability to predict the future | The Spectator

  • how many policy decisions have been influenced by dystopian visions? And how often did these turn out to be wise ones?
  • The 1930s policy of appeasement, for example, was based partly on an exaggerated fear that the Luftwaffe could match H.G. Wells’s Martians in destroying London.
  • science fiction has been a source of inspiration, too. When Silicon Valley began thinking about how to use the internet, they turned to writers such as William Gibson and Neal Stephenson. Today, no discussion of artificial intelligence is complete without reference to 2001: A Space Odyssey, just as nearly all conversations about robotics include a mention of Philip K. Dick’s Do Androids Dream of Electric Sheep? or the movie it inspired, Blade Runner.
  • who got the future most right? For the truth is that dystopia is now, not in some future date.
  • Science fiction provides us with a large sample of imagined discontinuities that might not occur if we only looked backwards.
  • Fahrenheit 451 (published in 1953 but set in 1999) describes an illiberal America where books are banned and the job of firemen is to burn them. (Though the novel is sometimes interpreted as a critique of McCarthyism, Bradbury’s real message was that the preference of ordinary people for the vacuous entertainment of TV and the willingness of religious minorities to demand censorship together posed a creeping threat to the book as a form for serious content.)
  • In a remarkable letter written in October 1949, Aldous Huxley — who had been Orwell’s French teacher at Eton — warned him that he was capturing his own present rather than the likely future. ‘The philosophy of the ruling minority in Nineteen Eighty-Four,’ Huxley wrote, ‘is a sadism which has been carried to its logical conclusion… Whether in actual fact the policy of the boot-on-the-face can go on indefinitely seems doubtful. My own belief is that the ruling oligarchy will find less arduous and wasteful ways of governing and of satisfying its lust for power, and these ways will resemble those which I described in Brave New World’. Huxley’s Brave New World (1932) is a very different dystopia. Citizens submit to a caste system, conditioned to be content with physical pleasure. Self-medication (‘soma’), constant entertainment (the ‘feelies’), regular holidays and ubiquitous sexual titillation are the basis for mass compliance. Censorship and propaganda play a part, but overt coercion is rarely visible. The West today seems more Huxley than Orwell: a world more of corporate distraction than state brutality.
  • Yet none of these authors truly foresaw our networked world, which has combined rising technological acceleration with a slackening of progress in other areas, such as nuclear energy, and a degeneration of governance. The real prophets are lesser-known figures, like John Brunner, whose Stand on Zanzibar (1968) is set at a time — 2010 — when population pressure has caused social division and political extremism. Despite the threat of terrorism, US corporations are booming, thanks to a supercomputer. China is America’s new rival. Europe has united. Brunner envisaged affirmative action, genetic engineering, Viagra, Detroit’s collapse, satellite TV, in-flight video, gay marriage, laser printing, electric cars, the decriminalisation of marijuana and the decline of tobacco. There’s even a progressive president (albeit of the African state of Beninia, not America) named ‘Obomi’.
  • With comparable prescience, William Gibson’s Neuromancer (1984) anticipates the world wide web and AI. Opening in the dystopian Japanese underworld of Chiba City, it imagines a global computer network in cyberspace called the ‘matrix’. Neal Stephenson’s Snow Crash (1992), which was especially popular among Facebook employees in the company’s early years, foresaw corporate overreach and virtual reality in an almost anarchic America. The state has withered away in California; everything has been privatised. Most people spend half their time in virtual reality, where their avatars have more fun than they themselves do in the real world. Meanwhile, flotillas of refugees approach via the Pacific. These cyberpunk Americas are much closer to the US in 2021 than the fascist dystopias of Lewis, Atwood or Roth.
  • Orwell and Huxley — have been outflanked when it comes to making sense of today’s totalitarian countries
  • Take China, which better resembles Yevgeny Zamyatin’s We: a book written in 1921, but suppressed by the Bolsheviks. It is set in a future ‘One State’ led by ‘the Benefactor’, where the ‘ciphers’ — who have numbers, not names, and wear standardised ‘unifs’ — are under constant surveillance. All apartments are made of glass, with curtains that can be drawn only when one is having state-licensed sex. Faced with insurrection, the omnipotent Benefactor orders the mass lobotomisation of ciphers, as the only way to preserve universal happiness is to abolish the imagination.
  • Chan Koonchung’s The Fat Years (2009) — which is banned in China. In this story, tap water is laced with drugs that render people docile, but at a cost. The month of February 2011 has been removed from public records and popular memory. This was when drastic emergency measures were introduced to stabilise the Chinese economy and assert China’s primacy in east Asia. Chan is one of a number of recent Chinese authors who have envisioned the decline of America, the corollary of China’s rise. The Fat Years is set in an imagined 2013, after a second western financial crisis makes China the world’s no. 1 economy.
  • Liu Cixin’s The Three-Body Problem (2006), a Chinese nanotechnology expert and a Beijing cop lead the global defence against an alien invasion that’s the fault of a misanthropic Chinese physicist.

At Beijing Olympics, Question of Free Speech Looms Over Athletes - The New York Times

  • As competitions began in a Winter Olympics overshadowed by controversy over China’s record on human rights, the issue of what participants can and cannot say has loomed larger than at any Olympics in years.
  • China’s Communist Party has also warned that athletes are subject not only to Olympic rules, but also to Chinese law. The warnings have been part of a crackdown in the weeks before Friday’s opening ceremony that, critics say, has had a chilling effect on dissent inside and outside the Olympic bubble.
  • Some national teams, including the United States and Canada, have warned their athletes there is potential legal jeopardy in speaking out — from both the International Olympic Committee and the Chinese judicial system.
  • Within the Olympic community, the limits of political speech have become increasingly contested, a debate that has intensified with the Games in China, which routinely ranks among the world’s most repressive in surveys on political, religious and other freedoms.
  • Political activism has surfaced at many international events, including the Tokyo Olympics last summer, but no other host nation has been as strict as China in policing political dissent.
  • In fact, protests among Olympic athletes are rare, even among those who may sympathize with human-rights causes. Most athletes are zealously focused on their sport, having devoted years of training to have the chance to compete at the highest level.
  • Beijing 2022’s organizers have pledged to honor the Olympic Charter’s spirit to allow freedom of speech. Within the “closed loop” bubbles erected around Olympic venues, the authorities have created an open internet not restricted by China’s censorship.