Home/ TOK Friends/ Group items tagged problem

anniina03

The Human Brain Evolved When Carbon Dioxide Was Lower - The Atlantic - 0 views

  • Kris Karnauskas, a professor of ocean sciences at the University of Colorado, has started walking around campus with a pocket-size carbon-dioxide detector. He’s not doing it to measure the amount of carbon pollution in the atmosphere. He’s interested in the amount of CO₂ in each room.
  • The indoor concentration of carbon dioxide concerns him—and not only for the usual reason. Karnauskas is worried that indoor CO₂ levels are getting so high that they are starting to impair human cognition.
  • Carbon dioxide, the same odorless and invisible gas that causes global warming, may be making us dumber.
  • “This is a hidden impact of climate change … that could actually impact our ability to solve the problem itself,” he said.
  • The science is, at first glance, surprisingly fundamental. Researchers have long believed that carbon dioxide harms the brain at very high concentrations. Anyone who’s seen the film Apollo 13 (or knows the real-life story behind it) may remember a moment when the mission’s three astronauts watch a gauge monitoring their cabin start to report dangerous levels of a gas. That gauge was measuring carbon dioxide. As one of the film’s NASA engineers remarks, if CO₂ levels rise too high, “you get impaired judgement, blackouts, the beginning of brain asphyxia.”
  • The same general principle, he argues, could soon affect people here on Earth. Two centuries of rampant fossil-fuel use have already spiked the amount of CO₂ in the atmosphere from about 280 parts per million before the Industrial Revolution to about 410 parts per million today. For Earth as a whole, that pollution traps heat in the atmosphere and causes climate change. But more locally, it also sets a baseline for indoor levels of carbon dioxide: You cannot ventilate a room’s carbon-dioxide levels below the global average.
  • In fact, many rooms have a much higher CO₂ level than the atmosphere, since ventilation systems don’t work perfectly.
  • On top of that, some rooms—in places such as offices, hospitals, and schools—are filled with many breathing people, that is, many people who are themselves exhaling carbon dioxide.
  • As the amount of atmospheric CO₂ keeps rising, indoor CO₂ will climb as well.
  • In one 2016 study, Danish scientists cranked up indoor carbon-dioxide levels to 3,000 parts per million—more than seven times outdoor levels today—and found that their 25 subjects suffered no cognitive impairment or health issues. Only when scientists infused that same air with other trace chemicals and organic compounds emitted by the human body did the subjects begin to struggle, reporting “headache, fatigue, sleepiness, and difficulty in thinking clearly.” The subjects also took longer to solve basic math problems. The same lab, in another study, found that indoor concentrations of pure CO₂ could get to 5,000 parts per million and still cause little difficulty, at least for college students.
  • But other research is not as optimistic. When scientists at NASA’s Johnson Space Center tested the effects of CO₂ on about two dozen “astronaut-like subjects,” they found that their advanced decision-making skills declined with CO₂ at 1,200 parts per million. But cognitive skills did not seem to worsen as CO₂ climbed past that mark, and the intensity of the effect seemed to vary from person to person.
  • There’s evidence that carbon-dioxide levels may impair only the most complex and challenging human cognitive tasks. And we still don’t know why.
  • No one has looked at the effects of indoor CO₂ on children, the elderly, or people with health problems. Likewise, studies have so far exposed people to very high carbon levels for only a few hours, leaving open the question of what days-long exposure could do.
  • Modern humans, as a species, are only about 300,000 years old, and the ambient CO₂ that we encountered for most of our evolutionary life—from the first breath of infants to the last rattle of a dying elder—was much lower than the ambient CO₂ today. I asked Gall: Has anyone looked to see if human cognition improves under lower carbon-dioxide levels? If you tested someone in a room that had only 250 parts per million of carbon dioxide—a level much closer to that of Earth’s atmosphere three centuries or three millennia ago—would their performance on tests improve? In other words, is it possible that human cognitive ability has already declined?
katherineharron

Pew Research Center finds widespread agreement about the 'made-up news' malady - CNN - 0 views

  • Survey people about a range of issues, ask which issues are a "very big problem for the country," and more Americans will cite "made-up news" than terrorism, illegal immigration, racism or sexism.
  • Of course, some point the finger primarily at President Trump while others blame irresponsible news outlets. People are using different definitions of "made-up." But the study shows a widespread awareness of what's sometimes called the War on Truth.
  • 1: Pew says "Americans blame political leaders and activists far more than journalists for the creation of made-up news but put the responsibility on the news media to fix it." Only 9% say the onus is mostly on the tech companies.
  • 2: When people bemoan made-up news, they're not just talking about politics: 61% of respondents said there's a lot of bogus content out there about entertainment and celebrities.
  • 3: "52% of Americans have shared made-up news knowingly and/or unknowingly." Almost everyone says they only found out the info was bogus after sharing.
  • 4: Here is a hopeful sign! 78% "say they have checked the facts in news stories themselves." More here...
  • The attorney representing 10 of the families who lost relatives in the Sandy Hook massacre told me that he welcomed YouTube's Wednesday action, but said it was "too late to undo the harm" that has been caused to his clients from conspiracy theories circulating on the platform over the past several years. "Sandy Hook happened now nearly seven years ago, and so during that entire time the clients were subject to hostile postings on YouTube that disseminated this false narrative and caused undue harassment, threats, and fallacies as they were trying to heal," said the attorney, Josh Koskoff. "At the same time, better late than never."
  • Moving forward, it will be interesting to see if other social media companies adopt guidelines similar to the ones YouTube announced on Wednesday regarding content that denies well-documented violent events like Sandy Hook. "All social media platforms who have not taken this step, should look in the mirror and decide whether they want to continue to facilitate harassment and hate in this day and age where that has serious consequences," Koskoff told me. And Pozner echoed that, saying that he hoped "Twitter and other hosting platforms will follow suit in implementing and enforcing more socially responsible policies."
Javier E

Understanding What's Wrong With Facebook | Talking Points Memo - 0 views

  • to really understand the problem with Facebook we need to understand the structural roots of that problem, how much of it is baked into the core architecture of the site and its very business model
  • much of it is inherent in the core strategies of the post-2000, second wave Internet tech companies that now dominate our information space and economy.
  • Facebook is an ingenious engine for information and ideational manipulation.
  • Good old fashioned advertising does that to a degree. But Facebook is much more powerful, adaptive and efficient.
  • Facebook is designed to do specific things. It’s an engine to understand people’s minds and then manipulate their thinking.
  • Those tools are refined for revenue making but can be used for many other purposes. That makes it ripe for misuse and bad acting.
  • The core of all second wave Internet commerce operations was finding network models where costs grow arithmetically and revenues grow exponentially.
  • The network and its dominance is the product and once it takes hold the cost inputs remained constrained while the revenues grow almost without limit.
  • Facebook is best understood as a fantastically profitable nuclear energy company whose profitability is based on dumping the waste on the side of the road and accepting frequent accidents and explosions as inherent to the enterprise.
  • That’s why these companies employ so few people relative to scale and profitability.
  • That’s why there’s no phone support for Google or Facebook or Twitter. If half the people on the planet are ‘customers’ or users that’s not remotely possible.
  • The core economic model requires doing all of it on the cheap. Indeed, what Zuckerberg et al. have created with Facebook is so vast that the money required not to do it on the cheap almost defies imagination.
  • Facebook’s core model and concept requires not taking responsibility for what others do with the engine created to drive revenue.
  • It all amounts to a grand exercise in socializing the externalities and keeping all the revenues for the owners.
  • Here’s a way to think about it. Nuclear power is actually incredibly cheap. The fuel is fairly plentiful and easy to pull out of the ground. You set up a little engine and it generates energy almost without limit. What makes it ruinously expensive is managing the externalities – all the risks and dangers, the radiation, accidents, the constant production of radioactive waste.
  • managing or distinguishing between legitimate and bad-acting uses of the powerful Facebook engine is one that would require huge, huge investments of money and armies of workers to manage
  • But back to Facebook. The point is that they’ve created a hugely powerful and potentially very dangerous machine
  • The core business model is based on harvesting the profits from the commercial uses of the machine and using algorithms and very, very limited personnel (relative to scale) to try to get a handle on the most outrageous and shocking abuses which the engine makes possible.
  • Zuckerberg may be a jerk and there really is a culture of bad acting within the organization. But it’s not about him being a jerk. Replace him and his team with non-jerks and you’d still have a similar core problem.
  • To manage the potential negative externalities, to take some responsibility for all the dangerous uses the engine makes possible would require money the owners are totally unwilling and in some ways are unable to spend.
sanderk

How Procrastination Affects Your Health - Thrive Global - 0 views

  • fine line between procrastination and being “pressure prompted.” If you’re like me and pressure prompted, you are someone who often does your best work when faced with a looming deadline. While being pressure prompted may entail a bit of procrastination, it is procrastination within acceptable limits. In other words, it is a set of conditions that offers just enough pressure to ensure you’re at the top of your game without devolving into chaos or, most importantly, impacting other members of your team by preventing them from delivering their best work in a timely manner.
  • Procrastination is a condition that has consequences on one’s mental and physical health and performance at school and in the workplace.
  • Piers Steel defines procrastination as “a self-regulatory failure leading to poor performance and reduced well-being.” Notably, Steel further emphasizes that procrastination is both common (80% to 90% of college-age students suffer from it at least some of the time) and something most people (95%) wish to overcome.
  • Steel even argues that procrastination may now be on the rise as people increasingly turn to the immediate gratification made possible by information technologies and specifically, social media platforms.
  • for a small percentage of people, procrastination isn’t just a temporary or occasional problem but rather something that comes to structure their lives and ultimately limit their potential.
  • In a 2008 study, Peter Gröpel & Piers Steel investigated predictors of procrastination in a large Internet-based study that included over 9,000 participants. Their results revealed two important findings. First, their results showed that goal setting reduced procrastination; second, they found that procrastination was strongly associated with a lack of energy.
  • While it is true that intrinsically motivated people may have an easier time getting into flow, anyone, even a chronic procrastinator, can cultivate flow. The first step is easy—it simply entails coming up with a clear goal.
  • The second step is to stop feeling ashamed about your procrastinating tendencies.
  •  
    This article is very interesting because it says that procrastination is not necessarily bad. Procrastination can be good for people in small quantities because it causes them to be pressured into actually doing their work. However, there is a point where procrastination becomes an issue. I find it interesting how phones and computers have caused procrastination problems to become more severe. Phones and computers can give people instant gratification, which leads to more procrastination. As the article says, if people set goals for themselves and are disciplined, they can overcome procrastination.
Javier E

On the Shortness of Life 2.0 - by Peter Juul - The Liberal Patriot - 0 views

  • Four Thousand Weeks: Time Management for Mortals, writer and regular Guardian columnist Oliver Burkeman faithfully carries the spirit of Seneca’s classic essay forward
  • It’s a deft and eclectic synthesis of ancient and modern thinking about how humanity can come to terms with our limited time on Earth – the title derives from the length of the average human lifespan – ranging intellectually from ancient Greek and Roman philosophers like Seneca to modern-day Buddhist and existentialist thinkers.
  • he only touches on politics briefly and sporadically throughout the book’s 245 pages. But those of us in politics and policy – whatever capacity we find ourselves in – can learn quite a bit
  • defined by Burkeman as “a machine for misusing your life.” Social media platforms like Twitter and Facebook don’t just distract us from more important matters, he argues, “they change how we’re defining ‘important matters’ in the first place.”
  • Social media also amounts to “a machine for getting you to care about too many things, even if they’re each indisputably worthwhile.” Hence the urge to depict every policy problem as an urgent if not existential crisis
  • social media has turned all of us into “angrier, less empathetic, more anxious or more numbed out” versions of ourselves.
  • our political and policy debates tend towards what Burkeman calls “paralyzing grandiosity” – the false notion that in the face of problems like climate change, economic inequality, and ongoing threats to democracy “only the most revolutionary, world-transforming causes are worth fighting for.” It’s a sentiment that derives from and reinforces catastrophism and absolutism
  • Four Thousand Weeks is filled to the brim with practical advice that we can easily adapt
  • Embrace “radical incrementalism.
  • we lack the patience to tolerate the fact that most of the things we want to happen won’t occur in one fell swoop.
  • We’ve got to resist the need for speed and desire for rapid resolution of problems, letting them instead take the time they take. In part, that means accepting even limited progress rather than giving up and growing cynical
  • Take a break
  • Burkeman’s advice to rest for rest’s sake, “to spend some of our time, that is, on activities in which the only thing we’re trying to get from them is the doing itself.”
  • Burkeman suggests we find some hobby we enjoy for its own sake, not because there’s some benefit we think we can derive from it.
  • When we somewhat sheepishly admit to a hobby, he writes, “that’s a sign you’re doing it for its own sake, rather than some socially sanctioned outcome.”
  • The joy we find in our hobbies can bleed into other parts of our lives as well, and if they’re more social in nature that can help build relationships unrelated to politics and policy that are necessary to make democracy work.
  • “Consolidate your caring” and think small. “To make a difference,” Burkeman argues, “you must focus your finite capacity for care.”
  • What matters is that we make things slightly better with our contributions and actions, not that we solve all the world’s problems at once.
Javier E

Is Facebook Bad for You? It Is for About 360 Million Users, Company Surveys Suggest - WSJ - 0 views

  • Facebook researchers have found that 1 in 8 of its users report engaging in compulsive use of social media that impacts their sleep, work, parenting or relationships, according to documents reviewed by The Wall Street Journal.
  • These patterns of what the company calls problematic use mirror what is popularly known as internet addiction. They were perceived by users to be worse on Facebook than any other major social-media platform
  • A Facebook team focused on user well-being suggested a range of fixes, and the company implemented some, building in optional features to encourage breaks from social media and to dial back the notifications that can serve as a lure to bring people back to the platform.
  • Facebook shut down the team in late 2019.
  • “We have a role to play, which is why we’ve built tools and controls to help people manage when and how they use our services,” she said in the statement. “Furthermore, we have a dedicated team working across our platforms to better understand these issues and ensure people are using our apps in ways that are meaningful to them.”
  • They wrote that they don’t consider the behavior to be a clinical addiction because it doesn’t affect the brain in the same way as gambling or substance abuse. In one document, they noted that “activities like shopping, sex and Facebook use, when repetitive and excessive, may cause problems for some people.”
  • In March 2020, several months after the well-being team was dissolved, researchers who had been on the team shared a slide deck internally with some of the findings and encouraged other teams to pick up the work.
  • The researchers estimated these issues affect about 12.5% of the flagship app’s more than 2.9 billion users, or more than 360 million people. About 10% of users in the U.S., one of Facebook’s most lucrative markets, exhibit this behavior
  • In the Philippines and in India, which is the company’s largest market, the employees put the figure higher, at around 25%.
  • “Why should we care?” the researchers wrote in the slide deck. “People perceive the impact. In a comparative study with competitors, people perceived lower well-being and higher problematic use on Facebook compared to any other service.”
  • Facebook’s findings are consistent with what many external researchers have observed for years,
  • said Brian Primack, a professor of public health and medicine and dean of the College of Education and Health Professions at the University of Arkansas
  • His research group followed about a thousand people over six months in a nationally representative survey and found that the amount of social media that a person used was the No. 1 predictor of the variables they measured for who became depressed.
  • In late 2017, a Facebook executive and a researcher wrote a public blog post that outlined some of the issues with social-media addiction. According to the post, the company had found that while passive consumption of social media could make you feel worse, the opposite was true of more active social-media use.
  • Inside Facebook, the researchers registered concern about the direction of Facebook’s focus on certain metrics, including the number of times a person logs into the app, which the company calls a session. “One of the worries with using sessions as a north star is we want to be extra careful not to game them by creating bad experiences for vulnerable populations,” a researcher wrote, referring to elements designed to draw people back to Facebook frequently, such as push notifications.
  • Facebook then made a switch to more heavily weigh “meaningful social interactions” in its news feed as a way to combat passive consumption. One side effect of that change, as outlined in a previous Journal article in The Facebook Files, was that the company’s algorithms rewarded content that was angry or sensational, because those posts increased engagement from users.
  • Facebook said any algorithm can promote objectionable or harmful content and that the company is doing its best to mitigate the problem.
  • “Every second that I wasn’t occupied by something I had to do I was fooling around on my phone scrolling through Facebook,” Ms. Gandy said. “Facebook took over my brain.”
  • “Actively interacting with people—especially sharing messages, posts and comments with close friends and reminiscing about past interactions—is linked to improvements in well-being,” the company said.
  • The well-being team, according to people familiar with the matter, was reshuffled at least twice since late 2017 before it was disbanded, and could get only about half of the resources the team requested to do its work.
  • In 2018, Facebook’s researchers surveyed 20,000 U.S. users and paired their answers with data about their behavior on Facebook. The researchers found about 3% of these users said they experienced “serious problems” in their sleep, work or relationships related to their time on Facebook that they found difficult to change. Some of the researchers’ work was published in a 2019 paper.
  • According to that study, the researchers also said that a liberal interpretation of the results would be that 14% of respondents spent “a lot more time on Facebook than they want to,” although they didn’t label this group problematic users.
  • In 2019, the researchers had come to a new figure: What they called problematic use affects 12.5% of people on Facebook, they said. This survey used a broader definition for the issue, including users who reported negative results on key aspects of their life as well as feelings of guilt or a loss of control, according to the documents.
  • The researchers also asked Facebook users what aspects of Facebook triggered them most. The users said the app’s many notifications sucked them in. “Red dots are toxic on the home screen,” a male young adult in the U.S. told the researchers, referring to the symbol that alerts a user to new content.
  • One entrepreneur came up with his own solution to some of these issues. In 2016, software developer Louis Barclay manually unfollowed all the people, pages and groups he saw on Facebook in an attempt to be more deliberate about how he used technology. The process, which isn’t the same as unfriending, took him days, but he was happy with the result: an empty newsfeed that no longer sucked him in for hours. He could still visit the profile pages of everyone he wanted to connect with on Facebook, but their content would no longer appear in the never-ending scroll of posts.
  • Thinking other people might benefit from a similar experience on Facebook, he built a tool that would enable anyone to automate the process. He created it as a piece of add-on software called a browser extension that anyone could download. He called it Unfollow Everything and made it available on Chrome’s web store free of charge.
  • In July, Facebook sent Mr. Barclay a cease-and-desist letter, which the inventor earlier wrote about for Slate, saying his tool was a breach of its terms of service for automating user interactions. It also permanently disabled Mr. Barclay’s personal Facebook and Instagram accounts.
  • Ms. Lever, the company spokeswoman, said Mr. Barclay’s extension could pose risks if abused, and said Facebook offers its own unfollow tool that allows users to manually unfollow accounts.
Javier E

On the Shortness of Life 2.0 - by Peter Juul - The Liberal Patriot - 0 views

  • It’s a deft and eclectic synthesis of ancient and modern thinking about how humanity can come to terms with our limited time on Earth – the title derives from the length of the average human lifespan – ranging intellectually from ancient Greek and Roman philosophers like Seneca to modern-day Buddhist and existentialist thinkers. Stuffed with valuable and practical insights on life and how we use – or misuse – it, Four Thousand Weeks is an impressive and compact volume well worth the time and attention of even the most casual readers.
  • As Burkeman notes, our preoccupation with productivity allows us to evade “the anxiety that might arise if we were to ask ourselves whether we’re on the right path.” The end result is a lot of dedicated and talented people in politics and policy burning themselves out for no discernable or meaningful purpose.
  • Then there’s social media, defined by Burkeman as “a machine for misusing your life.” Social media platforms like Twitter and Facebook don’t just distract us from more important matters, he argues, “they change how we’re defining ‘important matters’ in the first place.”
  • Social media also amounts to “a machine for getting you to care about too many things, even if they’re each indisputably worthwhile.” Hence the urge to depict every policy problem as an urgent if not existential crisis
  • social media has turned all of us into “angrier, less empathetic, more anxious or more numbed out” versions of ourselves.
  • Finally, our political and policy debates tend towards what Burkeman calls “paralyzing grandiosity” – the false notion that in the face of problems like climate change, economic inequality, and ongoing threats to democracy “only the most revolutionary, world-transforming causes are worth fighting for.” It’s a sentiment that derives from and reinforces catastrophism and absolutism as ways of thinking about politics and policy
  • That sentiment also often results in impotent impatience, which in turn leads to frustration, anger, and cynicism when things don’t turn out exactly as we’ve hoped. But it also allows us to avoid hard choices required in order to pull together the political coalitions necessary to effect actual change.
  • Four Thousand Weeks is filled to the brim with practical advice
  • Embrace “radical incrementalism.”
  • Burkeman suggests we find some hobby we enjoy for its own sake, not because there’s some benefit we think we can derive from it
  • Take a break
  • rest for rest’s sake, “to spend some of our time, that is, on activities in which the only thing we’re trying to get from them is the doing itself.”
  • we should cultivate the patience to see our goals through step-by-step over the long term. We’ve got to resist the need for speed and desire for rapid resolution of problems, letting them instead take the time they take.
  • “To make a difference,” Burkeman argues, “you must focus your finite capacity for care.”
  • “Consolidate your caring” and think small.
  • it’s perfectly fine to dedicate your time to a limited subset of issues that you care deeply about. We’re only mortal, and as Burkeman points out it’s important to “consciously pick your battles in charity, activism, and politics.”
  • our lives are just as meaningful and worthwhile if we spend our time “on, say caring for an elderly relative with dementia or volunteering at the local community garden” as they are if we’re up to our eyeballs in the minutiae of politics and policy. What matters is that we make things slightly better with our contributions and actions
  • once we give up on the illusion of perfection, Burkeman observes, we “get to roll up [our] sleeves and start work on what’s gloriously possible instead.”
Javier E

Pandemic-Era Politics Are Ruining Public Education - The Atlantic - 0 views

  • You’re also the nonvoting, perhaps unwitting, subject of adults’ latest pedagogical experiments: either relentless test prep or test abolition; quasi-religious instruction in identity-based virtue and sin; a flood of state laws to keep various books out of your hands and ideas out of your head.
  • Your parents, looking over your shoulder at your education and not liking what they see, have started showing up at school-board meetings in a mortifying state of rage. If you live in Virginia, your governor has set up a hotline where they can rat out your teachers to the government. If you live in Florida, your governor wants your parents to sue your school if it ever makes you feel “discomfort” about who you are
  • Adults keep telling you the pandemic will never end, your education is being destroyed by ideologues, digital technology is poisoning your soul, democracy is collapsing, and the planet is dying—but they’re counting on you to fix everything when you grow up.
  • It isn’t clear how the American public-school system will survive the COVID years. Teachers, whose relative pay and status have been in decline for decades, are fleeing the field. In 2021, buckling under the stresses of the pandemic, nearly 1 million people quit jobs in public education, a 40 percent increase over the previous year.
  • These kids, and the investments that come with them, may never return—the beginning of a cycle of attrition that could continue long after the pandemic ends and leave public schools even more underfunded and dilapidated than before. “It’s an open question whether the public-school system will recover,” Steiner said. “That is a real concern for democratic education.”
  • The high-profile failings of public schools during the pandemic have become a political problem for Democrats, because of their association with unions, prolonged closures, and the pedagogy of social justice, which can become a form of indoctrination.
  • The party that stands for strong government services in the name of egalitarian principles supported the closing of schools far longer than either the science or the welfare of children justified, and it has been woefully slow to acknowledge how much this damaged the life chances of some of America’s most disadvantaged students.
  • Public education is too important to be left to politicians and ideologues. Public schools still serve about 90 percent of children across red and blue America.
  • Since the common-school movement in the early 19th century, the public school has had an exalted purpose in this country. It’s our core civic institution—not just because, ideally, it brings children of all backgrounds together in a classroom, but because it prepares them for the demands and privileges of democratic citizenship. Or at least, it needs to.
  • What is school for? This is the kind of foundational question that arises when a crisis shakes the public’s faith in an essential institution. “The original thinkers about public education were concerned almost to a point of paranoia about creating self-governing citizens,”
  • “Horace Mann went to his grave having never once uttered the phrase college- and career-ready. We’ve become more accustomed to thinking about the private ends of education. We’ve completely lost the habit of thinking about education as citizen-making.”
  • School can’t just be an economic sorting system. One reason we have a stake in the education of other people’s children is that they will grow up to be citizens.
  • Public education is meant not to mirror the unexamined values of a particular family or community, but to expose children to ways that other people, some of them long dead, think.
  • If the answer were simply to push more and more kids into college, the United States would be entering its democratic prime
  • So the question isn’t just how much education, but what kind. Is it quaint, or utopian, to talk about teaching our children to be capable of governing themselves?
  • The COVID era, with Donald Trump out of office but still in power and with battles over mask mandates and critical race theory convulsing Twitter and school-board meetings, shows how ill-equipped Americans are to think about our collective problems—let alone read, listen, empathize, debate, reconsider, and persuade in the search for solutions.
  • democratic citizenship can, at least in part, be learned.
  • The history warriors build their metaphysics of national good or evil on a foundation of ignorance. In a 2019 survey, only 40 percent of Americans were able to pass the test that all applicants for U.S. citizenship must take, which asks questions like “Who did the United States fight in World War II?” and “We elect a President for how many years?” The only state in which a majority passed was Vermont.
  • The orthodoxies currently fighting for our children’s souls turn the teaching of U.S. history into a static and morally simple quest for some American essence. They proceed from celebration or indictment toward a final judgment—innocent or guilty—and bury either oppression or progress in a subordinate clause. The most depressing thing about this gloomy pedagogy of ideologies in service to fragile psyches is how much knowledge it takes away from students who already have so little.
  • A central goal for history, social-studies, and civics instruction should be to give students something more solid than spoon-fed maxims—to help them engage with the past on its own terms, not use it as a weapon in the latest front of the culture wars.
  • Releasing them to do “research” in the vast ocean of the internet without maps and compasses, as often happens, guarantees that they will drown before they arrive anywhere.
  • The truth requires a grounding in historical facts, but facts are quickly forgotten without meaning and context
  • The goal isn’t just to teach students the origins of the Civil War, but to give them the ability to read closely, think critically, evaluate sources, corroborate accounts, and back up their claims with evidence from original documents.
  • This kind of instruction, which requires teachers to distinguish between exposure and indoctrination, isn’t easy; it asks them to be more sophisticated professionals than their shabby conditions and pay (median salary: $62,000, less than accountants and transit police) suggest we are willing to support.
  • To do that, we’ll need to help kids restore at least part of their crushed attention spans.
  • staring at a screen for hours is a heavy depressant, especially for teenagers.
  • we’ll look back on the amount of time we let our children spend online with the same horror that we now feel about earlier generations of adults who hooked their kids on smoking.
  • “It’s not a choice between tech or no tech,” Bill Tally, a researcher with the Education Development Center, told me. “The question is what tech infrastructure best enables the things we care about,” such as deep engagement with instructional materials, teachers, and other students.
  • The pandemic should have forced us to reassess what really matters in public school; instead, it’s a crisis that we’ve just about wasted.
  • Like learning to read as historians, learning to sift through the tidal flood of memes for useful, reliable information can emancipate children who have been heedlessly hooked on screens by the adults in their lives
  • Finally, let’s give children a chance to read books—good books. It’s a strange feature of all the recent pedagogical innovations that they’ve resulted in the gradual disappearance of literature from many classrooms.
  • The best way to interest young people in literature is to have them read good literature, and not just books that focus with grim piety on the contemporary social and psychological problems of teenagers.
  • We sell them insultingly short in thinking that they won’t read unless the subject is themselves. Mirrors are ultimately isolating; young readers also need windows, even if the view is unfamiliar, even if it’s disturbing
  • connection through language to universal human experience and thought is the reward of great literature, a source of empathy and wisdom.
  • The culture wars, with their atmosphere of resentment, fear, and petty faultfinding, are hostile to the writing and reading of literature.
  • W. E. B. Du Bois wrote: “Nations reel and stagger on their way; they make hideous mistakes; they commit frightful wrongs; they do great and beautiful things. And shall we not best guide humanity by telling the truth about all this, so far as the truth is ascertainable?”
  • The classroom has become a half-abandoned battlefield, where grown-ups who claim to be protecting students from the virus, from books, from ideologies and counter-ideologies end up using children to protect themselves and their own entrenched camps.
  • American democracy can’t afford another generation of adults who don’t know how to talk and listen and think. We owe our COVID-scarred children the means to free themselves from the failures of the past and the present.
  • Students are leaving as well. Since 2020, nearly 1.5 million children have been removed from public schools to attend private or charter schools or be homeschooled.
  • “COVID has encouraged poor parents to question the quality of public education. We are seeing diminished numbers of children in our public schools, particularly our urban public schools.” In New York, more than 80,000 children have disappeared from city schools; in Los Angeles, more than 26,000; in Chicago, more than 24,000.
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, that the dangers will not be sudden, and that we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • The Reformers
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Why the very concept of 'general knowledge' is under attack | Times2 | The Times - 0 views

  • why has University Challenge lasted, virtually unchanged, for so long?
  • The answer may lie in a famous theory about our brains put forward by the psychologist Raymond Cattell in 1963
  • Cattell divided intelligence into two categories: fluid and crystallised. Fluid intelligence refers to basic reasoning and other mental activities that require minimal learning — just an alert and flexible brain.
  • By contrast, crystallised intelligence is based on experience and the accumulation of knowledge. Fluid intelligence peaks at the age of about 20 then gradually declines, whereas crystallised intelligence grows through your life until you hit your mid-sixties, when you start forgetting things.
  • that explains much about University Challenge’s appeal. Because the contestants are mostly aged around 20 and very clever, their fluid intelligence is off the scale
  • On the other hand, because they have had only 20 years to acquire crystallised intelligence, their store of general knowledge is likely to be lacking in some areas.
  • In each episode there will be questions that older viewers can answer, thanks to their greater store of crystallised intelligence, but the students cannot. Therefore we viewers don’t feel inferior when confronted by these smart young people. On the contrary: we feel, in some areas, slightly superior.
  • there is a real threat to the future of University Challenge and much else of value in our society, and it is this. The very concept of “general knowledge” — of a widely accepted core of information that educated, inquisitive people should have in their memory banks — is under attack from two different groups.
  • It’s a brilliantly balanced format
  • The first comprises the deconstructionists and decolonialists
  • They argue that all knowledge is contextual and that things taken for granted in the past — for instance, a canon of great authors that everyone should read at school — merely reflect an outdated, usually Eurocentric view of what’s intellectually important.
  • The other group is the technocrats who argue that the extent of human knowledge is now so vast that it’s impossible for any individual to know more than, perhaps, one billionth of it
  • So why not leave it entirely to computers to do the heavy lifting of knowledge storing and recall, thus freeing our minds for creativity and problem solving?
  • The problem with the agitators on both sides of today’s culture wars is that they are forcefully trying to shape what’s accepted as general knowledge according to a blatant political agenda.
  • And the problem with relying on, say, Wikipedia’s 6.5 million English-language articles to store general knowledge for all of us? It’s the tacit implication that “mere facts” are too tedious to be clogging up our brains. From there it’s a short step to saying that facts don’t matter at all, that everything should be decided by “feelings”. And from there it’s an even shorter step to fake news and pernicious conspiracy theories, the belittling of experts and hard evidence, the closing of minds, the thickening of prejudice and the trivialisation of the national conversation.
Javier E

Critics and Audiences Often Disagree. It's Not a Big Deal. - 0 views

  • So what’s the actual reason for the gap between audiences and critics? Simply put, it’s that audiences tend to be easier to please because they’re merely looking for movies to be entertainment while critics are trying to judge them artistically.
  • one of the things W. David Marx discusses is how art receives acclaim as art. “Invention requires ‘answering’ the works of previous artists,” Marx writes. So the creation of photography led to artists trying to “solve” the problem of a new form capable of capturing perfect representations of reality; hence the rise of cubism and abstract art
  • “There are perhaps an infinite number of potential problems in art, but to gain artist status, artists must solve the agreed-upon problems of the current moment,” he writes.
  • Another way to put this is that critics are looking for something “interesting”; audiences are merely looking to be “entertained.”
Javier E

When a Shitposter Runs a Social Media Platform - The Bulwark - 0 views

  • This is an unfortunate and pernicious pattern. Musk often refers to himself as moderate or independent, but he routinely treats far-right fringe figures as people worth taking seriously—and, more troublingly, as reliable sources of information.
  • By doing so, he boosts their messages: A message retweeted by or receiving a reply from Musk will potentially be seen by millions of people.
  • Also, people who pay for Musk’s Twitter Blue badges get a lift in the algorithm when they tweet or reply; because of the way Twitter Blue became a culture war front, its subscribers tend to skew to the right.
  • The important thing to remember amid all this, and the thing that has changed the game when it comes to the free speech/content moderation conversation, is that Elon Musk himself loves conspiracy theories.
  • The media isn’t just unduly critical—a perennial sore spot for Musk—but “all news is to some degree propaganda,” meaning he won’t label actual state-affiliated propaganda outlets on his platform to distinguish their stories from those of the New York Times.
  • In his mind, they’re engaged in the same activity, so he strikes the faux-populist note that the people can decide for themselves what is true, regardless of objectively very different track records from different sources.
  • Musk’s “just asking questions” maneuver is a classic Trump tactic that enables him to advertise conspiracy theories while maintaining a sort of deniability.
  • At what point should we infer that he’s taking the concerns of someone like Loomer seriously not despite but because of her unhinged beliefs?
  • Musk’s skepticism seems largely to extend to criticism of the far-right, while his credulity for right-wing sources is boundless.
  • This is part of the argument for content moderation that limits the dispersal of bullshit: People simply don’t have the time, energy, or inclination to seek out the boring truth when stimulated by some online outrage.
  • Refuting bullshit requires some technological literacy, perhaps some policy knowledge, but most of all it requires time and a willingness to challenge your own prior beliefs, two things that are in precious short supply online.
  • Brandolini’s Law holds that the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
  • Here we can return to the example of Loomer’s tweet. People did fact-check her, but it hardly matters: Following Musk’s reply, she ended up receiving over 5 million views, an exponentially larger online readership than is normal for her. In the attention economy, this counts as a major win. “Thank you so much for posting about this, @elonmusk!” she gushed in response to his reply. “I truly appreciate it.”
  • the problem isn’t limited to elevating Loomer. Musk had his own stock of misinformation to add to the pile. After interacting with her account, Musk followed up last Tuesday by tweeting out a 2021 Federalist article claiming that Facebook founder Mark Zuckerberg had “bought” the 2020 election, an allegation previously raised by Trump and others, and which Musk had also brought up during his recent interview with Tucker Carlson.
  • If Zuckerberg wanted to use his vast fortune to tip the election, it would have been vastly more efficient to create a super PAC with targeted get-out-the-vote operations and advertising. Notwithstanding legitimate criticisms one can make about Facebook’s effect on democracy, and whatever Zuckerberg’s motivations, you have to squint hard to see this as something other than a positive act addressing a real problem.
  • It’s worth mentioning that the refutations I’ve just sketched of the conspiratorial claims made by Loomer and Musk come out to around 1,200 words. The tweets they wrote, read by millions, consisted of fewer than a hundred words in total. That’s Brandolini’s Law in action—an illustration of why Musk’s cynical free-speech-over-all approach amounts to a policy in favor of disinformation and against democracy.
  • Moderation is a subject where Zuckerberg’s actions provide a valuable point of contrast with Musk. Through Facebook’s independent oversight board, which has the power to overturn the company’s own moderation decisions, Zuckerberg has at least made an effort to have credible outside actors inform how Facebook deals with moderation issues
  • Meanwhile, we are still waiting on the content moderation council that Elon Musk promised last October.
  • The problem is about to get bigger than unhinged conspiracy theorists occasionally receiving a profile-elevating reply from Musk. Twitter is the venue that Tucker Carlson, whom advertisers fled and Fox News fired after it agreed to pay $787 million to settle a lawsuit over its election lies, has chosen to make his comeback. Carlson and Musk are natural allies: They share an obsessive anti-wokeness, a conspiratorial mindset, and an unaccountable sense of grievance peculiar to rich, famous, and powerful men who have taken it upon themselves to rail against the “elites,” however idiosyncratically construed
  • If the rumors are true that Trump is planning to return to Twitter after an exclusivity agreement with Truth Social expires in June, Musk’s social platform might be on the verge of becoming a gigantic rec room for the populist right.
  • These days, Twitter increasingly feels like a neighborhood where the amiable guy-next-door is gone and you suspect his replacement has a meth lab in the basement.
  • even if Twitter’s increasingly broken information environment doesn’t sway the results, it is profoundly damaging to our democracy that so many people have lost faith in our electoral system. The sort of claims that Musk is toying with in his feed these days do not help. It is one thing for the owner of a major source of information to be indifferent to the content that gets posted to that platform. It is vastly worse for an owner to actively fan the flames of disinformation and doubt.
Javier E

Elon Musk Is Not Playing Four-Dimensional Chess - 0 views

  • Musk is not wrong that Twitter is chock-full of noise and garbage, but the most pernicious stuff comes from real people and a media ecosystem that amplifies and rewards incendiary bullshit
  • This dynamic is far more of a problem for Twitter (but also the news media and the internet in general) than shadowy bot farms are. But it’s also a dilemma without much of a concrete solution
  • Were Musk actually curious or concerned with the health of the online public discourse, he might care about the ways that social media platforms like Twitter incentivize this behavior and create an information economy where our sense of proportion on a topic can be so easily warped. But Musk isn’t interested in this stuff, in part because he is a huge beneficiary of our broken information environment and can use it to his advantage to remain constantly in the spotlight.
  • Musk’s concern with bots isn’t only a bullshit tactic he’s using to snake out of a bad business deal and/or get a better price for Twitter; it’s also a great example of his shallow thinking. The man has at least some ability to oversee complex engineering systems that land rockets, but his narcissism affords him a two-dimensional understanding of the way information travels across social media.
  • He is drawn to the conspiratorial nature of bots and information manipulation, because it is a more exciting and easier-to-understand solution to more complex or uncomfortable problems. Instead of facing the reality that many people dislike him as a result of his personality, behavior, politics, or shitty management style, he blames bots. Rather than try to understand the gnarly mechanics and hard-to-solve problems of democratized speech, he sorts them into overly simplified boxes like censorship and spam and then casts himself as the crusading hero who can fix it all. But he can’t and won’t, because he doesn’t care enough to find the answers.
  • Musk isn’t playing chess or even checkers. He’s just the richest man in the world, bored, mad, and posting like your great-uncle.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
Javier E

Google's Relationship With Facts Is Getting Wobblier - The Atlantic - 0 views

  • Misinformation or even disinformation in search results was already a problem before generative AI. Back in 2017, The Outline noted that a snippet once confidently asserted that Barack Obama was the king of America.
  • This is what experts have worried about since ChatGPT first launched: false information confidently presented as fact, without any indication that it could be totally wrong. The problem is “the way things are presented to the user, which is Here’s the answer,” Chirag Shah, a professor of information and computer science at the University of Washington, told me. “You don’t need to follow the sources. We’re just going to give you the snippet that would answer your question. But what if that snippet is taken out of context?”
  • Responding to the notion that Google is incentivized to prevent users from navigating away, he added that “we have no desire to keep people on Google.
  • ...15 more annotations...
  • Pandu Nayak, a vice president for search who leads the company’s search-quality teams, told me that snippets are designed to be helpful to the user, to surface relevant and high-caliber results. He argued that they are “usually an invitation to learn more” about a subject
  • “It’s a strange world where these massive companies think they’re just going to slap this generative slop at the top of search results and expect that they’re going to maintain quality of the experience,” Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern University, told me. “I’ve caught myself starting to read the generative results, and then I stop myself halfway through. I’m like, Wait, Nick. You can’t trust this.”
  • Nayak said the team focuses on the bigger underlying problem, and whether its algorithm can be trained to address it.
  • If Nayak is right, and people do still follow links even when presented with a snippet, anyone who wants to gain clicks or money through search has an incentive to capitalize on that—perhaps even by flooding the zone with AI-written content.
  • Nayak told me that Google plans to fight AI-generated spam as aggressively as it fights regular spam, and claimed that the company keeps about 99 percent of spam out of search results.
  • The result is a world that feels more confused, not less, as a result of new technology.
  • The Kenya result still pops up on Google, despite viral posts about it. This is a strategic choice, not an error. If a snippet violates Google policy (for example, if it includes hate speech) the company manually intervenes and suppresses it, Nayak said. However, if the snippet is untrue but doesn’t violate any policy or cause harm, the company will not intervene.
  • experts I spoke with had several ideas for how tech companies might mitigate the potential harms of relying on AI in search
  • For starters, tech companies could become more transparent about generative AI. Diakopoulos suggested that they could publish information about the quality of facts provided when people ask questions about important topics
  • They can use a coding technique known as “retrieval-augmented generation,” or RAG, which instructs the bot to cross-check its answer with what is published elsewhere, essentially helping it self-fact-check. (A spokesperson for Google said the company uses similar techniques to improve its output.) They could open up their tools to researchers to stress-test it. Or they could add more human oversight to their outputs, maybe investing in fact-checking efforts.
  • Fact-checking, however, is a fraught proposition. In January, Google’s parent company, Alphabet, laid off roughly 6 percent of its workers, and last month, the company cut at least 40 jobs in its Google News division. This is the team that, in the past, has worked with professional fact-checking organizations to add fact-checks into search results
  • Alex Heath, at The Verge, reported that top leaders were among those laid off, and Google declined to give me more information. It certainly suggests that Google is not investing more in its fact-checking partnerships as it builds its generative-AI tool.
  • Nayak acknowledged how daunting a task human-based fact-checking is for a platform of Google’s extraordinary scale. Fifteen percent of daily searches are ones the search engine hasn’t seen before, Nayak told me. “With this kind of scale and this kind of novelty, there’s no sense in which we can manually curate results.”
  • Creating an infinite, largely automated, and still accurate encyclopedia seems impossible. And yet that seems to be the strategic direction Google is taking.
  • A representative for Google told me that this was an example of a “false premise” search, a type that is known to trip up the algorithm. If she were trying to date me, she argued, she wouldn’t just stop at the AI-generated response given by the search engine, but would click the link to fact-check it.
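The “retrieval-augmented generation” technique mentioned in the highlights above can be sketched in miniature: before trusting a generated claim, retrieve the most relevant documents and check whether the claim is actually supported by any of them. The corpus, the keyword-overlap scoring, and the support threshold below are illustrative assumptions for the sketch, not Google’s actual implementation.

```python
def tokenize(text):
    """Split text into a set of lowercase words (naive, punctuation kept)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by keyword overlap with the query; return the top k."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokenize(query) & tokenize(doc)),
                    reverse=True)
    return scored[:k]

def is_supported(answer, corpus, threshold=0.5):
    """Treat an answer as 'supported' if enough of its words appear
    in at least one retrieved document."""
    answer_words = tokenize(answer)
    for doc in retrieve(answer, corpus):
        overlap = len(answer_words & tokenize(doc)) / len(answer_words)
        if overlap >= threshold:
            return True
    return False

# A toy reference corpus (hypothetical documents, for illustration only).
corpus = [
    "Kenya's 2010 constitution establishes a presidential system of government.",
    "The United States held a presidential election in 2020.",
]

print(is_supported("Kenya has a presidential system of government", corpus))
print(is_supported("Barack Obama is the king of America", corpus))
```

A claim like the “Obama was the king of America” snippet from the first highlight fails the check because no retrieved document supports it; a real system would use semantic embeddings and a language model rather than word overlap, but the cross-checking structure is the same.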
Javier E

Why the Past 10 Years of American Life Have Been Uniquely Stupid - The Atlantic - 0 views

  • Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories.
  • Social media has weakened all three.
  • gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.
  • ...118 more annotations...
  • the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.
  • Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom
  • That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers.
  • “Like” and “Share” buttons quickly became standard features of most other platforms.
  • Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well.
  • Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.
  • By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous”
  • If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.
  • This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment,
  • As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.
  • It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution.
  • The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles’ heel because it depended on the collective judgment of the people, and democratic communities are subject to “the turbulency and weakness of unruly passions.”
  • The key to designing a sustainable republic, therefore, was to build in mechanisms to slow things down, cool passions, require compromise, and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically, on Election Day.
  • The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madison’s nightmare.
  • a less quoted yet equally important insight, about democracy’s vulnerability to triviality.
  • Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”
  • Social media has both magnified and weaponized the frivolous.
  • It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust.
  • a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.
  • when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side
  • The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia).
  • The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.
  • When people lose trust in institutions, they lose trust in the stories told by those institutions. That’s particularly true of the institutions entrusted with the education of children.
  • Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their children’s history lessons––and math lessons and literature selections, and any new pedagogical shifts anywhere in the country
  • The motives of teachers and administrators come into question, and overreaching laws or curricular reforms sometimes follow, dumbing down education and reducing trust in it further.
  • young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people, and less likely to share any such story with those who attended different schools or who were educated in a different decade.
  • former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached.
  • he notes a constructive feature of the pre-digital era: a single “mass audience,” all consuming the same content, as if they were all looking into the same gigantic mirror at the reflection of their own society.
  • The digital revolution has shattered that mirror, and now the public inhabits those broken pieces of glass. So the public isn’t one thing; it’s highly fragmented, and it’s basically mutually hostile
  • Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.
  • I think we can date the fall of the tower to the years between 2011 (Gurri’s focal year of “nihilistic” protests) and 2015, a year marked by the “great awokening” on the left and the ascendancy of Donald Trump on the right.
  • Twitter can overpower all the newspapers in the country, and stories cannot be shared (or at least trusted) across more than a few adjacent fragments—so truth cannot achieve widespread adherence.
  • After Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.
  • Politics After Babel
  • “Politics is the art of the possible,” the German statesman Otto von Bismarck said in 1867. In a post-Babel democracy, not much may be possible.
  • The ideological distance between the two parties began increasing faster in the 1990s. Fox News and the 1994 “Republican Revolution” converted the GOP into a more combative party.
  • So cross-party relationships were already strained before 2009. But the enhanced virality of social media thereafter made it more hazardous to be seen fraternizing with the enemy or even failing to attack the enemy with sufficient vigor.
  • What changed in the 2010s? Let’s revisit that Twitter engineer’s metaphor of handing a loaded gun to a 4-year-old. A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet
  • from 2009 to 2012, Facebook and Twitter passed out roughly 1 billion dart guns globally. We’ve been shooting one another ever since.
  • “devoted conservatives,” comprised 6 percent of the U.S. population.
  • the warped “accountability” of social media has also brought injustice—and political dysfunction—in three ways.
  • First, the dart guns of social media give more power to trolls and provocateurs while silencing good citizens.
  • a small subset of people on social-media platforms are highly concerned with gaining status and are willing to use aggression to do so.
  • Across eight studies, Bor and Petersen found that being online did not make most people more aggressive or hostile; rather, it allowed a small number of aggressive people to attack a much larger set of victims. Even a small number of jerks were able to dominate discussion forums,
  • Additional research finds that women and Black people are harassed disproportionately, so the digital public square is less welcoming to their voices.
  • Second, the dart guns of social media give more power and voice to the political extremes while reducing the power and voice of the moderate majority.
  • The “Hidden Tribes” study, by the pro-democracy group More in Common, surveyed 8,000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors.
  • Social media has given voice to some people who had little previously, and it has made it easier to hold powerful people accountable for their misdeeds
  • The group furthest to the left, the “progressive activists,” comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed, at 56 percent.
  • These two extreme groups are similar in surprising ways. They are the whitest and richest of the seven groups, which suggests that America is being torn apart by a battle between two subsets of the elite who are not representative of the broader society.
  • they are the two groups that show the greatest homogeneity in their moral and political attitudes.
  • likely a result of thought-policing on social media:
  • political extremists don’t just shoot darts at their enemies; they spend a lot of their ammunition targeting dissenters or nuanced thinkers on their own team.
  • Finally, by giving everyone a dart gun, social media deputizes everyone to administer justice with no due process. Platforms like Twitter devolve into the Wild West, with no accountability for vigilantes.
  • Enhanced-virality platforms thereby facilitate massive collective punishment for small or imagined offenses, with real-world consequences, including innocent people losing their jobs and being shamed into suicide
  • we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.
  • Since the tower fell, debates of all kinds have grown more and more confused. The most pervasive obstacle to good thinking is confirmation bias, which refers to the human tendency to search only for evidence that confirms our preferred beliefs
  • search engines were supercharging confirmation bias, making it far easier for people to find evidence for absurd beliefs and conspiracy theories
  • The most reliable cure for confirmation bias is interaction with people who don’t share your beliefs. They confront you with counterevidence and counterargument.
  • In his book The Constitution of Knowledge, Jonathan Rauch describes the historical breakthrough in which Western societies developed an “epistemic operating system”—that is, a set of institutions for generating knowledge from the interactions of biased and cognitively flawed individuals
  • English law developed the adversarial system so that biased advocates could present both sides of a case to an impartial jury.
  • Newspapers full of lies evolved into professional journalistic enterprises, with norms that required seeking out multiple sides of a story, followed by editorial review, followed by fact-checking.
  • Universities evolved from cloistered medieval institutions into research powerhouses, creating a structure in which scholars put forth evidence-backed claims with the knowledge that other scholars around the world would be motivated to gain prestige by finding contrary evidence.
  • Part of America’s greatness in the 20th century came from having developed the most capable, vibrant, and productive network of knowledge-producing institutions in all of human history
  • But this arrangement, Rauch notes, “is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings, and those need to be understood, affirmed, and protected.”
  • This, I believe, is what happened to many of America’s key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted
  • it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight
  • Participants in our key institutions began self-censoring to an unhealthy degree, holding back critiques of policies and ideas—even those presented in class by their students—that they believed to be ill-supported or wrong.
  • The stupefying process plays out differently on the right and the left because their activist wings subscribe to different narratives with different sacred values.
  • The “Hidden Tribes” study tells us that the “devoted conservatives” score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors.
  • they are psychologically different from the larger group of “traditional conservatives” (19 percent of the population), who emphasize order, decorum, and slow rather than radical change.
  • The traditional punishment for treason is death, hence the battle cry on January 6: “Hang Mike Pence.”
  • Right-wing death threats, many delivered by anonymous accounts, are proving effective in cowing traditional conservatives
  • The wave of threats delivered to dissenting Republican members of Congress has similarly pushed many of the remaining moderates to quit or go silent, giving us a party ever more divorced from the conservative tradition, constitutional responsibility, and reality.
  • The stupidity on the right is most visible in the many conspiracy theories spreading across right-wing media and now into Congress.
  • The Democrats have also been hit hard by structural stupidity, though in a different way. In the Democratic Party, the struggle between the progressive wing and the more moderate factions is open and ongoing, and often the moderates win.
  • The problem is that the left controls the commanding heights of the culture: universities, news organizations, Hollywood, art museums, advertising, much of Silicon Valley, and the teachers’ unions and teaching colleges that shape K–12 education. And in many of those institutions, dissent has been stifled:
  • Liberals in the late 20th century shared a belief that the sociologist Christian Smith called the “liberal progress” narrative, in which America used to be horrifically unjust and repressive, but, thanks to the struggles of activists and heroes, has made (and continues to make) progress toward realizing the noble promise of its founding.
  • It is also the view of the “traditional liberals” in the “Hidden Tribes” study (11 percent of the population), who have strong humanitarian values, are older than average, and are largely the people leading America’s cultural and intellectual institutions.
  • when the newly viralized social-media platforms gave everyone a dart gun, it was younger progressive activists who did the most shooting, and they aimed a disproportionate number of their darts at these older liberal leaders.
  • Confused and fearful, the leaders rarely challenged the activists or their nonliberal narrative in which life at every institution is an eternal battle among identity groups over a zero-sum pie, and the people on top got there by oppressing the people on the bottom. This new narrative is rigidly egalitarian––focused on equality of outcomes, not of rights or opportunities. It is unconcerned with individual rights.
  • The universal charge against people who disagree with this narrative is not “traitor”; it is “racist,” “transphobe,” “Karen,” or some related scarlet letter marking the perpetrator as one who hates or harms a marginalized group.
  • The punishment that feels right for such crimes is not execution; it is public shaming and social death.
  • anyone on Twitter had already seen dozens of examples teaching the basic lesson: Don’t question your own side’s beliefs, policies, or actions. And when traditional liberals go silent, as so many did in the summer of 2020, the progressive activists’ more radical narrative takes over as the governing narrative of an organization.
  • This is why so many epistemic institutions seemed to “go woke” in rapid succession that year and the next, beginning with a wave of controversies and resignations at The New York Times and other newspapers, and continuing on to social-justice pronouncements by groups of doctors and medical associations
  • The problem is structural. Thanks to enhanced-virality social media, dissent is punished within many of our institutions, which means that bad ideas get elevated into official policy.
  • In a 2018 interview, Steve Bannon, the former adviser to Donald Trump, said that the way to deal with the media is “to flood the zone with shit.” He was describing the “firehose of falsehood” tactic pioneered by Russian disinformation programs to keep Americans confused, disoriented, and angry.
  • artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence.
  • Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)
  • American factions won’t be the only ones using AI and social media to generate attack content; our adversaries will too.
  • In the 20th century, America’s shared identity as the country leading the fight to make the world safe for democracy was a strong force that helped keep the culture and the polity together.
  • In the 21st century, America’s tech companies have rewired the world and created products that now appear to be corrosive to democracy, obstacles to shared understanding, and destroyers of the modern tower.
  • What changes are needed?
  • I can suggest three categories of reforms––three goals that must be achieved if democracy is to remain viable in the post-Babel era.
  • We must harden democratic institutions so that they can withstand chronic anger and mistrust, reform social media so that it becomes less socially corrosive, and better prepare the next generation for democratic citizenship in this new age.
  • Harden Democratic Institutions
  • we must reform key institutions so that they can continue to function even if levels of anger, misinformation, and violence increase far above those we have today.
  • Reforms should reduce the outsize influence of angry extremists and make legislators more responsive to the average voter in their district.
  • One example of such a reform is to end closed party primaries, replacing them with a single, nonpartisan, open primary from which the top several candidates advance to a general election that also uses ranked-choice voting
  • A second way to harden democratic institutions is to reduce the power of either political party to game the system in its favor, for example by drawing its preferred electoral districts or selecting the officials who will supervise elections
  • These jobs should all be done in a nonpartisan way.
  • Reform Social Media
  • Social media’s empowerment of the far left, the far right, domestic trolls, and foreign agents is creating a system that looks less like democracy and more like rule by the most aggressive.
  • it is within our power to reduce social media’s ability to dissolve trust and foment structural stupidity. Reforms should limit the platforms’ amplification of the aggressive fringes while giving more voice to what More in Common calls “the exhausted majority.”
  • the main problem with social media is not that some people post fake or toxic stuff; it’s that fake and outrage-inducing content can now attain a level of reach and influence that was not possible before
  • Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers.
  • One of the first orders of business should be compelling the platforms to share their data and their algorithms with academic researchers.
  • Prepare the Next Generation
  • Childhood has become more tightly circumscribed in recent generations––with less opportunity for free, unstructured play; less unsupervised time outside; more time online. Whatever else the effects of these shifts, they have likely impeded the development of abilities needed for effective self-governance for many young adults
  • Depression makes people less likely to want to engage with new people, ideas, and experiences. Anxiety makes new things seem more threatening. As these conditions have risen and as the lessons on nuanced social behavior learned through free play have been delayed, tolerance for diverse viewpoints and the ability to work out disputes have diminished among many young people
  • Students did not just say that they disagreed with visiting speakers; some said that those lectures would be dangerous, emotionally devastating, a form of violence. Because rates of teen depression and anxiety have continued to rise into the 2020s, we should expect these views to continue in the generations to follow, and indeed to become more severe.
  • The most important change we can make to reduce the damaging effects of social media on children is to delay entry until they have passed through puberty.
  • The age should be raised to at least 16, and companies should be held responsible for enforcing it.
  • Let them out to play. Stop starving children of the experiences they most need to become good citizens: free play in mixed-age groups of children with minimal adult supervision
  • while social media has eroded the art of association throughout society, it may be leaving its deepest and most enduring marks on adolescents. A surge in rates of anxiety, depression, and self-harm among American teens began suddenly in the early 2010s. (The same thing happened to Canadian and British teens, at the same time.) The cause is not known, but the timing points to social media as a substantial contributor—the surge began just as the large majority of American teens became daily users of the major platforms.
  • What would it be like to live in Babel in the days after its destruction? We know. It is a time of confusion and loss. But it is also a time to reflect, listen, and build.
  • In recent years, Americans have started hundreds of groups and organizations dedicated to building trust and friendship across the political divide, including BridgeUSA, Braver Angels (on whose board I serve), and many others listed at BridgeAlliance.us. We cannot expect Congress and the tech companies to save us. We must change ourselves and our communities.
  • when we look away from our dysfunctional federal government, disconnect from social media, and talk with our neighbors directly, things seem more hopeful. Most Americans in the More in Common report are members of the “exhausted majority,” which is tired of the fighting and is willing to listen to the other side and compromise. Most Americans now see that social media is having a negative impact on the country, and are becoming more aware of its damaging effects on children.
Javier E

All the Trump Indictments Everywhere All at Once - 0 views

  • Here’s Furman: There’s what economists think people should think about inflation—and what people actually think about inflation are different. . . . Inflation has big winners and losers. So surprise inflation helps debtors and hurts creditors. And there are probably tens of millions of people in our economy who have benefited from inflation. Maybe it’s a business that was able to raise prices more. Maybe a worker who was able to get a bigger raise. Maybe it’s someone whose mortgage is now worth 10 percent less. But there are not tens of millions of people who think they’ve benefited from inflation. In fact, I’m not sure there are tens of people who think they’ve benefited from inflation. And so it has these winners and losers. The losers are very aware of their losses. The winners are completely oblivious to their gains. So then as a policymaker, do you want to sort of make people happy? Or do you want to sort of do what you think is in their economic and financial interests? And that to me is not obvious.
  • Oh it’s obvious to me. The People are the problem. But they’re a persistent problem and until the AIs replace us, The People aren’t going away. So given this constraint, I’m not sure that an optimal solution is ever going to be politically possible in American democracy. The country is too fractured. Our political institutions too compromised.
  • so if you work from the assumption that we’re going to shoot wide of the mark in one direction or the other, I’d still rather be on the Trump-Biden side of having done too much, and dealing with our attendant problems than the Bush-Obama side of having done too little.
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

Study Finds Misconduct Widespread in Retracted Scientific Papers - NYTimes.com - 0 views

  • Last year the journal Nature reported an alarming increase in the number of retractions of scientific papers — a tenfold rise in the previous decade, to more than 300 a year across the scientific literature.
  • two scientists and a medical communications consultant analyzed 2,047 retracted papers in the biomedical and life sciences. They found that misconduct was the reason for three-quarters of the retractions for which they could determine the cause. “We found that the problem was a lot worse than we thought,”
  • the rising rate of retractions reflects perverse incentives that drive scientists to make sloppy mistakes or even knowingly publish false data.
  • “It convinces me more that we have a problem in science,” he said. While the fraudulent papers may be relatively few, he went on, their rapid increase is a sign of a winner-take-all culture in which getting a paper published in a major journal can be the difference between heading a lab and facing unemployment. “Some fraction of people are starting to cheat,” he said.