Home / TOK Friends / Group items tagged: values

Javier E

Opinion | I Came to College Eager to Debate. I Found Self-Censorship Instead. - The New... - 0 views

  • Hushed voices and anxious looks dictate so many conversations on campus here at the University of Virginia, where I’m finishing up my senior year.
  • I was shaken, but also determined to not silence myself. Still, the disdain of my fellow students stuck with me. I was a welcome member of the group — and then I wasn’t.
  • Instead, my college experience has been defined by strict ideological conformity. Students of all political persuasions hold back — in class discussions, in friendly conversations, on social media — from saying what we really think.
  • ...23 more annotations...
  • Even as a liberal who has attended abortion rights demonstrations and written about standing up to racism, I sometimes feel afraid to fully speak my mind.
  • In the classroom, backlash for unpopular opinions is so commonplace that many students have stopped voicing them, sometimes fearing lower grades if they don’t censor themselves.
  • According to a 2021 survey administered by College Pulse of over 37,000 students at 159 colleges, 80 percent of students self-censor at least some of the time.
  • Forty-eight percent of undergraduate students described themselves as “somewhat uncomfortable” or “very uncomfortable” with expressing their views on a controversial topic in the classroom.
  • When a class discussion goes poorly for me, I can tell.
  • The room felt tense. I saw people shift in their seats. Someone got angry, and then everyone seemed to get angry. After the professor tried to move the discussion along, I still felt uneasy. I became a little less likely to speak up again and a little less trusting of my own thoughts.
  • This anxiety affects not just conservatives. I spoke with Abby Sacks, a progressive fourth-year student. She said she experienced a “pile-on” during a class discussion about sexism in media
  • Throughout that semester, I saw similar reactions in response to other students’ ideas. I heard fewer classmates speak up. Eventually, our discussions became monotonous echo chambers. Absent rich debate and rigor, we became mired in socially safe ideas.
  • when criticism transforms into a public shaming, it stifles learning.
  • Professors have noticed a shift in their classrooms
  • I went to college to learn from my professors and peers. I welcomed an environment that champions intellectual diversity and rigorous disagreement
  • “Second, the dominant messages students hear from faculty, administrators and staff are progressive ones. So they feel an implicit pressure to conform to those messages in classroom and campus conversations and debates.”
  • I met Stephen Wiecek at our debate club. He’s an outgoing, formidable first-year debater who often stays after meetings to help clean up. He’s also conservative.
  • He told me that he has often “straight-up lied” about his beliefs to avoid conflict. Sometimes it’s at a party, sometimes it’s at an a cappella rehearsal, and sometimes it’s in the classroom. When politics comes up, “I just kind of go into survival mode,” he said. “I tense up a lot more, because I’ve got to think very carefully about how I word things. It’s very anxiety inducing.”
  • “First, students are afraid of being called out on social media by their peers,”
  • “It was just a succession of people, one after each other, each vehemently disagreeing with me,” she told me.
  • Ms. Sacks felt overwhelmed. “Everyone adding on to each other kind of energized the room, like everyone wanted to be part of the group with the correct opinion,” she said. The experience, she said, “made me not want to go to class again.” While Ms. Sacks did continue to attend the class, she participated less frequently. She told me that she felt as if she had become invisible.
  • Other campuses also struggle with this. “Viewpoint diversity is no longer considered a sacred, core value in higher education,”
  • Dr. Abrams said the environment on today’s campuses differs from his undergraduate experience. He recalled late-night debates with fellow students that sometimes left him feeling “hurt” but led to “the ecstasy of having my mind opened up to new ideas.”
  • He worries that self-censorship threatens this environment and argues that college administrations in particular “enforce and create a culture of obedience and fear that has chilled speech.”
  • Universities must do more than make public statements supporting free expression. We need a campus culture that prioritizes ideological diversity and strong policies that protect expression in the classroom.
  • Universities should refuse to cancel controversial speakers or cave to unreasonable student demands. They should encourage professors to reward intellectual diversity and nonconformism in classroom discussions. And most urgently, they should discard restrictive speech codes and bias response teams that pathologize ideological conflict.
  • We cannot experience the full benefits of a university education without having our ideas challenged, yet challenged in ways that allow us to grow.
Javier E

Apple News Plus Review: Good Value, But Apple Needs to Fine Tune This | Tom's Guide - 0 views

  • For $9.99 a month, News+ gives you access to more than 300 magazines, along with news articles from The Wall Street Journal and The Los Angeles Times.
  • if you want to find a specific magazine within the News+ tab, be prepared to give that scrolling finger a workout. There's no search field in the News+ tab for typing in a magazine title, so you've got to tap on Apple's catalog and scroll until you find what you're looking for
  • You can browse by category from the home screen, which reduces the number of covers you have to sort through a little bit.
  • ...14 more annotations...
  • Below the browsing menu and list of categories, you'll find the My Magazines section, which contains the publications you're currently looking at, plus issues you've downloaded.
  • (The desktop version of News+ handles things better — there's a persistent search bar in the upper left corner of the app.)
  • To find a specific title in News+ (without scrolling, anyhow), head over to the Following tab directly to the right of the News+ tab in the News app. On that screen, there's a search field, and you can type in publication titles to bring up content from both News+ and the free News section.
  • At present, it appears the only way to make a magazine stay in My Magazines is to download it from the cloud, something you do by tapping the cloud icon next to the cover. I couldn't find any way to designate a magazine as one of my favorites from within News+, so if I want to find a new issue or revisit an old one, I'm left with Apple's clunky search feature
  • Whatever magazine I started reading in News+ — whether it was the latest Vanity Fair or the New Republic — would pop up in My Magazines under Reading Now.
  • The most frequently used section of News+ figures to be My Magazines, though to be truly useful, it's going to need a little fine tuning.
  • Speaking of back issues, when you're within a magazine in News+, just tap the magazine's title at the top of the screen. You'll see a list of previous issues for that title, and in some cases, you'll see current headlines and articles from that publication's website
  • Select a current issue of a magazine, and you'll get a title page with a tappable table of contents. In most cases, there's no description for the article, so you'll just have to hope that the headline you're tapping on gives you a good idea of what to expect
  • From within the article, a Next button lets you skip ahead to the next story in an issue, while an Open button returns you to the table of contents.
  • Be aware that some publications, such as New Republic, simply feature PDFs of their current issues instead of formats optimized for digital devices
  • The New Yorker splits the difference, with no table of contents and PDFs of ad pages from the print magazine interspersed between scrollable articles.
  • You have the option of signifying that you love or hate stories, which will help fine-tune News+'s recommendations, and you can add many articles to your Safari reading list
  • The lines between what's free and what's paid also seem a bit blurred, even with the separate News+ tab
  • how frequently is new content going to surface on News+? Will all back issues get the unappealing PDF treatment
Javier E

Kluge (book) - Wikipedia - 0 views

  • Kluge: The Haphazard Construction of the Human Mind is a 2008 non-fiction book by American psychologist Gary Marcus. A "kluge" is a patched-together solution for a problem, clumsily assembled from whatever materials are immediately available.[1] Marcus's book argues that the human brain employs many such kluges, and that evolutionary psychology often favors genes that give "immediate advantages" over genes that provide long-term value.[2]
Javier E

Dispute Within Art Critics Group Over Diversity Reveals a Widening Rift - The New York ... - 0 views

  • Amussen, 33, is the editor of Burnaway, which focuses on criticism in the American South and often features young Black artists. (The magazine started in 2008 in response to layoffs at the Atlanta Journal-Constitution’s culture section and now runs as a nonprofit with four full-time employees and a budget that mostly consists of grants.)
  • Efforts to revive AICA-USA are continuing. In January, Jasmine Amussen joined the organization’s board to help rethink the meaning of criticism for a younger generation.
  • The organization has yearly dues of $115 and provides free access to many museums. But some members complained that the fee was too expensive for young critics, yet not enough to support significant programming.
  • ...12 more annotations...
  • “It just came down to not having enough money,” said Terence Trouillot, a senior editor at Frieze, a contemporary art magazine. He spent nearly three years on the AICA-USA board, resigning in 2022. He said that initiatives to re-energize the group “were just moving too slowly.”
  • According to Lilly Wei, a longtime AICA-USA board member who recently resigned, the group explored different ways of protecting writers in the industry. There were unrealized plans of turning the organization into a union; others hoped to create a permanent emergency fund to keep financially struggling critics afloat. She said the organization has instead canceled initiatives, including an awards program for the best exhibitions across the country.
  • Large galleries — including Gagosian, Hauser & Wirth, and Pace Gallery — now produce their own publications with interviews and articles sometimes written by the same freelance critics who simultaneously moonlight as curators and marketers. Within its membership, AICA-USA has a number of writers who belong to all three categories.
  • “It’s crazy that the ideal job nowadays is producing catalog essays for galleries, which are basically just sales pitches,” Dillon said in a phone interview. “Critical thinking about art is not valued financially.”
  • Noah Dillon, who was on the AICA-USA board until he resigned last year, has been reluctant to recommend that anyone follow his path to become a critic. Not that they could. The graduate program in art writing that he attended at the School of Visual Arts in Manhattan also closed during the pandemic.
  • David Velasco, editor in chief of Artforum, said in an interview that he hoped the magazine’s acquisition would improve the publication’s financial picture. The magazine runs nearly 700 reviews a year, Velasco said; about half of those run online and pay $50 for roughly 250 words. “Nobody I know who knows about art does it for the money,” Velasco said, “but I would love to arrive at a point where people could.”
  • While most editors recognize the importance of criticism in helping readers decipher contemporary art, and the multibillion-dollar industry it has created, venues for such writing are shrinking. Over the years, newspapers including The Philadelphia Inquirer and The Miami Herald have trimmed critics’ jobs.
  • In December, the Penske Media Corporation announced that it had acquired Artforum, a contemporary art journal, and was bringing the title under the same ownership as its two competitors, ARTnews and Art in America. Its sister publication, Bookforum, was not acquired and ceased operations. Through the pandemic, other outlets have shuttered, including popular blogs run by SFMOMA and the Walker Art Center in Minneapolis as well as smaller magazines called Astra and Elephant.
  • The need for change in museums was pointed out in the 2022 Burns Halperin Report, published by Artnet News in December, that analyzed more than a decade of data from over 30 cultural institutions. It found that just 11 percent of acquisitions at U.S. museums were by female artists and only 2.2 percent were by Black American artists
  • (National newspapers with art critics on staff include The New York Times, The Los Angeles Times, The Boston Globe and The Washington Post. )
  • Julia Halperin, one of the study’s organizers, who recently left her position as Artnet’s executive editor, said that the industry has an asymmetric approach to diversity. “The pool of artists is diversifying somewhat, but the pool of staff critics has not,” she said.
  • the matter of diversity in criticism is compounded by the fact that opportunities for all critics have been diminished.
Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 0 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • ...22 more annotations...
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence.
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • , being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI.
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
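The annotations above describe generative models as guessing, one word at a time, the most likely word to come next. A toy sketch of that idea, purely illustrative and not any real system: here the "model" is just bigram counts over a ten-word corpus, extended greedily word by word.

```python
# Toy illustration of "predict the most likely next word, one word at a time".
# The "model" is nothing but bigram counts from a tiny hand-made corpus;
# real language models learn far richer statistics from a scrape of the web.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words):
    """Greedily extend `start` by always taking the most common next word."""
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:  # no known continuation; stop early
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 3))  # → the cat sat on
```

The human-feedback step the article mentions amounts to adjusting such a model's preferences using people's judgments of which candidate continuation is better; this sketch omits that entirely and only shows the bare next-word loop.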
Javier E

'Follow the science': As Year 3 of the pandemic begins, a simple slogan becomes a polit... - 0 views

  • advocates for each side in the masking debate are once again claiming the mantle of science to justify political positions
  • pleas to “follow the science” have consistently yielded to use of the phrase as a rhetorical land mine.
  • “so much is mixed up with science — risk and values and politics. The phrase can come off as sanctimonious,” she said, “and the danger is that it says, ‘These are the facts,’ when it should say, ‘This is the situation as we understand it now and that understanding will keep changing.’
  • ...34 more annotations...
  • The pandemic’s descent from medical emergency to political flash point can be mapped as a series of surges of bickering over that one simple phrase. “Follow the science!” people on both sides insisted, as the guidance from politicians and public health officials shifted over the past two years from anti-mask to pro-mask to “keep on masking” to more refined recommendations about which masks to wear and now to a spotty lifting of mandates.
  • demands that the other side “follow the science” are often a complete rejection of another person’s cultural and political identity: “It’s not just people believing the scientific research that they agree with. It’s that in this extreme polarization we live with, we totally discredit ideas because of who holds them.
  • “I’m struggling as much as anyone else,” she said. “Our job as informed citizens in the pandemic is to be like judges and synthesize information from both sides, but with the extreme polarization, nobody really trusts each other enough to know how to judge their information.
  • Many people end up putting their trust in some subset of the celebrity scientists they see online or on TV. “Follow the science” often means “follow the scientists” — a distinction that offers insight into why there’s so much division over how to cope with the virus,
  • although a slim majority of Americans they surveyed don’t believe that “scientists adjust their findings to get the answers they want,” 31 percent do believe scientists cook the books and another 16 percent were unsure.
  • Those who mistrust scientists were vastly less likely to be worried about getting covid-19 — and more likely to be supporters of former president Donald Trump,
  • A person’s beliefs about scientists’ integrity “is the strongest and most consistent predictor of views about … the threats from covid-19,”
  • When a large minority of Americans believe scientists’ conclusions are determined by their own opinions, that demonstrates a widespread “misunderstanding of scientific methods, uncertainty, and the incremental nature of scientific inquiry,” the sociologists concluded.
  • Americans’ confidence in science has declined in recent decades, especially among Republicans, according to Gallup polls
  • The survey found last year that 64 percent of Americans said they had “a great deal” or “quite a lot” of confidence in science, down from 70 percent who said that back in 1975
  • Confidence in science jumped among Democrats, from 67 percent in the earlier poll to 79 percent last year, while Republicans’ confidence cratered during the same period from 72 percent to 45 percent.
  • The fact that both sides want to be on the side of “science” “bespeaks tremendous confidence or admiration for a thing called ‘science,’ ”
  • Even in this time of rising mistrust, everybody wants to have the experts on their side.
  • That’s been true in American debates regarding science for many years
  • Four decades ago, when arguments about climate change were fairly new, people who rejected the idea looked at studies showing a connection between burning coal and acid rain and dubbed them “junk science.” The “real” science, those critics said, showed otherwise.
  • “Even though the motive was to reject a scientific consensus, there was still a valorization of expertise,”
  • “Even people who took a horse dewormer when they got covid-19 were quick to note that the drug was created by a Nobel laureate,” he said. “Almost no one says they’re anti-science.”
  • “There isn’t a thing called ‘the science.’ There are multiple sciences with active disagreements with each other. Science isn’t static.”
  • The problem is that the phrase has become more a political slogan than a commitment to neutral inquiry, “which bespeaks tremendous ignorance about what science is,”
  • t scientists and laypeople alike are often guilty of presenting science as a monolithic statement of fact, rather than an ever-evolving search for evidence to support theories,
  • while scientists are trained to be comfortable with uncertainty, a pandemic that has killed and sickened millions has made many people eager for definitive solutions.
  • “I just wish when people say ‘follow the science,’ it’s not the end of what they say, but the beginning, followed by ‘and here’s the evidence,’
  • As much as political leaders may pledge to “follow the science,” they answer to constituents who want answers and progress, so the temptation is to overpromise.
  • It’s never easy to follow the science, many scientists warn, because people’s behaviors are shaped as much by fear, folklore and fake science as by well-vetted studies or evidence-based government guidance.
  • “Science cannot always overcome fear,”
  • Some of the states with the lowest covid case rates and highest vaccination rates nonetheless kept many students in remote learning for the longest time, a phenomenon she attributed to “letting fear dominate our narrative.”
  • “That’s been true of the history of science for a long time,” Gandhi said. “As much as we try to be rigorous about fact, science is always subject to the political biases of the time.”
  • A study published in September indicates that people who trust in science are actually more likely to believe fake scientific findings and to want to spread those falsehoods
  • The study, reported in the Journal of Experimental Social Psychology, found that trusting in science did not give people the tools they need to understand that the scientific method leads not to definitive answers, but to ever-evolving theories about how the world works.
  • Rather, people need to understand how the scientific method works, so they can ask good questions about studies.
  • Trust in science alone doesn’t arm people against misinformation,
  • Overloaded with news about studies and predictions about the virus’s future, many people just tune out the information flow,
  • That winding route is what science generally looks like, Swann said, so people who are frustrated and eager for solid answers are often drawn into dangerous “wells of misinformation, and they don’t even realize it,” she said. “If you were told something every day by people you trusted, you might believe it, too.”
  • With no consensus about how and when the pandemic might end, or about which public health measures to impose and how long to keep them in force, following the science seems like an invitation to a very winding, even circular path.
Javier E

Generative AI Brings Cost of Creation Close to Zero, Andreessen Horowitz's Martin Casad... - 0 views

  • The value of ChatGPT-like technology comes from bringing the cost of producing images, text and other creative projects close to zero
  • With only a few prompts, generative AI technology—such as the giant language models underlying the viral ChatGPT chatbot—can enable companies to create sales and marketing materials from scratch quickly for a fraction of the price of using current software tools, and paying designers, photographers and copywriters, among other expenses
  • “That’s very rare in my 20 years of experience in doing just frontier tech, to have four or five orders of magnitude of improvement on something people care about
  • many corporate technology chiefs have taken a wait-and-see approach to the technology, which has developed a reputation for producing false, misleading and unintelligible results—dubbed AI ‘hallucinations’. 
  • Though ChatGPT, which is available free online, is considered a consumer app, OpenAI has encouraged companies and startups to build apps on top of its language models—in part by providing access to the underlying computer code for a fee.
  • There are “certain spaces where it’s clearly directly applicable,” such as summarizing documents or responding to customer queries. Many startups are racing to apply the technology to a wider set of enterprise use cases
  • “I think it’s going to creep into our lives in ways we least expect it,” Mr. Casado said.
Javier E

Twitter is dying | TechCrunch - 0 views

  • if the point is simply pure destruction — building a chaos machine by removing a source of valuable information from our connected world, where groups of all stripes could communicate and organize, and replacing that with a place of parody that rewards insincerity, time-wasting and the worst forms of communication in order to degrade the better half — then he’s done a remarkable job in very short order. Truly it’s an amazing act of demolition. But, well, $44 billion can buy you a lot of wrecking balls.
  • That our system allows wealth to be turned into a weapon to nuke things of broad societal value is one hard lesson we should take away from the wreckage of downed turquoise feathers.
  • We should also consider how the ‘rules based order’ we’ve devised seems unable to stand up to a bully intent on replacing free access to information with paid disinformation — and how our democratic systems seem so incapable and frozen in the face of confident vandals running around spray-painting ‘freedom’ all over the walls as they burn the library down.
  • The simple truth is that building something valuable — whether that’s knowledge, experience or a network worth participating in — is really, really hard. But tearing it all down is piss easy.
  • It almost doesn’t matter if this is deliberate sabotage by Musk or the blundering stupidity of a clueless idiot.
Javier E

Francis Fukuyama: Still the End of History - The Atlantic - 0 views

  • Over the past year, though, it has become evident that there are key weaknesses at the core of these strong states.
  • The weaknesses are of two sorts. First, the concentration of power in the hands of a single leader at the top all but guarantees low-quality decision making, and over time will produce truly catastrophic consequences
  • Second, the absence of public discussion and debate in “strong” states, and of any mechanism of accountability, means that the leader’s support is shallow, and can erode at a moment’s notice.
  • Over the years, we have seen huge setbacks to the progress of liberal and democratic institutions, with the rise of fascism and communism in the 1930s, or the military coups and oil crises of the 1960s and ’70s. And yet, liberal democracy has endured and come back repeatedly, because the alternatives are so bad. People across varied cultures do not like living under dictatorship, and they value their individual freedom. No authoritarian government presents a society that is, in the long term, more attractive than liberal democracy, and could therefore be considered the goal or endpoint of historical progress.
  • The philosopher Hegel coined the phrase the end of history to refer to the liberal state’s rise out of the French Revolution as the goal or direction toward which historical progress was trending. For many decades after that, Marxists would borrow from Hegel and assert that the true end of history would be a communist utopia. When I wrote an article in 1989 and a book in 1992 with this phrase in the title, I noted that the Marxist version was clearly wrong and that there didn’t seem to be a higher alternative to liberal democracy.
  • setbacks do not mean that the underlying narrative is wrong. None of the proffered alternatives look like they’re doing any better.
  • Liberal democracy will not make a comeback unless people are willing to struggle on its behalf. The problem is that many who grow up living in peaceful, prosperous liberal democracies begin to take their form of government for granted. Because they have never experienced an actual tyranny, they imagine that the democratically elected governments under which they live are themselves evil dictatorships conniving to take away their rights
Javier E

Musk Peddles Fake News on Immigration and the Media Exaggerates Biden's Decline - 0 views

  • There’s little indication that Biden’s remarks on this occasion—which were lucid, thoughtful, and, as Yglesias noted, cogent—or that any of the countless hours of footage from this past year alone of Biden being oratorically and rhetorically compelling, have meaningfully factored into the media’s appraisal of Biden’s cognitive state
  • Instead, the media has run headlong toward a narrative constructed by the very people politically incentivized to paint Biden in as unflattering a light as possible. When news organizations uncritically accept, rather than journalistically evaluate, the assumption that Biden is severely cognitively compromised in the first place, they effectively grant the right-wing influencers who spend their days curating Biden gaffe supercuts the opportunity to set the terms of the debate
  • Why does the media take at face value that the viral posts showcasing Biden’s gaffes and slip-ups are truly representative of his current state? 
  • Because right-wing commentators aren’t the only ones who think Biden’s mind is basically gone—lots of voters think so too
  • Of course, a major reason why the public thinks this is because the entirety of the right-wing information superstructure is devoted, on a daily basis, to depicting Biden as severely cognitively compromised
  • By contrast, most of the news sources the right sees as hyperpartisan Biden spin machines actually strain at being fair-minded and objective, which disinclines them toward producing any sort of muscular pushback against the right’s relentless mischaracterizations.
  • Since mainstream media venues by and large epistemically rely on the views of the masses to supply journalists with their coverage frames, news operations end up treating popular concerns about Biden’s age as a kind of sacrosanct window into reality rather than as a hype cycle perpetually fed into the ambient collective consciousness by anti-Biden voices intending to sink his reelection chances.
  • even if we grant every single concern that Klein and others have voiced, it is indisputably true that Joe Biden remains an intellectual giant next to Donald Trump
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • The character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”