
TOK Friends / Group items tagged: exposure

Javier E

What Do We Lose If We Lose Twitter? - The Atlantic - 0 views

  • What do we lose if we lose Twitter?
  • At its best, Twitter can still provide that magic of discovering a niche expert or elevating a necessary, insurgent voice, but there is far more noise than signal. Plenty of those overenthusiastic voices, brilliant thinkers, and influential accounts have burned out on culture-warring, or have been harassed off the site or into lurking.
  • Twitter is, by some standards, a niche platform, far smaller than Facebook or Instagram or TikTok. The internet will evolve or mutate around a need for it. I am aware that all of us who can’t quit the site will simply move on when we have to.
  • ...15 more annotations...
  • Perhaps the best example of what Twitter offers now—and what we stand to gain or lose from its demise—is illustrated by the path charted by public-health officials, epidemiologists, doctors, and nurses over the past three years.
  • They offered guidance that a flailing government response was too slow to provide, and helped cobble together an epidemiological picture of infections and case counts. At a moment when people were terrified and looking for any information at all, Twitter seemed to offer a steady stream of knowledgeable, diligent experts.
  • But Twitter does another thing quite well, and that’s crushing users with the pressures of algorithmic rewards and all of the risks, exposure, and toxicity that come with virality
  • Imagining a world without it can feel impossible. What do our politics look like without the strange feedback loop of a Twitter-addled political press and a class of lawmakers that seems to govern more via shitposting than by legislation?
  • What happens if the media lose what the writer Max Read recently described as a “way of representing reality, and locating yourself within it”? The answer is probably messy.
  • There’s the worry that, absent a distributed central nervous system like Twitter, “the collective worldview of the ‘media’ would instead be over-shaped, from the top down, by the experiences and biases of wealthy publishers, careerist editors, self-loathing journalists, and canny operators operating in relatively closed social and professional circles.”
  • many of the most hyperactive, influential twitterati (cringe) of the mid-2010s have built up large audiences and only broadcast now: They don’t read their mentions, and they rarely engage. In private conversations, some of those people have expressed a desire to see Musk torpedo the site and put a legion of posters out of their misery.
  • Many of the past decade’s most polarizing and influential figures—people such as Donald Trump and Musk himself, who captured attention, accumulated power, and fractured parts of our public consciousness—were also the ones who were thought to be “good” at using the website.
  • the effects of Twitter’s chief innovation—its character limit—on our understanding of language, nuance, and even truth.
  • “These days, it seems like we are having languages imposed on us,” he said. “The fact that you have a social media that tells you how many characters to use, this is language imposition. You have to wonder about the agenda there. Why does anyone want to restrict the full range of my language? What’s the game there?”
  • in McLuhanian fashion, the constraints and the architecture change not only what messages we receive but how we choose to respond. Often that choice is to behave like the platform itself: We are quicker to respond and more aggressive than we might be elsewhere, with a mindset toward engagement and visibility
  • it’s easy to argue that we stand to gain something essential and human if we lose Twitter. But there is plenty about Twitter that is also essential and human.
  • No other tool has connected me to the world—to random bits of news, knowledge, absurdist humor, activism, and expertise, and to scores of real personal interactions—like Twitter has
  • What makes evaluating a life beyond Twitter so hard is that everything that makes the service truly special is also what makes it interminable and toxic.
  • the worst experience you can have on the platform is to “win” and go viral. Generally, it seems that the more successful a person is at using Twitter, the more they refer to it as a hellsite.
Javier E

Sugar Substitutes Surprise | Science | AAAS - 0 views

  • Exposure to saccharin and sucralose significantly impaired glycemic response, but this was not seen with the aspartame or stevia groups
  • None of the blood markers show real changes in any group except for insulin levels going up in the glucose and stevia groups (and since everyone was getting glucose as part of the dosing, that suggests a lowering of the glucose-driven insulin response overall).
  • What all of the sweetener patients showed, though, were significant changes in the gut microbiome, and these correlated with the glycemic response changes in the saccharin and sucralose groups. These changes were apparent at both the species distribution level and at the metabolome level.
  • ...2 more annotations...
  • Collectively, our study suggests that commonly consumed NNS may not be physiologically inert in humans as previously contemplated, with some of their effects mediated indirectly through impacts exerted on distinct configurations of the human microbiome.
  • All four of them significantly altered the gut flora, and in ways whose further effects are difficult to predict.
Javier E

What Is 'Food Noise'? How Ozempic Quiets Obsessive Thinking About Food - The New York T... - 0 views

  • There is no clinical definition for food noise, but the experts and patients interviewed for this article generally agreed it was shorthand for constant rumination about food. Some researchers associate the concept with “hedonic hunger,” an intense preoccupation with eating food for the purpose of pleasure, and noted that it could also be a component of binge eating disorder, which is common but often misunderstood.
  • “It just seems to be that some people are a little more wired this way,” he said. Obsessive rumination about food is most likely a result of genetic factors as well as environmental exposure and learned habits
  • The active ingredient in Ozempic and Wegovy is semaglutide, a compound that affects the areas in the brain that regulate appetite, Dr. Gabbay said; it also prompts the stomach to empty more slowly, making people taking the medication feel fuller faster and for longer. That satiation itself could blunt food noise
  • ...3 more annotations...
  • Why some people can shake off the impulse to eat, and other people stay mired in thoughts about food, is “the million-dollar question,”
  • There’s another theoretical framework for why Ozempic might quash food noise: Semaglutide activates receptors for a hormone called GLP-1. Studies in animals have shown those receptors are found in cells in regions of the brain that are particularly important for motivation and reward, pointing to one potential way semaglutide could influence cravings and desires.
  • Ms. Klemmer said she worried about the potential long-term side effects of a medication she might be on for the rest of her life. But she thinks the trade-off — the end of food noise — is worth it. “It’s worth every bad side effect that I’d have to go through to have what I feel now,” she said: “not caring about food.”
Javier E

Male Stock Analysts With 'Dominant' Faces Get More Information-and Have Better Forecast... - 0 views

  • “People form impressions after extremely brief exposure to faces—within a hundred milliseconds,” says Alexander Todorov, a behavioral-science professor at the University of Chicago Booth School of Business. “They take actions based on those impressions.”
  • Under most circumstances, such quick impressions aren’t accurate and shouldn’t be trusted, he says.
  • Prof. Teoh and her fellow researchers analyzed the facial traits of nearly 800 U.S. sell-side stock financial analysts working between January 1990 and December 2017 who also had a LinkedIn profile photo as of 2018. They pulled their sample of analysts from Thomson Reuters and the firms they covered from the merged Center for Research in Security Prices and Compustat, a database of financial, statistical and market information
  • ...8 more annotations...
  • The researchers used facial-recognition software to map out specific points on a person’s face, then applied machine-learning algorithms to the facial points to obtain empirical measures for three key face impressions—trustworthiness, dominance and attractiveness.  
  • They examined the association of these impressions with the accuracy of analysts’ quarterly forecasts, drawn from the Institutional Brokers Estimate System
  • Analyst accuracy was determined by comparing each analyst’s prediction error—the difference between their prediction and the actual earnings—with that of all analysts for that same company and quarter (a minimal sketch of this comparison appears after this list).
  • For an average stock valued at $100, Prof. Teoh says, analysts ranked as looking most trustworthy were 25 cents more accurate in earnings-per-share forecasts than the analysts who were ranked as looking least trustworthy
  • Similarly, most-dominant-looking analysts were 52 cents more accurate in their EPS forecast than least-dominant-looking analysts.
  • The relation between a dominant face and accuracy, meanwhile, was significant before and after the regulation was enacted, the analysts say. This suggests that dominant-looking male analysts are always able to obtain information,
  • While forecasts of female analysts regardless of facial characteristics were on average more accurate than those of their male counterparts, the forecasts of women who were seen as more-dominant-looking were significantly less accurate than those of their male counterparts.
  • Says Prof. Todorov: “Women who look dominant are more likely to be viewed negatively because it goes against the cultural stereotype.”
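
A minimal Python sketch of the relative-accuracy comparison described in this list. The table, column names, and numbers below are invented for illustration and are not drawn from the study; the sketch simply benchmarks each analyst’s absolute EPS forecast error against the average error of all analysts covering the same company in the same quarter.

```python
# Hypothetical illustration of relative forecast accuracy: an analyst's
# EPS forecast error is compared with the mean error of all analysts for
# the same company-quarter. All data below are made up.
import pandas as pd

forecasts = pd.DataFrame({
    "analyst":      ["A", "B", "C", "A", "B"],
    "company":      ["XYZ", "XYZ", "XYZ", "QRS", "QRS"],
    "quarter":      ["2017Q4"] * 3 + ["2017Q3"] * 2,
    "forecast_eps": [1.10, 1.25, 0.95, 2.00, 2.10],
    "actual_eps":   [1.20, 1.20, 1.20, 2.05, 2.05],
})

# Absolute prediction error per forecast
forecasts["abs_error"] = (forecasts["forecast_eps"] - forecasts["actual_eps"]).abs()

# Benchmark: mean error across all analysts for the same company-quarter
benchmark = forecasts.groupby(["company", "quarter"])["abs_error"].transform("mean")

# Relative error: negative values mean the analyst beat the consensus error
forecasts["relative_error"] = forecasts["abs_error"] - benchmark
print(forecasts[["analyst", "company", "quarter", "relative_error"]])
```
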
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 1 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • ...20 more annotations...
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • Some are now using its existence as a pretext to dismiss accurate information
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • It’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • About half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable (a toy simulation after this list makes the point concrete).
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • “We’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • The most promising are a handful of video games. Their genesis was in the Iraq War.
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • He said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias.”
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
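
The batting-average example is easy to check with a toy simulation. The sketch below assumes, purely for illustration, that every batter has a true .300 hit probability; it counts how many of 500 simulated players sit at .450 or better after 20, 100, and 500 at bats. Extreme averages are common early in a season and essentially disappear as at bats accumulate, which is the law-of-large-numbers point Nisbett is making.

```python
# Toy simulation of the batting-average example: with a true hit probability
# of .300 (an assumption for illustration), averages of .450 or better are
# fairly common after 20 at bats but essentially never survive 500 at bats.
import random

random.seed(0)
TRUE_AVG = 0.300
N_PLAYERS = 500  # number of simulated batters

def season_avg(at_bats: int) -> float:
    """Simulate one batter's average over a given number of at bats."""
    hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
    return hits / at_bats

for at_bats in (20, 100, 500):
    hot = sum(season_avg(at_bats) >= 0.450 for _ in range(N_PLAYERS))
    print(f"{at_bats:>3} at bats: {hot}/{N_PLAYERS} players at .450 or better")
```
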
Javier E

How stress weathers our bodies, causing illness and premature aging - Washington Post - 1 views

  • Stress is a physiological reaction that is part of the body’s innate programming to protect against external threats.
  • When danger appears, an alarm goes off in the brain, activating the body’s sympathetic nervous system — the fight-or-flight system. The hypothalamic-pituitary-adrenal axis is activated. Hormones, such as epinephrine and cortisol, flood the bloodstream from the adrenal glands.
  • The heart beats faster. Breathing quickens. Blood vessels dilate. More oxygen reaches large muscles. Blood pressure and glucose levels rise. The immune system’s inflammatory response activates, promoting quick healing.
  • ...10 more annotations...
  • Life brings an accumulation of unremitting stress, especially for those subjected to inequity — and not just from immediate and chronic threats. Even the anticipation of those menaces causes persistent damage.
  • The body produces too much cortisol and other stress hormones, straining to bring itself back to normal. Eventually, the body’s machinery malfunctions.
  • The constant strain — the chronic sources of stress — resets what is “normal,” and the body begins to change.
  • It is the repeated triggering of this process year after year — the persistence of striving to overcome barriers — that leads to poor health.
  • Blood pressure remains high. Inflammation turns chronic. In the arteries, plaque forms, causing the linings of blood vessels to thicken and stiffen. That forces the heart to work harder. It doesn’t stop there. Other organs begin to fail.
  • First, that people’s varied life experiences affect their health by wearing down their bodies. And second, she said: “People are not just passive victims of these horrible exposures. They withstand them. They struggle against them. These are people who weather storms.”
  • It isn’t just living in an unequal society that makes people sick. It’s the day-in, day-out effort of trying to be equal that wears bodies down.
  • Weathering doesn’t start in middle age.
  • It begins in the womb. Cortisol released into a pregnant person’s bloodstream crosses the placenta, which helps explain why a disproportionate number of babies born to parents who live in impoverished communities or who experience the constant scorn of discrimination are preterm and too small.
  • “The argument weathering is trying to make is these are things we can change, but we have to understand them in their complexity,” Geronimus said. “This has to be a societal project, not the new app on your phone that will remind you to take deep breaths when you’re feeling stress.”
Javier E

I Was Trying to Build My Son's Resilience, Not Scar Him for Life - The New York Times - 0 views

  • Resilience is a popular term in modern psychology that, put simply, refers to the ability to recover and move on from adverse events, failure or change.
  • “We don’t call it ‘character’ anymore,” said Jelena Kecmanovic, director of Arlington/DC Behavior Therapy Institute. “We call it the ability to tolerate distress, the ability to tolerate uncertainty.”
  • Studies suggest that resilience in kids is associated with things like empathy, coping skills and problem-solving, though this research is often done on children in extreme circumstances and may not apply to everybody
  • ...10 more annotations...
  • many experts are starting to see building resilience as an effective way to prevent youth anxiety and depression.
  • One solution, according to experts, is to encourage risk-taking and failure, with a few guardrails
  • For instance, it’s important that children have a loving and supportive foundation before they go out and take risks that build resilience
  • “Challenges” are challenging only if they are hard. Child psychologists often talk about the “zone of proximal development” — the area between what a child can do without any help and what a child can’t do, even with help
  • How do you find the bar? Dr. Ginsburg recommends asking your child: “What do you think you can handle? What do you think you can handle with me by your side?”
  • The best way to build resilience is doing something you are motivated to do, no matter your age
  • Experts say the more activities children have exposure to, the better.
  • Sometimes parents just have to lay down the law and force children to break out of their comfort zone
  • “If you don’t persevere through something that’s a little bit hard, sometimes you never get the benefits,”
  • Don’t expect your kid to appreciate your efforts, Dr. Kecmanovic said: “They will scream ‘I hate you.’”
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence (a toy sketch after this list illustrates the idea).
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • Rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and the new Bing
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • That’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
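
As a rough illustration of the “guess the most likely next word” idea mentioned earlier in this list, here is a toy bigram model in Python. It is only a sketch of the principle: the tiny corpus, word-level tokens, and greedy word choice are all simplifications, and real systems use vastly larger neural models trained on scraped internet text.

```python
# Toy next-word predictor: count which word follows which in a small corpus,
# then generate text one word at a time by always picking the most frequent
# follower. Real chatbots do this with huge neural networks, not counts.
from collections import Counter, defaultdict

corpus = ("the model guesses the next word one word at a time "
          "the model learns patterns from text scraped from the internet").split()

# Build bigram counts: for each word, how often each other word follows it
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily extend `start` one word at a time using the bigram counts."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no known follower; stop generating
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```
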