TOK Friends: Group items tagged bing

Javier E

TikTok Brain Explained: Why Some Kids Seem Hooked on Social Video Feeds - WSJ

  • Remember the good old days when kids just watched YouTube all day? Now that they binge on 15-second TikToks, those YouTube clips seem like PBS documentaries.
  • Many parents tell me their kids can’t sit through feature-length films anymore because to them the movies feel painfully slow. Others have observed their kids struggling to focus on homework. And reading a book? Forget about it.
  • What is happening to kids’ brains?
  • ...27 more annotations...
  • “It is hard to look at increasing trends in media consumption of all types, media multitasking and rates of ADHD in young people and not conclude that there is a decrease in their attention span.”
  • Emerging research suggests that watching short, fast-paced videos makes it harder for kids to sustain activities that don’t offer instant—and constant—gratification.
  • One of the few studies specifically examining TikTok-related effects on the brain focused on Douyin, the TikTok equivalent in China, made by the same Chinese parent company, ByteDance Ltd. It found that the personalized videos the app’s recommendation engine shows users activate the reward centers of the brain, as compared with the general-interest videos shown to new users.
  • Brain scans of Chinese college students showed that areas involved in addiction were highly activated in those who watched personalized videos.
  • It also found some people have trouble controlling when to stop watching.
  • “If kids’ brains become accustomed to constant changes, the brain finds it difficult to adapt to a nondigital activity where things don’t move quite as fast.”
  • A TikTok spokeswoman said the company wants younger teens to develop positive digital habits early on, and that it recently made some changes aimed at curbing extensive app usage. For example, TikTok won’t allow users ages 13 to 15 to receive push notifications after 9 p.m. TikTok also periodically reminds users to take a break to go outside or grab a snack.
  • Kids have a hard time pulling away from videos on YouTube, too, and Google has made several changes to help limit its use, including turning off autoplay by default on accounts of people under 18.
  • When kids do things that require prolonged focus, such as reading or solving math problems, they’re using directed attention.
  • This function starts in the prefrontal cortex, the part of the brain responsible for decision making and impulse control.
  • “Directed attention is the ability to inhibit distractions and sustain attention and to shift attention appropriately. It requires higher-order skills like planning and prioritizing.”
  • Kids generally have a harder time doing this—and putting down their videogame controllers—because the prefrontal cortex isn’t fully developed until age 25.
  • “We speculate that individuals with lower self-control ability have more difficulty shifting attention away from favorite video stimulation.”
  • “In the short-form snackable world, you’re getting quick hit after quick hit, and as soon as it’s over, you have to make a choice,” said Mass General’s Dr. Marci, who wrote the new book “Rewired: Protecting Your Brain in the Digital Age.” The more developed the prefrontal cortex, the better the choices.
  • Dopamine is a neurotransmitter that gets released in the brain when it’s expecting a reward. A flood of dopamine reinforces cravings for something enjoyable, whether it’s a tasty meal, a drug or a funny TikTok video.
  • “TikTok is a dopamine machine,” said John Hutton, a pediatrician and director of the Reading & Literacy Discovery Center at Cincinnati Children’s Hospital. “If you want kids to pay attention, they need to practice paying attention.”
  • Researchers are just beginning to conduct long-term studies on digital media’s effects on kids’ brains. The National Institutes of Health is funding a study of nearly 12,000 adolescents as they grow into adulthood to examine the impact that many childhood experiences—from social media to smoking—have on cognitive development.
  • She predicts they will find that when brains repeatedly process rapid, rewarding content, their ability to process less-rapid, less-rewarding things “may change or be harmed.”
  • “It’s like we’ve made kids live in a candy store and then we tell them to ignore all that candy and eat a plate of vegetables.”
  • “We have an endless flow of immediate pleasures that’s unprecedented in human history.”
  • Parents and kids can take steps to boost attention, but it takes effort.
  • Swap screen time for real time. Exercise and free play are among the best ways to build attention during childhood.
  • “Depriving kids of tech doesn’t work, but simultaneously reducing it and building up other things, like playing outside, does.”
  • Practice restraint.
  • “When you practice stopping, it strengthens those connections in the brain to allow you to stop again next time.”
  • Use tech’s own tools. TikTok has a screen-time management setting that allows users to cap their app usage.
  • Ensure good sleep. Teens are suffering from a sleep deficit.
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions.
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • You’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions.
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind.
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future.
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups.
  • The roboticist Rodney Brooks has pointed out that we will see the existential risks coming; the dangers will not be sudden, and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower.
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards.
  • Reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • They appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to overlook that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism.
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Opinion | It's Time to Stop Living the American Scam - The New York Times

  • people aren’t trying to sell busyness as a virtue anymore, not even to themselves. A new generation has grown to adulthood that’s never known capitalism as a functioning economic system. My generation, X, was the first postwar cohort to be downwardly mobile, but millennials were the first to know it going in.
  • Our country’s oligarchs forgot to maintain the crucial Horatio Alger fiction that anyone can get ahead with hard work — or maybe they just dropped it, figuring we no longer had any choice.
  • Through the internet, we could peer enviously at our neighbors in civilized countries, who get monthlong vacations, don’t have to devote decades to paying for their college degrees, and aren’t terrified of going broke if they get sick. To young people, America seems less like a country than an inescapable web of scams, and “hard work” less like a virtue than a propaganda slogan, inane as “Just say no.”
  • ...11 more annotations...
  • I think people are enervated not just by the Sisyphean pointlessness of their individual labors but also by the fact that they’re working in and for a society in which, increasingly, they have zero faith or investment. The future their elders are preparing to bequeath to them is one that reflects the fondest hopes of the same ignorant bigots a lot of them fled their hometowns to escape.
  • It turns out that millions of people never actually needed to waste days of their lives sitting in traffic or pantomime “work” under managerial scrutiny eight hours a day.
  • We learned that nurses, cashiers, truckers and delivery people (who’ve always been too busy to brag about it) actually ran the world and the rest of us were mostly useless supernumeraries. The brutal hierarchies of work shifted, for the first time in recent memory, in favor of labor, and the outraged whines of former social Darwinists were a pleasure to savor.
  • Of course, everyone is still busy — worse than busy, exhausted, too wiped at the end of the day to do more than stress-eat, binge-watch and doomscroll — but no one’s calling it anything other than what it is anymore: an endless, frantic hamster wheel for survival.
  • The pandemic was the bomb cyclone of our discontents.
  • American conservatism, which is demographically terminal and knows it, is acting like a moribund billionaire adding sadistic codicils to his will.
  • An increasingly popular retirement plan is figuring civilization will collapse before you have to worry about it.
  • Midcentury science fiction writers assumed that the increased productivity brought on by mechanization would give workers an oppressive amount of leisure time, that our greatest threats would be boredom and ennui. But these authors’ prodigious imaginations were hobbled by their humanity and rationality; they’d forgotten that the world is ordered not by reason or decency but by rapacious avarice.
  • In the past few decades, capitalism has exponentially increased the creation of wealth for the already incredibly wealthy at the negligible expense of the well-being, dignity and happiness of most of humanity, plus the nominal cost of a mass extinction and the destruction of the biosphere — like cutting out the inefficient business of digestion and metabolism by pouring a fine bottle of wine directly into the toilet, thereby eliminating the middleman of you.
  • Everyone knows how productive you can be when you’re avoiding something. We are currently experiencing the civilizational equivalent of that anxiety you feel when you have something due the next day that you haven’t even started thinking about and yet still you sit there, helplessly watching whole seasons of mediocre TV or compulsively clicking through quintillions of memes even as your brain screams at you — the same way we scream at our politicians about guns and abortion and climate change — to do something.
  • Enough with the busywork already. We’ve been “productive” enough — produced way too much, in fact. And there is too much that urgently needs to be done: a republic to salvage, a civilization to reimagine and its infrastructure to reinvent, innumerable species to save, a world to restore and millions who are impoverished, imprisoned, illiterate, sick or starving. All while we waste our time at work.
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence. (A toy version of this word-by-word guessing appears in the first sketch after this list.)
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • Rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous.
  • Celeste Kidd, a professor of psychology at the University of California, Berkeley, studies how people acquire knowledge.
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now.”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and the new Bing.
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts.
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence.
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • In other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • That’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all. (A toy version of this preference feedback appears in the second sketch after this list.)
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models.
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage.
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
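
As a toy illustration of the generation process described in the annotations above, in which the model guesses the most likely next word one word at a time, here is a minimal sketch in Python. The corpus and names are invented for illustration; the real systems use neural networks trained on a scrape of much of the internet, not raw word counts.

from collections import Counter, defaultdict

# Invented toy corpus standing in for internet-scale training text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow each word, and how often.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def generate(start, max_words=8):
    """Greedily emit the most frequent successor, one word at a time."""
    words = [start]
    for _ in range(max_words):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # most likely next word
    return " ".join(words)

print(generate("the"))  # a short, repetitive chain such as "the cat sat on the ..."

The repetitiveness is the point: a counts-based toy can only parrot patterns it has already seen, which is why the annotations above stress scale and human feedback so heavily.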
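
The annotation on “reinforcement learning through human feedback” above describes raters choosing the better of two candidate responses. Below is a deliberately tiny stand-in for that feedback loop, with invented prompts and judgments; the actual pipeline trains a reward model on such comparisons and then optimizes the chatbot against it, rather than counting wins.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Comparison:
    prompt: str
    preferred: str  # the response the human rater liked better
    rejected: str   # the response the rater liked less

# Hypothetical rater judgments.
feedback = [
    Comparison("capital of France?", "Paris.", "Maybe Lyon?"),
    Comparison("capital of France?", "Paris.", "I cannot answer that."),
    Comparison("what is 2 + 2?", "4", "5"),
]

wins = defaultdict(int)
losses = defaultdict(int)
for c in feedback:
    wins[c.preferred] += 1
    losses[c.rejected] += 1

def score(response):
    """Crude reward proxy: the fraction of comparisons this response won."""
    total = wins[response] + losses[response]
    return wins[response] / total if total else 0.5  # unseen responses score neutral

candidates = ["Paris.", "Maybe Lyon?", "I cannot answer that."]
print(max(candidates, key=score))  # "Paris.", the consistently preferred response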
Javier E

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post

  • GPT-4, in contrast, is a state-of-the-art system capable of not just creating words but also describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • An AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things.
  • ...22 more annotations...
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • It could lead to business models and creative ventures no one can predict.
  • It has sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • The company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests.
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and the AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what they’re saying or when they’re wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique introduced in 2017, known as the transformer, that rapidly advanced how AI systems can analyze patterns in human speech and imagery. (A toy version of the transformer’s core attention step appears in the sketch after this list.)
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time.
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, the humans themselves.
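
Because the transformer is named in the annotation above, here is a minimal sketch of its core operation, scaled dot-product attention, using toy sizes and random numbers. It illustrates the general technique only; it is not GPT-4's actual architecture or code.

import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8  # a 4-token sequence with 8-dimensional embeddings

X = rng.normal(size=(seq_len, d_model))    # stand-in token embeddings
W_q = rng.normal(size=(d_model, d_model))  # learned in real models; random here
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v  # queries, keys, values

# Each token scores its relevance to every other token, then takes a
# weighted average of the other tokens' values.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
output = weights @ V

print(weights.round(2))  # each row sums to 1: one token's attention over the sequence
print(output.shape)      # (4, 8): a context-aware vector for each token

Stacked many layers deep, with weights learned from data rather than drawn at random, this is the mechanism that lets such systems pick up the statistical patterns the article describes.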