
Home / TOK Friends / Group items tagged misinformation


Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year.
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • Facebook tweaked its algorithm to boost authoritative sources in the News Feed and turned off recommendations to join groups based around political or social issues. It is reversing some of these steps now, but it cannot make people forget that this toolbox exists.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them.
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Twitter and TikTok Lead in Amplifying Misinformation, Report Finds - The New York Times - 0 views

  • It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much.
  • The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.
  • Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily.
  • It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.
  • “We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”
  • The institute calculated its findings by comparing posts that members of the International Fact-Checking Network have identified as false with the engagement of previous, unflagged posts from the same accounts.
  • Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.
  • Facebook’s amplification factor of video content alone is closer to TikTok’s, the institute found. That’s because the platform’s Reels and Facebook Watch, which are video features, “both rely heavily on algorithmic content recommendations” based on engagements, according to the institute’s calculations.
  • Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.
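The institute's method, as described above, amounts to a simple ratio: engagement on posts that fact checkers flagged as false, divided by the typical engagement of earlier, unflagged posts from the same accounts. A minimal sketch of that arithmetic (a hypothetical illustration only — not the institute's actual code, and the numbers are invented):

```python
def amplification_factor(flagged_engagements, baseline_engagements):
    """Ratio of average engagement on fact-checked-false posts to the
    average engagement of earlier, unflagged posts from the same accounts.
    A value above 1.0 means misinformation outperformed the baseline."""
    avg_flagged = sum(flagged_engagements) / len(flagged_engagements)
    avg_baseline = sum(baseline_engagements) / len(baseline_engagements)
    return avg_flagged / avg_baseline

# Illustrative numbers only, not taken from the report.
factor = amplification_factor([1200, 900, 1500], [300, 450, 250])
print(round(factor, 2))
```

On this reading, a platform with more one-tap sharing mechanisms (retweets, algorithmic recommendations) would tend to push the numerator up, which is consistent with the report's Twitter and TikTok findings.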
Javier E

How to avoid covid-19 hoax stories? - The Washington Post - 1 views

  • How good are people at sifting out fake news?
  • we’ve been investigating whether ordinary individuals who encounter news when it first appears online — before fact-checkers like Snopes and PolitiFact have an opportunity to issue reports about an article’s veracity — are able to identify whether articles contain true or false information.
  • Unfortunately, it seems quite difficult for people to identify false or misleading news, and the limited number of coronavirus news stories in our collection is no exception.
  • Over a 13-week period, our study allowed us to capture people’s assessments of fresh news articles in real time. Each day of the study, we relied on a fixed, pre-registered process to select five popular articles published within the previous 24 hours
  • The five articles were balanced between conservative, liberal and non-partisan sources, as well as between mainstream news websites and websites known to produce fake news. In total, we sent 150 articles, each to 90 survey respondents.
  • We also sent these articles separately to six independent fact checkers, and treated their most common response — true, false/misleading, or cannot determine — for each article as the “correct’’ answer for that article.
  • When shown an article that was rated “true” by the professional fact checkers, respondents correctly identified the article as true 62 percent of the time. When the source of the true news story was a mainstream news source, respondents correctly identified the article as true 73 percent of the time.
  • However, for each article the professional fact checkers rated “false/misleading,” the study participants were as likely to say it was true as they were to say it was false or misleading. And roughly one-third of the time they told us they were unable to determine the veracity of the article. In other words, people on the whole were unable to correctly classify false or misleading news.
  • four of the articles in our study that fact checkers rated as false or misleading were related to the coronavirus.
  • All four articles promoted the unfounded rumor that the virus was intentionally developed in a laboratory. Although accidental releases of pathogens from labs have previously caused significant morbidity and mortality, in the current pandemic multiple pieces of evidence suggest this virus is of natural origin. There’s little evidence that the virus was manufactured or altered.
  • Only 30 percent of participants correctly classified them as false or misleading.
  • respondents seemed to have more trouble deciding what to think about false covid-19 stories, leading to a higher proportion of “could not determine” responses than we saw for the stories on other topics our professional fact checkers rated as “false/misleading.” This finding suggests that it may be particularly difficult to identify misinformation in newly emerging topics.
  • Study participants with higher levels of education did better at identifying both fake news overall and coronavirus-related fake news — but were far from being able to correctly weed out misinformation all of the time.
  • In fact, no group, regardless of education level, was able to correctly identify the stories that the professional fact checkers had labeled as false or misleading more than 40 percent of the time.
  • Taken together, our findings suggest that there is widespread potential for vulnerability to misinformation when it first appears online. This is especially worrying during the current pandemic
  • In the current environment, misinformation has the potential to undermine social distancing efforts, to lead people to hoard supplies, or to promote the adoption of potentially dangerous fake cures.
  • our findings suggest that non-trivial numbers of people will believe false information to be true when they first encounter it. And it suggests that efforts to remove coronavirus-related misinformation will need to be swift — and implemented early in an article’s life-cycle — to stop the spread of something else that’s dangerous: misinformation.
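The study's scoring procedure, as the excerpts describe it, treats the most common response among the six independent fact checkers as the "correct" answer for an article, then measures what share of respondents matched it. A minimal sketch under those stated rules (hypothetical code and data, not the authors' instrument):

```python
from collections import Counter

def majority_label(fact_checker_ratings):
    """The most common rating among the fact checkers becomes the
    'correct' answer: true, false/misleading, or cannot determine."""
    return Counter(fact_checker_ratings).most_common(1)[0][0]

def accuracy(respondent_answers, correct):
    """Share of respondents whose rating matched the fact checkers'."""
    matches = sum(1 for answer in respondent_answers if answer == correct)
    return matches / len(respondent_answers)

# Invented example: 4 of 6 checkers rate an article false/misleading.
ratings = ["false/misleading"] * 4 + ["cannot determine"] * 2
label = majority_label(ratings)
print(label)
print(accuracy(["true", "false/misleading", "cannot determine"], label))
```

This also shows why the three-way response scale matters: "cannot determine" answers count against accuracy just as wrong answers do, which is how roughly a third of respondents could fail to classify false stories without actively believing them.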
peterconnelly

How Some States Are Combating Election Misinformation Ahead of Midterms - The New York ... - 0 views

  • Ahead of the 2020 elections, Connecticut confronted a bevy of falsehoods about voting that swirled around online. One, widely viewed on Facebook, wrongly said absentee ballots had been sent to dead people. On Twitter, users spread a false post that a tractor-trailer carrying ballots had crashed on Interstate 95, sending thousands of voter slips into the air and across the highway.
  • the state plans to spend nearly $2 million on marketing to share factual information about voting, and to create its first-ever position for an expert in combating misinformation.
  • With a salary of $150,000, the person is expected to comb fringe sites like 4chan, far-right social networks like Gettr and Rumble, and mainstream social media sites to root out early misinformation narratives about voting before they go viral, and then urge the companies to remove or flag the posts that contain false information.
  • These states, most of them under Democratic control, have been acting as voter confidence in election integrity has plummeted.
  • In an ABC/Ipsos poll from January, only 20 percent of respondents said they were “very confident” in the integrity of the election system and 39 percent said they felt “somewhat confident.”
  • Some conservatives and civil rights groups are almost certain to complain that the efforts to limit misinformation could restrict free speech.
  • “State and local governments are well situated to reduce harms from dis- and misinformation by providing timely, accurate and trustworthy information,” said Rachel Goodman
  • “Facts still exist, and lies are being used to chip away at our fundamental freedoms,” Ms. Griswold said.
  • Officials said they would prefer candidates fluent in both English and Spanish, to address the spread of misinformation in both languages. The officer would track down viral misinformation posts on Facebook, Instagram, Twitter and YouTube, and look for emerging narratives and memes, especially on fringe social media platforms and the dark web.
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 1 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media.
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true.
  • are now using its existence as a pretext to dismiss accurate information
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
peterconnelly

Covid Vaccine Misinformation Still Fuels Fears Surrounding Pregnancy, a New Study Finds... - 0 views

  • A steady bombardment of coronavirus misinformation during the pandemic has left nearly one-third of American women who are pregnant, or who plan to become pregnant, believing at least one falsehood about coronavirus vaccinations and pregnancy, according to a new study. A higher share were unsure whether to believe the myths.
  • “Pregnancy is a time where a lot of women are seeking information on a variety of pregnancy-related topics, but many pregnancy forums are filled with misinformation,” said Tara Kirk Sell
  • The misinformation is so pervasive that it has even sown doubts in segments of the population that generally believe in the coronavirus vaccines’ safety for adults, like Democratic voters and people who have been fully vaccinated.
  • “There are certain things that increase perception of risks,” Dr. Sell said. “One of these is risks to future generations. So rumors related to pregnancy are particularly gripping.”
  • “We know pregnant individuals are at an increased risk when it comes to Covid-19, but they absolutely should not and do not have to die from it,” said Dr. Christopher Zahn
  • 60 percent believed that pregnant women should not get the vaccine, or were unsure if this was true.
  • One reason misinformation about the vaccines and pregnancy may have gained so much traction, experts say, is that the earliest clinical trials of the coronavirus vaccines excluded pregnant women. The lack of trial data led the C.D.C. and World Health Organization to initially give different recommendations to pregnant women, though neither explicitly forbade, nor encouraged, immunizing pregnant women. Other health organizations chose to wait for more safety data from later trials before making an official recommendation for pregnant women to get vaccinated.
  • “At the root of this problem is trust, or really, it’s a lack of trust,” Dr. Sell said.
Javier E

Facebook and Twitter Dodge a 2016 Repeat, and Ignite a 2020 Firestorm - The New York Times - 1 views

  • It’s true that banning links to a story published by a 200-year-old American newspaper — albeit one that is now a Rupert Murdoch-owned tabloid — is a more dramatic step than cutting off WikiLeaks or some lesser-known misinformation purveyor. Still, it’s clear that what Facebook and Twitter were actually trying to prevent was not free expression, but a bad actor using their services as a conduit for a damaging cyberattack or misinformation.
  • These decisions get made quickly, in the heat of the moment, and it’s possible that more contemplation and debate would produce more satisfying choices. But time is a luxury these platforms don’t always have. In the past, they have been slow to label or remove dangerous misinformation about Covid-19, mail-in voting and more, and have only taken action after the bad posts have gone viral, defeating the purpose.
  • That left the companies with three options, none of them great. Option A: They could treat the Post’s article as part of a hack-and-leak operation, and risk a backlash if it turned out to be more innocent. Option B: They could limit the article’s reach, allowing it to stay up but choosing not to amplify it until more facts emerged. Or, Option C: They could do nothing, and risk getting played again by a foreign actor seeking to disrupt an American election.
  • On Wednesday, several prominent Republicans, including Mr. Trump, repeated their calls for Congress to repeal Section 230 of the Communications Decency Act, a law that shields tech platforms from many lawsuits over user-generated content.
  • That leaves the companies in a precarious spot. They are criticized when they allow misinformation to spread. They are also criticized when they try to prevent it.
  • Perhaps the strangest idea to emerge in the past couple of days, though, is that these services are only now beginning to exert control over what we see. Representative Doug Collins, Republican of Georgia, made this point in a letter to Mark Zuckerberg, the chief executive of Facebook, in which he derided the social network for using “its monopoly to control what news Americans have access to.”
  • The truth, of course, is that tech platforms have been controlling our information diets for years, whether we realized it or not. Their decisions were often buried in obscure “community standards” updates, or hidden in tweaks to the black-box algorithms that govern which posts users see.
  • Their leaders have always been editors masquerading as engineers.
  • What’s happening now is simply that, as these companies move to rid their platforms of bad behavior, their influence is being made more visible.
  • Rather than letting their algorithms run amok (which is an editorial choice in itself), they’re making high-stakes decisions about flammable political misinformation in full public view, with human decision makers who can be debated and held accountable for their choices.
  • After years of inaction, Facebook and Twitter are finally starting to clean up their messes. And in the process, they’re enraging the powerful people who have thrived under the old system.
peterconnelly

Opinion: Texas' new social media law affects all of us - CNN - 0 views

  • Earlier this month, a federal appeals court ruled that a Texas law, which allows residents to sue social media companies for "censoring" what they post, could go into effect.
  • The biggest challenge facing social media companies today is doing exactly what HB 20 seems to disallow: removing misinformation and hate speech.
  • They can remove toxic content like misinformation and hate speech and get tied up in bottomless, costly lawsuits. They can let their platforms turn into cesspools of hate and misinformation and watch people stop using them altogether. Or they can just stop offering their services in Texas, which also exposes them to potential liability since the law makes it illegal for social media platforms to discriminate against Texans based on their location.
  • The state law, referred to as HB 20, makes it illegal for large social networks like Facebook and Twitter to "block, ban, remove, de-platform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression."
  • HB 20 does carve out exemptions, including those that allow social networks to remove content that "directly incites criminal activity or consists of specific threats of violence targeted against a person or group" based on certain characteristics, or that "is unlawful expression."
  • We need to fix social networks by removing toxic content. This month's appeals court ruling does the exact opposite and could even deal a fatal blow to social media as we know it. The only thing worse than not fixing the social platforms we have now would be to see them be subject to a constant slew of lawsuits or devolve into platforms that become bastions of hate speech and misinformation. Let's hope Congress doesn't let us down.
peterconnelly

Twitter launches a crisis misinformation policy - CNN - 0 views

  • Washington (CNN Business) Twitter will now apply warning labels to — and cease recommending — claims that outside experts have identified as misinformation during fast-moving times of crisis, the social media company said Thursday.
  • The platform's new crisis misinformation policy is designed to slow the spread of viral falsehoods during natural disasters, armed conflict and public health emergencies, the company announced.
  • "To determine whether claims are misleading, we require verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more," Twitter's head of safety and integrity, Yoel Roth, wrote in a blog post.
  • It also comes amid an ongoing, global battle over the future of platform moderation, with officials in Europe seeking to heighten standards surrounding tech companies' content decision-making and lawmakers in many US states seeking to force platforms to moderate less.
sissij

How Inoculation Can Help Prevent Pseudoscience | Big Think - 2 views

  • It is easier to fool a person than it is to convince a person that they’ve been fooled. This is one of the great curses of humanity.
  • Given the incredible amount of information we process each day, it is difficult for any of us to critically analyze all of it.
  • The state of Minnesota is battling a measles outbreak caused by anti-vaccination propaganda, and discussion over the effects of misinformation on recent elections in Austria, Germany, and the United States is still ongoing.
  • A recent set of experiments shows us that there is a way to help reduce the effects of misinformation on people: the authors amusingly call it the “inoculation.”
  • which even then were heavily influenced by their pre-existing worldviews.
  • teaching about misconceptions leads to greater learning overall than just telling somebody the truth.
  • 
    Fake news and alternative facts mess up our perception a lot. As we learned in TOK, there are many fallacies in human reasoning, and people tend to stick with their pre-existing worldviews or ideas. I found it very interesting that an "inoculation" can reduce the effect of misinformation on people. I think our TOK class is like an "inoculation" in that it asks us questions and challenges us with the idea that things may not be as definite or absolute as they seem. TOK class can definitely help us become immune to fake news. --Sissi (5/25/2017)
Javier E

Building a Nation of Know-Nothings - NYTimes.com - 1 views

  • It’s not just that 47 percent of Republicans believe the lie that Obama is a Muslim, or that 27 percent in the party doubt that the president of the United States is a citizen. But fully half of them believe falsely that the big bailout of banks and insurance companies under TARP was enacted by Obama, and not by President Bush.
  • Take a look at Tuesday night’s box score in the baseball game between New York and Toronto. The Yankees won, 11-5. Now look at the weather summary, showing a high of 71 for New York. The score and temperature are not subject to debate. Yet a president’s birthday or whether he was even in the White House on the day TARP was passed are apparently open questions. A growing segment of the party poised to take control of Congress has bought into denial of the basic truths of Barack Obama’s life. What’s more, this astonishing level of willful ignorance has come about largely by design, and has been aided by a press afraid to call out the primary architects of the lies.
  • It would be nice to dismiss the stupid things that Americans believe as harmless, the price of having such a large, messy democracy.
  • So what if one-in-five believe the sun revolves around the earth, or aren’t sure from which country the United States gained its independence? But false belief in weapons of mass-destruction led the United States to a trillion-dollar war. And trust in rising home value as a truism as reliable as a sunrise was a major contributor to the catastrophic collapse of the economy. At its worst extreme, a culture of misinformation can produce something like Iran, which is run by a Holocaust denier.
  • 
    A major part of the US population now denies basic facts, influenced by a deliberate partisan misinformation campaign tolerated by the press.
Javier E

With Dr. Stella Immanuel's viral video, this was the week America lost the war on misin... - 0 views

  • With nearly 150,000 dead from covid-19, we’ve not only lost the public-health war, we’ve lost the war for truth. Misinformation and lies have captured the castle.
  • And the bad guys’ most powerful weapon? Social media — in particular, Facebook
  • new research, out just this morning from Pew, tells us in painstaking numerical form exactly what’s going on, and it’s not pretty: Americans who rely on social media as their pathway to news are more ignorant and more misinformed than those who come to news through print, a news app on their phones or network TV.
  • And that group is growing.
  • “Even as Americans who primarily turn to social media for political news are less aware and knowledgeable about a wide range of events and issues in the news, they are more likely than other Americans to have heard about a number of false or unproven claims.”
  • Specifically, they’ve been far more exposed to the conspiracy theory that powerful people intentionally planned the pandemic. Yet this group, says Pew, is also less concerned about the impact of made-up news like this than the rest of the U.S. population.
  • They’re absorbing fake news, but they don’t see it as a problem. In a society that depends on an informed citizenry to make reasonably intelligent decisions about self-governance, this is the worst kind of trouble.
  • In a sweeping piece on disinformation and the 2020 campaign in February — in the pre-pandemic era — the Atlantic’s McKay Coppins concluded with a telling quote from the political theorist Hannah Arendt that bears repetition now. Through an onslaught of lies, which may be debunked before the cycle is repeated, totalitarian leaders are able to instill in their followers “a mixture of gullibility and cynicism,” she warned.
  • Over time, people are conditioned to “believe everything and nothing, think that everything was possible and that nothing was true.” And then such leaders can do pretty much whatever they wish
carolinewren

Spann proves media bias includes weather: 'They never let facts get in the way of a goo... - 0 views

  • Meteorologist James Spann’s no-nonsense, yet enthusiastic approach to making sure Alabamians know the latest weather information in our severe-weather prone state has made him quite the pop culture favorite, especially on social media
  • Spann is also not afraid to call people out when they spread misinformation.
  • The suspendered-Spann, who boasts nearly 200,000 followers on Twitter, did exactly that in a recent article titled “The Age of Disinformation” for national website Medium.com
  • “Since my debut on television in 1979, I have been an eyewitness to the many changes in technology, society, and how we communicate. I am one who embraces change, and celebrates the higher quality of life we enjoy now thanks to this progress.
  • I realize the instant communication platforms we enjoy now do have some negatives that are troubling. Just a few examples in recent days…”
  • “This is a lenticular cloud. They have always been around, and quite frankly aren’t that unusual (although it is an anomaly to see one away from a mountain range). The one thing that is different today is that almost everyone has a camera phone, and almost everyone shares pictures of weather events. You didn’t see these often in earlier decades because technology didn’t allow it. Lenticular clouds are nothing new. But, yes, they are cool to see.”
  • This age of misinformation can lead to dangerous consequences, and promote an agenda, he warns.
  • “The Houston flooding is a great example. We are being told this is ‘unprecedented’… Houston is ‘under water’… and it is due to ‘manmade global warming.’ Yes, the flooding in Houston yesterday was severe, and a serious threat to life and property. A genuine weather disaster that has brought on suffering.
  • this was not ‘unprecedented.’ Flooding from Tropical Storm Allison in 2001 was more widespread, and flood waters were deeper. There is no comparison.”
  • “Those on the right, and those on the left hang out in ‘echo chambers,’ listening to those with similar world views refusing to believe anything else could be true
  • “Everyone knows the climate is changing; it always has, and always will. I do not know of a single ‘climate denier.’ I am still waiting to meet one.
  • “The debate involves the anthropogenic impact, and this is not why I am writing this piece. Let’s just say the Houston flood this week is weather, and not climate, and leave it at that.”
  • Spann lays much of the blame on the mainstream media and social media “hype and misinformation.”
  • “They will be sure to let you know that weather events they are reporting on are ‘unprecedented,’ there are ‘millions and millions in the path,’ it is caused by a ‘monster storm,’ and ‘the worst is yet to come’ since these events are becoming more ‘frequent.’
  • “You will never hear about the low tornado count in recent years, the lack of major hurricane landfalls on U.S. coasts over the past 10 years, or the low number of wildfires this year. It doesn’t fit their story.
  • never let facts get in the way of a good story… there will ALWAYS be a heat wave, flood, wildfire, tornado, typhoon, cold wave, and snow storm somewhere. And, trust me, they will find them, and it will probably lead their newscasts
martinelligi

It's not just a social media problem - how search engines spread misinformation - St Ge... - 0 views

  • Ad-driven search engines, like social media platforms, are designed to reward clicking on enticing links because it helps the search companies boost their business metrics. As researchers who study search and recommendation systems, my colleagues and I show that this dangerous combination of corporate profit motive and individual susceptibility makes the problem difficult to fix.
  • It is in the search engine companies’ best interest to give you things that you want to read, watch or simply click. Therefore, as a search engine or any recommendation system creates a list of items to present, it calculates the likelihood that you’ll click on the items.
  • Similar to problematic social media algorithms, search engines learn to serve you what you and others have clicked on before. Because people are drawn to the sensational, this dance between algorithms and human nature can foster the spread of misinformation
  • Search engine companies, like most online services, make money not only by selling ads but also by tracking users and selling their data through real-time bidding on it. People are often led to misinformation by their desire for sensational and entertaining news as well as information that is either controversial or confirms their views.
  • This pattern of thrilling and unverified stories emerging and people clicking on them continues, with people apparently either being unconcerned with the truth or believing that if a trusted service such as Google Search is showing these stories to them then the stories must be true. More recently, a disproven report claiming China let the coronavirus leak from a lab gained traction on search engines because of this vicious cycle.
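The ranking mechanism the researchers describe can be sketched in a few lines. The scores and titles below are hypothetical, not any real engine's model or output: items are ordered purely by predicted click probability, so the sensational item surfaces first regardless of reliability.

```python
# Minimal sketch of engagement-driven ranking (hypothetical scores,
# not any real search engine's algorithm or data).

def rank_by_click_probability(items):
    """Order items so the most clickable come first."""
    return sorted(items, key=lambda item: item["p_click"], reverse=True)

results = [
    {"title": "Official health agency report", "p_click": 0.08},
    {"title": "SHOCKING lab-leak claim!",      "p_click": 0.31},
    {"title": "Peer-reviewed study summary",   "p_click": 0.05},
]

for r in rank_by_click_probability(results):
    print(r["title"])
```

Because the sort key is engagement alone, the sensational headline leads the list even though it is the least reliable source in the example.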
ilanaprincilus06

Why the Brain is Resistant to Truth | Time.com - 1 views

  • the agricultural age gave us easier access to nutrition, and the industrial age dramatically increased our quality of life, no other era has provided so much stimulation for our brains as the information age.
  • every day we produce approximately 2.5 billion gigabytes of data and perform 4 billion Google searches. In the short time it took you to read the last sentence, approximately 530,243 new ones were executed.
  • As information about the world became readily accessible, people were still inclined to argue about the facts
  • in the face of a publicly available birth certificate of the 44th President of the United States, there are diverse opinions regarding his birthplace.
  • But what actually determines whether someone will be persuaded by our argument or whether we will be ignored?
  • In one study, for example, my colleague Micha Edelson and I, together with others, recorded people’s brain activity while we exposed them to misinformation. A week later we invited everyone back to our lab and told them that the information we gave them before was randomly generated. About half the time our volunteers were able to correct the false beliefs we induced in them, but about half the time they continued believing misinformation.
    • ilanaprincilus06
       
      This manipulation has a great effect on our long-term memory, which we will carry with us until we can be persuaded otherwise.
  • Science has shown that waiting just a couple of minutes before making judgments reduces the likelihood that they will be based solely on instinct.
  • good news was more likely to impact people’s beliefs than bad news. But that under stress, negative information, such as learning about unexpectedly high rates of disease and violent acts, is more likely to alter people’s beliefs.
    • ilanaprincilus06
       
      When we are in a good mood, we don't want anything to ruin it. However, when we are in a bad mood, we surround ourselves with negative energy because we feel that our bad moods could not possibly get worse.
  • Twitter is perfectly designed to engage our emotions because its features naturally call on our affective system; messages are fast, short and transferred within a social context.
  • information on Twitter (and other social platforms that use short and fast messages) is particularly likely to be evaluated based on emotional responses with little input from higher cognitive functions.
  • This structure is called the amygdala and it is important for producing emotional arousal. We found that if the amygdala was activated when people were first exposed to misinformation, it was less likely we would be able to correct their judgments later.
  • people’s feelings, hopes and fears that play a central role in whether a piece of evidence will influence their beliefs.
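The search-volume figures quoted above can be sanity-checked with back-of-envelope arithmetic (assuming, unrealistically, that searches are spread evenly across the day):

```python
# Back-of-envelope check of the quoted figures: 4 billion Google
# searches per day, ~530,243 during one sentence of reading.
# Assumes uniform traffic across the day, which real traffic is not.
searches_per_day = 4_000_000_000
searches_per_second = searches_per_day / 86_400  # seconds in a day

# How many seconds of reading would produce the quoted count?
implied_reading_seconds = 530_243 / searches_per_second
print(round(searches_per_second), round(implied_reading_seconds, 1))
```

The quoted count works out to roughly 11-12 seconds of reading time, so the article's figures are internally consistent.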
adonahue011

Opinion | Take a Social Media Break Until You've Voted - The New York Times - 0 views

    • adonahue011
       
      This is an interesting idea to me because people are so set in their views; it is a bit unclear how staying on social media leading up to the election would affect their vote.
    • adonahue011
       
      I think believing what you see on social media could rest on a logical fallacy: following the masses on a specific idea, or an authority figure.
  • Americans who rely the most on social media to get their news are also far less likely to have accurate or complete knowledge of political events
  • 60 percent of people who primarily get their news from social media had minimal knowledge of current political events, according to the study, compared with 23 percent who primarily get their news directly from news websites or apps
    • adonahue011
       
      Very interesting statistic. I think social media allows for too much individual opinion when it comes to news. Our brains are very deceptive so I find it easy to believe things I personally read on social media.
  • 18- to 29-year-olds, 48 percent get most of their political news from social media sites
  • are breeding a generation of the misinformed — a situation that has only grown more dire at a time when the president spreads falsehoods about public health and the election.
    • adonahue011
       
      I don't agree with this at all. I think the younger generation is seeing the older generation use social media as a news outlet many times. This is the logical fallacy I previously mentioned.
  • while false information flows unimpeded through Facebook groups, user posts and advertisements.
  • The company says it will limit political advertising in the week before Election Day — but with huge numbers of mail-in ballots already being sent in around the country, that will amount to too little, too late.
    • adonahue011
       
      The writer of this article is very biased on this topic; at least Facebook is trying to help.
  • “I don’t think there is any question at this point voters will be more informed by seeking out news brands they trust rather than spending their time on social media where it’s less than clear,”
  • Twitter sometimes forces users to first click through a warning that a tweet violates its rules on election integrity,
  • The problem with such posts is that they are widely spread, echoed and believed — and that happens far more quickly than moderators can react with a warning label.
    • adonahue011
       
      This is an important point which is why when we look at media we need to try and analyze it, and not allow ourselves to believe everything we read.
  • Mr. Trump that falsely claimed the seasonal flu is responsible for more deaths than coronavirus
  • There are many positives to social media, of course — particularly as millions of Americans struggle to stay connected during the coronavirus pandemic.
  • available more reliably elsewhere, from your local board of elections website and from good government groups
  • People believe them.
  • Social media is a cesspool
  • however, aren’t taking the threat of spreading misinformation seriously enough ahead of the election.
  • Stay off social media at least until you’ve voted.
huffem4

How to Use Critical Thinking to Separate Fact From Fiction Online | by Simon Spichak | ... - 2 views

  • Critical thinking helps us frame everyday problems, teaches us to ask the correct questions, and points us towards intelligent solutions.
  • Critical thinking is a continuing practice that involves an open mind and methods for synthesizing and evaluating the quality of knowledge and evidence, as well as an understanding of human errors.
  • Step 1. What We Believe Depends on How We Feel
  • One of the first things I ask myself when I read a headline or find a claim about a product is if the phrase is emotionally neutral. Some headlines generate outrage or fear, indicating that there is a clear bias. When we read something that exploits our emotions, we must be careful.
  • misinformation tends to play on our emotions a lot better than factual reporting or news.
  • When I’m trying to figure out whether a claim is factual, there are a few questions I always ask myself: Does the headline, article, or information evoke fear, anger, or other strong negative emotions? Where did you hear about the information? Does it cite any direct evidence? What is the expert consensus on this information?
  • Step 2. Evidence Synthesis and Evaluation
  • Sometimes I’m still feeling uncertain if there’s any truth to a claim. Even after taking into account the emotions it evokes, I need to find the evidence of a claim and evaluate its quality
  • Often, the information that I want to check is either political or scientific. There are different questions I ask myself, depending on the nature of these claims.
  • Political claims
  • Looking at multiple different outlets, each with its own unique biases, helps us get a picture of the issue.
  • I use multiple websites specializing in fact-checking. They provide primary sources of evidence for different types of claims. Here is a list of websites where I do my fact-checking:
  • Snopes, Politifact, FactCheck, and Media Bias/Fact Check (a bias assessor for fact-checking websites). Simply type in some keywords from the claim to find out if it’s verified with primary sources, misleading, false, or unproven.
  • Science claims
  • Often we tout science as the process by which we uncover absolute truths about the universe. Once many scientists agree on something, it gets disseminated in the news. Confusion arises once this science changes or evolves, as is what happened throughout the coronavirus pandemic. In addition to fear and misinformation, we have to address a fundamental misunderstanding of the way science works when practicing critical thinking.
  • It is confusing to hear about certain drugs found to cure the coronavirus one moment, followed by many other scientists and researchers saying that they don’t. How do we collect and assess these scientific claims when there are discrepancies?
  • A big part of these scientific findings is difficult to access for the public
  • Sometimes the distinction between scientific coverage and scientific articles isn’t clear. When this difference is clear, we might still find findings in different academic journals that disagree with each other. Sometimes, research that isn’t peer-reviewed receives plenty of coverage in the media
  • Correlation and causation: Sometimes a claim might present two factors that appear correlated. Consider recent misinformation about 5G Towers and the spread of coronavirus. While there might appear to be associations, it doesn’t necessarily mean that there is a causative relationship
  • To practice critical thinking with these kinds of claims, we must ask the following questions:Does this claim emerge from a peer-reviewed scientific article? Has this paper been retracted?Does this article appear in a reputable journal?What is the expert consensus on this article?
  • The next examples I want to bring up refer to retracted articles from peer-reviewed journals. Since science is a self-correcting process, rather than a decree of absolutes, mistakes and fraud are corrected.
  • Briefly, I will show you exactly how to tell if the resource you are reading is an actual, peer-reviewed scientific article.
  • How does science go from experiments to the news?
  • researchers outline exactly how they conducted their experiments so other researchers can replicate them, build upon them, or provide quality assurance for them. This scientific report does not go straight to the nearest science journalist. Websites and news outlets like Scientific American or The Atlantic do not publish scientific articles.
  • Here is a quick checklist that will help you figure out if you’re viewing a scientific paper.
  • Once it’s written up, researchers send this manuscript to a journal. Other experts in the field then provide comments, feedback, and critiques. These peer reviewers ask researchers for clarification or even more experiments to strengthen their results. Peer review often takes months or sometimes years.
  • Some peer-reviewed scientific journals are Science and Nature; other scientific articles are searchable through the PubMed database. If you’re curious about a topic, search for scientific papers.
  • Peer-review is crucial! If you’re assessing the quality of evidence for claims, peer-reviewed research is a strong indicator
  • Finally, there are platforms for scientists to review research even after publication in a peer-reviewed journal. Although most scientists conduct experiments and interpret their data objectively, they may still make errors. Many scientists use Twitter and PubPeer to perform a post-publication review
  • Step 3. Are You Practicing Objectivity?
  • To finish off, I want to discuss common cognitive errors that we tend to make. Finally, there are some framing questions to ask at the end of our research to help us with assessing any information that we find.
  • Dunning-Kruger effect: Why do we rely on experts? In 1999, David Dunning and Justin Kruger published “Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments.” They found that the less a person understands about a topic, the more confident of their abilities or knowledge they will be
  • How does this relate to critical thinking? If you’re reading a claim sourced or written by somebody who lacks expertise in a field, they are underestimating its complexity. Whenever possible, look for an authoritative source when synthesizing and evaluating evidence for a claim.
  • Survivorship bias: Ever heard someone argue that we don’t need vaccines or seatbelts? After all, they grew up without either of them and are still alive and healthy! These arguments are appealing at first, but they don’t account for any cases of failures. They are attributing a misplaced sense of optimism and safety by ignoring the deaths resulting from a lack of vaccinations and seatbelts
  • When you’re still unsure, follow the consensus of the experts within the field. Scientists pointed out flaws within this pre-print article leading to its retraction. The pre-print was removed from the server because it did not hold up to proper scientific standards or scrutiny.
  • Now, with all the evidence we’ve gathered, we ask ourselves some final questions (there are plenty more you will come up with yourself, case by case): Who is making the original claim? Who supports these claims? What are their qualifications? What is the evidence used for these claims? Where is this evidence published? How was the evidence gathered? Why is it important?
  • “even if some data is supporting a claim, does it make sense?” Some claims are deceptively true but fall apart when accounting for this bias.
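The correlation-versus-causation point about 5G towers and the coronavirus can be illustrated numerically. The data below are made up for illustration: two quantities that both grow over time correlate strongly even though neither causes the other.

```python
import numpy as np

# Two independently growing quantities (synthetic monthly values):
# infrastructure expanding linearly, an epidemic growing for unrelated
# reasons. They share only a trend, not a causal link.
months = np.arange(12)
towers_built = 10 + 3 * months   # steady linear growth
case_counts = 5 * months ** 2    # accelerating growth

r = np.corrcoef(towers_built, case_counts)[0, 1]
print(f"Pearson r = {r:.2f}")  # high, despite no causation
```

A correlation coefficient near 1 here reflects nothing but the shared upward trend, which is exactly why "appearing associated" is not evidence of a causative relationship.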
Javier E

New research explores how conservative media misinformation may have intensified corona... - 0 views

  • In recent weeks, three studies have focused on conservative media’s role in fostering confusion about the seriousness of the coronavirus. Taken together, they paint a picture of a media ecosystem that amplifies misinformation, entertains conspiracy theories and discourages audiences from taking concrete steps to protect themselves and others.
  • The end result, according to one of the studies, is that infection and mortality rates are higher in places where one pundit who initially downplayed the severity of the pandemic — Fox News’ Sean Hannity — reaches the largest audiences.
  • “We are receiving an incredible number of studies and solid data showing that consuming far-right media and social media content was strongly associated with low concern about the virus at the onset of the pandemic,”
  • Administering a nationally representative phone survey with 1,008 respondents, they found that people who got most of their information from mainstream print and broadcast outlets tended to have an accurate assessment of the severity of the pandemic and their risks of infection.
  • But those who relied on conservative sources, such as Fox News and Rush Limbaugh, were more likely to believe in conspiracy theories or unfounded rumors, such as the belief that taking vitamin C could prevent infection, that the Chinese government had created the virus, and that the U.S. Centers for Disease Control and Prevention was exaggerating the pandemic’s threat “to damage the Trump presidency.”
  • “The effect that we measure could be driven by the long-term message of Fox News, which is that the mainstream media often report ‘fake news’ and have a political agenda,” Simonov said. “This could result in lowering trust in institutions and experts, including health experts in the case of the pandemic.”
  • Our results indicate that a one standard deviation increase in relative viewership of Hannity relative to Tucker Carlson Tonight is associated with approximately 32 percent more COVID-19 cases on March 14 and approximately 23 percent more COVID-19 deaths on March 28,
  • “If the results hold, the research demonstrates the influence that broadcast media can have on behavior,”
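A sketch of what the quoted regression result means, using synthetic data rather than the study's: in a log-linear model of cases on standardized viewership, a coefficient of ln(1.32) corresponds to roughly 32 percent more cases per one standard deviation of viewership. All names and numbers below are illustrative assumptions, not the researchers' actual specification.

```python
import numpy as np

# Illustration (synthetic data): log(cases) = a + b * z(viewership),
# where b = ln(1.32) encodes "+32% cases per 1 SD of viewership".
rng = np.random.default_rng(0)
n = 500
z_viewership = rng.normal(size=n)          # already standardized
b_true = np.log(1.32)
log_cases = 3.0 + b_true * z_viewership + rng.normal(scale=0.05, size=n)

# Ordinary least squares via the normal equations
X = np.column_stack([np.ones(n), z_viewership])
coef, *_ = np.linalg.lstsq(X, log_cases, rcond=None)
pct_increase_per_sd = (np.exp(coef[1]) - 1) * 100
print(f"~{pct_increase_per_sd:.0f}% more cases per 1 SD of viewership")
```

Recovering roughly 32 percent from the fitted coefficient shows how a standardized slope in a log-linear regression translates into the percentage language the study uses; it says nothing about whether the real-world association is causal.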
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • The Reformers
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.