Media in Middle East & North Africa: Group items tagged "algorithms"

Ed Webb

The Making of a YouTube Radical - The New York Times

  • Mr. Cain, 26, recently swore off the alt-right nearly five years after discovering it, and has become a vocal critic of the movement. He is scarred by his experience of being radicalized by what he calls a “decentralized cult” of far-right YouTube personalities, who convinced him that Western civilization was under threat from Muslim immigrants and cultural Marxists, that innate I.Q. differences explained racial disparities, and that feminism was a dangerous ideology.
  • Over years of reporting on internet culture, I’ve heard countless versions of Mr. Cain’s story: an aimless young man — usually white, frequently interested in video games — visits YouTube looking for direction or distraction and is seduced by a community of far-right creators. Some young men discover far-right videos by accident, while others seek them out. Some travel all the way to neo-Nazism, while others stop at milder forms of bigotry.
  • YouTube and its recommendation algorithm, the software that determines which videos appear on users’ home pages and inside the “Up Next” sidebar next to a video that is playing. The algorithm is responsible for more than 70 percent of all time spent on the site
  • YouTube has inadvertently created a dangerous on-ramp to extremism by combining two things: a business model that rewards provocative videos with exposure and advertising dollars, and an algorithm that guides users down personalized paths meant to keep them glued to their screens
  • “If I’m YouTube and I want you to watch more, I’m always going to steer you toward Crazytown.”
  • 94 percent of Americans ages 18 to 24 use YouTube, a higher percentage than for any other online service
  • YouTube has been a godsend for hyper-partisans on all sides. It has allowed them to bypass traditional gatekeepers and broadcast their views to mainstream audiences, and has helped once-obscure commentators build lucrative media businesses
  • Bellingcat, an investigative news site, analyzed messages from far-right chat rooms and found that YouTube was cited as the most frequent cause of members’ “red-pilling” — an internet slang term for converting to far-right beliefs
  • The internet was an escape. Mr. Cain grew up in postindustrial Appalachia and was raised by his conservative Christian grandparents. He was smart, but shy and socially awkward, and he carved out an identity during high school as a countercultural punk. He went to community college, but dropped out after three semesters. Broke and depressed, he resolved to get his act together. He began looking for help in the same place he looked for everything: YouTube.
  • they rallied around issues like free speech and antifeminism, portraying themselves as truth-telling rebels doing battle against humorless “social justice warriors.” Their videos felt like episodes in a long-running soap opera, with a constant stream of new heroes and villains. To Mr. Cain, all of this felt like forbidden knowledge — as if, just by watching some YouTube videos, he had been let into an exclusive club. “When I found this stuff, I felt like I was chasing uncomfortable truths,” he told me. “I felt like it was giving me power and respect and authority.”
  • YouTube’s executives announced that the recommendation algorithm would give more weight to watch time, rather than views. That way, creators would be encouraged to make videos that users would finish, users would be more satisfied and YouTube would be able to show them more ads.
  • A month after its algorithm tweak, YouTube changed its rules to allow all video creators to run ads alongside their videos and earn a portion of the revenue they generated.
  • Many right-wing creators already made long video essays, or posted video versions of their podcasts. Their inflammatory messages were more engaging than milder fare. And now that they could earn money from their videos, they had a financial incentive to churn out as much material as possible.
  • Several current and former YouTube employees, who would speak only on the condition of anonymity because they had signed confidentiality agreements, said company leaders were obsessed with increasing engagement during those years. The executives, the people said, rarely considered whether the company’s algorithms were fueling the spread of extreme and hateful political content.
  • Google Brain’s researchers wondered if they could keep YouTube users engaged for longer by steering them into different parts of YouTube, rather than feeding their existing interests. And they began testing a new algorithm that incorporated a different type of A.I., called reinforcement learning. The new A.I., known as Reinforce, was a kind of long-term addiction machine. It was designed to maximize users’ engagement over time by predicting which recommendations would expand their tastes and get them to watch not just one more video but many more.
  • YouTube’s recommendations system is not set in stone. The company makes many small changes every year, and has already introduced a version of its algorithm that is switched on after major news events to promote videos from “authoritative sources” over conspiracy theories and partisan content. This past week, the company announced that it would expand that approach, so that a person who had watched a series of conspiracy theory videos would be nudged toward videos from more authoritative news sources. It also said that a January change to its algorithm to reduce the spread of so-called “borderline” videos had resulted in significantly less traffic to those videos.
  • the bulk of his media diet came from far-right channels. And after the election, he began exploring a part of YouTube with a darker, more radical group of creators. These people didn’t couch their racist and anti-Semitic views in sarcastic memes, and they didn’t speak in dog whistles. One channel run by Jared Taylor, the editor of the white nationalist magazine American Renaissance, posted videos with titles like “‘Refugee’ Invasion Is European Suicide.” Others posted clips of interviews with white supremacists like Richard Spencer and David Duke.
  • As Mr. Molyneux promoted white nationalists, his YouTube channel kept growing. He now has more than 900,000 subscribers, and his videos have been watched nearly 300 million times. Last year, he and Ms. Southern — Mr. Cain’s “fashy bae” — went on a joint speaking tour in Australia and New Zealand, where they criticized Islam and discussed what they saw as the dangers of nonwhite immigration. In March, after a white nationalist gunman killed 50 Muslims in a pair of mosques in Christchurch, New Zealand, Mr. Molyneux and Ms. Southern distanced themselves from the violence, calling the killer a left-wing “eco-terrorist” and saying that linking the shooting to far-right speech was “utter insanity.” Neither Mr. Molyneux nor Ms. Southern replied to a request for comment. The day after my request, Mr. Molyneux uploaded a video titled “An Open Letter to Corporate Reporters,” in which he denied promoting hatred or violence and said labeling him an extremist was “just a way of slandering ideas without having to engage with the content of those ideas.”
  • Unlike most progressives Mr. Cain had seen take on the right, Mr. Bonnell and Ms. Wynn were funny and engaging. They spoke the native language of YouTube, and they didn’t get outraged by far-right ideas. Instead, they rolled their eyes at them, and made them seem shallow and unsophisticated.
  • “I noticed that right-wing people were taking these old-fashioned, knee-jerk, reactionary politics and packing them as edgy punk rock,” Ms. Wynn told me. “One of my goals was to take the excitement out of it.”
  • Ms. Wynn and Mr. Bonnell are part of a new group of YouTubers who are trying to build a counterweight to YouTube’s far-right flank. This group calls itself BreadTube, a reference to the left-wing anarchist Peter Kropotkin’s 1892 book, “The Conquest of Bread.” It also includes people like Oliver Thorn, a British philosopher who hosts the channel PhilosophyTube, where he posts videos about topics like transphobia, racism and Marxist economics.
  • The core of BreadTube’s strategy is a kind of algorithmic hijacking. By talking about many of the same topics that far-right creators do — and, in some cases, by responding directly to their videos — left-wing YouTubers are able to get their videos recommended to the same audience.
  • What is most surprising about Mr. Cain’s new life, on the surface, is how similar it feels to his old one. He still watches dozens of YouTube videos every day and hangs on the words of his favorite creators. It is still difficult, at times, to tell where the YouTube algorithm stops and his personality begins.
  • It’s possible that vulnerable young men like Mr. Cain will drift away from radical groups as they grow up and find stability elsewhere. It’s also possible that this kind of whiplash polarization is here to stay as political factions gain and lose traction online.
  • I’ve learned now that you can’t go to YouTube and think that you’re getting some kind of education, because you’re not.
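One annotation above describes the 2012 change that re-weighted YouTube's recommendations from views to watch time. The mechanism can be sketched in a few lines; this is an illustrative toy, not YouTube's actual code, and every number, field name, and title below is hypothetical:

```python
# Toy sketch of re-weighting a ranking signal from raw views to total
# watch time, and how that changes which video gets recommended.

def rank_by_views(videos):
    # Pre-2012 behavior in the annotation: raw view count wins.
    return max(videos, key=lambda v: v["views"])

def rank_by_watch_time(videos):
    # Post-2012 behavior: total minutes actually watched wins.
    return max(videos, key=lambda v: v["views"] * v["avg_minutes_watched"])

videos = [
    {"title": "clickbait", "views": 1_000_000, "avg_minutes_watched": 0.5},
    {"title": "long essay", "views": 200_000, "avg_minutes_watched": 12.0},
]

print(rank_by_views(videos)["title"])       # clickbait
print(rank_by_watch_time(videos)["title"])  # long essay
```

Under the old signal the clickbait video wins (1,000,000 views vs. 200,000); under watch time the long essay wins (2,400,000 minutes vs. 500,000), which matches the article's point that the change rewarded creators of long video essays.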
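The Reinforce system described in the annotations is proprietary, but its core idea (sometimes steering users toward new topics rather than only feeding existing interests, to maximize engagement over time) resembles an explore/exploit tradeoff. A minimal epsilon-greedy bandit sketch under that assumption; the class, topics, and numbers are invented for illustration and are not Google Brain's algorithm:

```python
import random

class TopicBandit:
    """Toy recommender: exploit the best-known topic, sometimes explore."""

    def __init__(self, topics, epsilon=0.2):
        self.values = {t: 0.0 for t in topics}  # estimated watch minutes per topic
        self.counts = {t: 0 for t in topics}
        self.epsilon = epsilon                  # exploration rate

    def recommend(self):
        # With probability epsilon, try a topic outside current tastes
        # (the "expand their tastes" behavior the article describes).
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, topic, watch_minutes):
        # Incremental average: learn which topics keep users watching longest.
        self.counts[topic] += 1
        n = self.counts[topic]
        self.values[topic] += (watch_minutes - self.values[topic]) / n

bandit = TopicBandit(["gaming", "politics", "self-help"])
bandit.update("politics", 45.0)
print(bandit.recommend())  # usually "politics", occasionally an exploratory pick
```

The design choice the article highlights falls out of the update rule: whichever topic keeps a user watching longest accumulates the highest value and dominates future recommendations.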
Ed Webb

How Twitter is gagging Arabic users and acting as morality police | openDemocracy

  • Today, Twitter has a different story, and it is not one of speaking truth to power. Twitter is no longer empowering its users. Its platform cannot be considered neutral. Twitter’s actions suggest it is systematically suppressing voices in the Middle East and North Africa (MENA) region.
  • What started as an investigation into the mass suspension of Egyptian dissidents’ accounts uncovered a mass censorship algorithm that targeted Arabic-language users, flagging their text as hateful conduct. This story is still unfolding. As you read this, the mass, unjustified, systemic locking and suspension of Arabic Twitter accounts continues. Users are angry and bewildered.
  • draconian yet lazy algorithms have systematically shut down voices of dissent – and pulled unsuspecting social media users down with them
  • The effects of these suspensions were not just to hide a set of tweets critical of the government, but to completely disable the influence network of Egypt’s dissidents. This is potentially the first documented politically motivated mass shutdown of Twitter accounts at a time when online interaction was high and translated to possible action on the ground
  • accusations are not limited to Egypt but extend across the region, where users sense that being critical of their governments is met with punitive measures by Twitter
  • many of those suspensions had a common denominator: being critical of the Egyptian government
  • suspensions seemed to have happened around late September and lasted from one day to a few days. In many cases Twitter had responded that they had suspended the accounts by mistake. The accounts affected varied from having a few followers to hundreds of thousands
  • a trending anti-Sisi hashtag disappeared suddenly in July 2018, and then later on in 2019. It didn’t help either to find that an officer in the British Army information warfare unit was head of editorial in Twitter for the MENA region.
  • I interviewed @OfficialAmro1, a user affected by mass suspensions with over 265K followers and 115K tweets. He was suspended without cause and added, “I don’t even curse.” To which I foolishly replied, “Cursing would not suspend your account, particularly if directed against a public figure. Incitement will.” “No, now it does,” he replied. He also added that if you criticize a figure loyal to the Arab regimes, you can get your account locked or suspended.
  • The 'hateful conduct' policy as defined by Twitter states: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. Analyzing the message contents that were flagged for hateful conduct, I saw that most did not violate Twitter’s rules. Soon I began to discover that what @OfficialAmro1 had told me was true. The content I was seeing contained profanity. But that wasn’t the whole story. Arabic curse words are used often. I sampled a little under 50 claims, with over 30 screenshots that contain Twitter’s email identifying the violating tweet. It was clear that profanity alone was not causing the suspensions.
  • More tragically funny still are those who were joking around with their friends in their usual profanity-laced language. @ism3lawy_ ended up cursing Egypt’s Zamalek football club, and for that his account was suspended permanently, along with that of one of his friends. In a separate conversation, his friend @EHAB_M0 was also joking around with his friends and eventually got a permanent suspension.
  • Within seconds of my post, the algorithm identified the curse words and locked my account for 12 hours. It was my first violation ever. The irony of documenting this as a reply to the platform’s account is probably lost on them.
  • the most dangerous and disconcerting of my findings is that the appeal system in Twitter for MENA is broken.
  • Even things like talking about prostitution can get you banned
  • There is an element of guardianship that is present in despotic Arab regimes, and that moral guardianship is reflected in the algorithm by Twitter as was shown through the numerous examples above.
  • With my limited access to Twitter’s data, I have found nearly 20 accounts probably wrongfully and permanently suspended. I imagine hundreds or even thousands more have been kicked off the platform.
  • “Thank you for trying to help make Twitter a free space once again.”
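The pattern the annotations above describe (profanity alone triggering “hateful conduct” flags, regardless of whether anyone is attacked) is exactly what a naive keyword filter produces. A hypothetical sketch of the gap between such a filter and the written policy; the word lists, markers, and messages are placeholders, not Twitter’s actual rules or data:

```python
# Stand-in word lists; real systems would use Arabic lexicons and classifiers.
PROFANITY = {"damn", "hell"}
PROTECTED_ATTACK_MARKERS = {"<slur>", "<threat>"}  # placeholder attack signals

def naive_flag(text):
    # What the reporting describes in practice: any curse word triggers a flag,
    # even in friendly banter.
    return any(word in text.lower().split() for word in PROFANITY)

def policy_flag(text):
    # Closer to the written policy: flag attacks or threats against people,
    # not mere profanity.
    return any(marker in text.lower() for marker in PROTECTED_ATTACK_MARKERS)

banter = "damn that football club"       # joking about a sports team
print(naive_flag(banter))   # True  -> suspended under the naive filter
print(policy_flag(banter))  # False -> allowed under the written policy
```

The gap between the two functions is the story: a filter keyed on words rather than targets cannot distinguish cursing a football club from attacking a protected group.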
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame...

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

'The End': Anti-normalisation, Islamofuturism and the erasure of Palestine - Middle Eas...

  • The End (El-Nehaya), the Egyptian dystopian science fiction thriller series, has captured the imagination of audiences throughout the Arab world this Ramadan TV season. It is ranked the third most popular series this season, and has generated a lot of discussion in social media about its futuristic technology and debt to Hollywood science fiction and dystopian films. The End was also lumped into the debate over normalisation in this year’s Ramadan TV programming and was attacked by the Israeli Foreign Ministry for its anti-normalisation stance. The End is premised on the fictional idea that the Arab world would become a superpower and that Israel would be destroyed less than a century into its establishment — that is, in less than thirty years. In its place, Al-Quds conglomerate will be created and will be under total Arab control.
  • Some contrasted the daring futuristic scenario with the utter impotence of the Arab world today, to offer any viable solution to the Palestinian struggle for freedom and the ongoing Nakba. Others thought it was enough that the series managed to provoke and infuriate Israel.
  • The series not only substitutes one form of domination in Al-Quds conglomerate for another. More importantly, the Palestinians are completely erased from Al-Quds conglomerate itself.
  • Ironically, the liquidation of Israel in The End did not bring an end to the oppression in Palestine or the Arab world in general. Around 2090, Al-Quds conglomerate became the main site for a robocide, the genocide in which humans eliminated all robots after one of them terminated its owner. Consequently, laws were passed to ban the production of robots and the development of AI. The series merely substitutes one form of domination and apartheid for another. After the elimination of the majority of the robots, the all-powerful Energy Co. was established in Al-Quds conglomerate. The corporation employs algorithmic governance, using surveillance technology, facial recognition software and military drones to track and control citizens. Its security forces regularly attack and brutalise citizens. One form of oppression is gone, but Palestine and the Arab world do not live in liberty yet.
  • The most bewildering aspect about this triumphalist history of the liberation of Al-Quds conglomerate in the dystopian world of the series, is the absence of any trace of the Palestinians or Palestinian culture. The obverse side of the obliteration of Israel seems to be the erasure of the Palestinians.
  • The people who live in Al-Quds conglomerate speak Egyptian colloquial Arabic, and no one seems to be taking pride in their Arabic cultural heritage or Palestinian identity.
  • The other noticeable feature about the representation of life in Al-Quds conglomerate is its patriarchal gender politics. Women and men follow a rigid division of labour, even professional women who have careers. Radwa, the protagonist’s wife, works as the principal (agricultural) engineer at Green Co., the company responsible for providing food supplies to Al-Quds conglomerate, but she has to perform the domestic chores in the house.
  • the dystopian world of the series is deeply steeped in Islamic culture and traditions. If Afrofuturism, for example, is “rooted in and unapologetically celebrate[s] the uniqueness and innovation of black culture,” this series is clearly grounded in Islamofuturism.
  • The series illuminates and raises questions about these significant matters that have affected humanity in the last few decades. These issues include not only the polarisation of wealth and the cupola created in the global apartheid, but also neoliberal algorithmic governance, the naturalisation of AI (as both human surrogates and sex bots), the rise of megalopolis cities as corporations, renewable energy and ecological sustainability.
  • it is not clear where the series positions itself on the question of the state and the military.
  • the series itself is produced by Synergy, a mega-entertainment production house that has monopolised the Egyptian media sector and has ties to Egyptian intelligence.
Ed Webb

Meta sued for $2bn over Facebook posts 'rousing hate' in Ethiopia | Social Media News |...

  • A lawsuit accusing Meta Platforms of enabling violent and hateful posts from Ethiopia to flourish on Facebook, inflaming the country’s bloody civil war, has been filed. The lawsuit, filed in Kenya’s High Court on Tuesday, was brought by two Ethiopian researchers and the Kenyan rights group the Katiba Institute.
  • Among the plaintiffs is Abrham Meareg, who said his father, Tigrayan academic Meareg Amare Abrha, was killed after Facebook posts referring to him using ethnic slurs were published in October 2021.
  • The lawsuit said the company failed to exercise reasonable care in training its algorithms to identify dangerous posts and in hiring staff to police content for the languages covered by its regional moderation hub in Nairobi.
  • Meta’s independent Oversight Board last year recommended a review of how Facebook and Instagram have been used to spread content that heightens the risk of violence in Ethiopia.
  • echoes of accusations the company has faced for years of atrocities being stoked on its platforms, including in Myanmar, Sri Lanka, Indonesia and Cambodia.
Ed Webb

Muzzled by the Bots - www.slate.com - Readability

  • It's through such a combination of humans and bots that memes emerge
    • Ed Webb
       
      Android meme production
  • with just some clever manipulation, bots might get you to follow the right humans—and it's the humans, not bots, who would then influence your thinking
  • The digitization of our public life is also giving rise to many new intermediaries that are mostly of an invisible—and possibly suspect—variety
  • It's the proliferation—not elimination—of intermediaries that has made blogging so widespread. The right term here is “hyperintermediation,” not “disintermediation.”
  • a single Californian company making decisions over what counts as hate speech and profanity for some of the world's most popular sites without anyone ever examining whether its own algorithms might be biased or excessively conservative
  • this marriage of big data and automated content moderation might also have a darker side, particularly in undemocratic regimes, for whom a war on spam and hate speech—waged with the help of domestic spam-fighting champions—is just a pretense to suppress dissenting opinions. In their hands, solutions like Impermium's might make censorship more fine-grained and customized, eliminating the gaps that plague “dumb” systems that censor in bulk
  • Just imagine what kind of new censorship possibilities open up once moderation decisions can incorporate geolocational information (what some researchers already call “spatial big data”): Why not block comments, videos, or photos uploaded by anyone located in, say, Tahrir Square or some other politically explosive location?
  • For governments and corporations alike, the next frontier is to learn how to identify, pre-empt, and disrupt emerging memes before they coalesce behind a catchy hashtag—this is where “big data” analytics would be most helpful. Thus, one of the Russian security agencies has recently awarded a tender to create bots that can both spot the formation of memes and disrupt and counter them in real time through “mass distribution of messages in social networks with a view to the formation of public opinion.” Moscow is learning from Washington here: Last year the Pentagon awarded a $2.7 million contract to the San Diego-based firm Ntrepid to build software to create multiple fake online identities and “counter violent extremist and enemy propaganda outside the US.” “Big data”-powered analytics would make spotting such “enemy propaganda” much easier.
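The Tahrir Square scenario raised in the annotations above is mechanically trivial to build, which is part of what makes it alarming. A hypothetical sketch; the bounding box is approximate and the moderation rule is invented purely for illustration:

```python
# Approximate bounding box around Tahrir Square, Cairo (illustrative only):
# (lat_min, lat_max, lon_min, lon_max)
TAHRIR_SQUARE_BOX = (30.042, 30.046, 31.233, 31.237)

def inside(box, lat, lon):
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def moderate(post):
    # "Spatial big data" turned into a censorship rule: the upload location,
    # not the content, decides whether the post is published.
    if inside(TAHRIR_SQUARE_BOX, post["lat"], post["lon"]):
        return "blocked"
    return "published"

print(moderate({"lat": 30.044, "lon": 31.235}))  # blocked: inside the geofence
print(moderate({"lat": 30.100, "lon": 31.300}))  # published: outside it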
Ed Webb

Arianna Huffington: Virality Uber Alles: What the Fetishization of Social Media Is Cost...

  • The media world's fetishization of social media has reached idol-worshipping proportions. Media conference agendas are filled with panels devoted to social media and how to use social tools to amplify coverage, but you rarely see one discussing what that coverage should actually be about. As Wadah Khanfar, former Director General of Al Jazeera, told our editors when he visited our newsroom last week, "The lack of contextualization and prioritization in the U.S. media makes it harder to know what the most important story is at any given time."
  • locked in the Perpetual Now
  • There's no reason why the notion of the scoop can't be recalibrated to mean not just letting us know 10 seconds before everybody else whom Donald Trump is going to endorse but also giving us more understanding, more clarity, a brighter spotlight on solutions
  • We're treating virality as a good in and of itself, moving forward for the sake of moving
  • "Twitter's algorithm favors novelty over popularity."
  • there were too many tweets about WikiLeaks, and they were so constant that Twitter started treating WikiLeaks as the new normal
  • as we adopt new and better ways to help people communicate, can we keep asking what is really being communicated? And what's the opportunity cost of what is not being communicated while we're all locked in the perpetual present chasing whatever is trending?
  • "What it means to be social is if you want to talk to me, you have to listen to me as well." A lot of brands want to be social, but they don't want to listen, because much of what they're hearing is quite simply not to their liking, and, just as in relationships in the offline world, engaging with your customers or your readers in a transparent and authentic way is not all sweetness and light. So simply issuing a statement saying you're committed to listening isn't the same thing as listening.
  • Fetishizing "social" has become a major distraction, and we're clearly a country that loves to be distracted. Our job in the media is to use all the social tools at our disposal to tell the stories that matter -- as well as the stories that entertain -- and to keep reminding ourselves that the tools are not the story. When we become too obsessed with our closed, circular Twitter or Facebook ecosystem, we can easily forget that poverty is on the rise, or that downward mobility is trending upward, or that over 5 million people have been without a job for half a year or more, or that millions of homeowners are still underwater. And just as easily, we can ignore all the great instances of compassion, ingenuity, and innovation that are changing lives and communities.
  • conflates the form with the substance
  • new social tools can help us bear witness more powerfully or they can help us be distracted more obsessively
  • humans are really a herd animal, and that is what we are doing on these social sites: herding up
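Two annotations above (Twitter favoring novelty over popularity, and constant WikiLeaks tweets becoming “the new normal” and vanishing from trends) reflect how trending detection typically scores a spike against a moving baseline rather than raw volume. A toy sketch under that assumption; the formula is invented for illustration and is not Twitter’s algorithm:

```python
def trending_score(current_volume, baseline_volume):
    # Spike relative to the recent norm: steady topics score near 1,
    # however large their raw volume, so sustained discussion never "trends".
    return current_volume / max(baseline_volume, 1)

print(trending_score(5_000, 100))       # 50.0: new topic spiking from almost nothing
print(trending_score(100_000, 95_000))  # ~1.05: huge but constant topic, "the new normal"
```

This is why, in the annotation’s terms, a topic can be enormously popular yet invisible in trends: once its baseline catches up with its volume, the ratio collapses toward 1.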
Ed Webb

K-12 Media Literacy No Panacea for Fake News, Report Argues - Digital Education - Educa...

  • "Media literacy has long focused on personal responsibility, which can not only imbue individuals with a false sense of confidence in their skills, but also put the onus of monitoring media effects on the audience, rather than media creators, social media platforms, or regulators,"
  • the need to better understand the modern media environment, which is heavily driven by algorithm-based personalization on social-media platforms, and the need to be more systematic about evaluating the impact of various media-literacy strategies and interventions
  • In response, bills to promote media literacy in schools have been introduced or passed in more than a dozen states. A range of nonprofit, corporate, and media organizations have stepped up efforts to promote related curricula and programs. Such efforts should be applauded—but not viewed as a "panacea," the Data & Society researchers argue.
  • existing efforts "focus on the interpretive responsibilities of the individual,"
  • "if bad actors intentionally dump disinformation online with an aim to distract and overwhelm, is it possible to safeguard against media manipulation?"
  • A 2012 meta-analysis by academic researchers found that media literacy efforts could help boost students' critical awareness of messaging, bias, and representation in the media they consumed. There have been small studies suggesting that media-literacy efforts can change students' behaviors—for example, by making them less likely to seek out violent media for their own consumption. And more recently, a pair of researchers found that media-literacy training was more important than prior political knowledge when it comes to adopting a critical stance to partisan media content.
  • the roles of institutions, technology companies, and governments