
Digit_al Society: Group items tagged bias


dr tech

Content Moderation is a Dead End. - by Ravi Iyer - 0 views

  •  
    "One of the many policy-based projects I worked on at Meta was Engagement Bait, which is defined as "a tactic that urges people to interact with Facebook posts through likes, shares, comments, and other actions in order to artificially boost engagement and get greater reach." Accordingly, "Posts and Pages that use this tactic will be demoted." To do this, "models are built off of certain guidelines" trained using "hundreds of thousands of posts" that "teams at Facebook have reviewed and categorized." The examples provided are obvious (eg. a post saying "comment "Yes" if you love rock as much as I do"), but the problem is that there will always be far subtler ways to get people to engage with something artificially. As an example, psychology researchers have a long history of studying negativity bias, which has been shown to operate across a wide array of domains, and to lead to increased online engagement. "
dr tech

Recognising (and addressing) bias in facial recognition tech - the Gender Shades Audit ... - 0 views

  •  
    "What if facial recognition technology isn't as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams' story in "Facing up to the problems of recognising faces")."
dr tech

There's Tons Of Black Lives Matter Content On TikTok, But You May Not See Much Of It - 0 views

  •  
    "That algorithm can make the app powerfully addictive and fun, but like other social media platforms, it may also be cutting out whole swaths of content that you'll never get to see. I ran an experiment by creating two fresh accounts on TikTok. With these accounts, the only bias they start with is knowing my location - Toronto - which brings up content made near me."
dr tech

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find - 0 views

  •  
    ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."
dr tech

Surveillance Technology: Everything, Everywhere, All at Once - 0 views

  •  
    "Countries around the world are deploying technologies-like digital IDs, facial recognition systems, GPS devices, and spyware-that are meant to improve governance and reduce crime. But there has been little evidence to back these claims, all while introducing a high risk of exclusion, bias, misidentification, and privacy violations. It's important to note that these impacts are not equal. They fall disproportionately on religious, ethnic, and sexual minorities, migrants and refugees, as well as human rights activists and political dissidents."
dr tech

The chatbot optimisation game: can we trust AI web searches? | Artificial intelligence ... - 0 views

  •  
    "But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias."
dr tech

Mapping the landscape of histomorphological cancer phenotypes using self-supervised lea... - 1 views

  •  
    "Cancer diagnosis and management depend upon the extraction of complex information from microscopy images by pathologists, which requires time-consuming expert interpretation prone to human bias. Supervised deep learning approaches have proven powerful, but are inherently limited by the cost and quality of annotations used for training. Therefore, we present Histomorphological Phenotype Learning, a self-supervised methodology requiring no labels and operating via the automatic discovery of discriminatory features in image tiles. Tiles are grouped into morphologically similar clusters which constitute an atlas of histomorphological phenotypes (HP-Atlas), revealing trajectories from benign to malignant tissue via inflammatory and reactive phenotypes. These clusters have distinct features which can be identified using orthogonal methods, linking histologic, molecular and clinical phenotypes. Applied to lung cancer, we show that they align closely with patient survival, with histopathologically recognised tumor types and growth patterns, and with transcriptomic measures of immunophenotype. These properties are maintained in a multi-cancer study."
dr tech

Can Community Notes match the speed of misinformation? - 0 views

  •  
    "The promise of Community Notes lies in its transparency and its ability to crowdsource moderation from across ideological divides. By emphasizing consensus, the system avoids the mistrust or perception of bias with platform-driven fact-checking or content removal. Last year YouTube adopted this approach, but as a complement to other products such as information panels, or their recent disclosure requirement when content is altered or synthetic."
dr tech

A beauty contest was judged by AI and the robots didn't like dark skin | Technology | T... - 0 views

  •  
    The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
dr tech

Blue Feed, Red Feed - WSJ.com - 0 views

  •  
    "To demonstrate how reality may differ for different Facebook users, The Wall Street Journal created two feeds, one "blue" and the other "red." If a source appears in the red feed, a majority of the articles shared from the source were classified as "very conservatively aligned" in a large 2015 Facebook study. For the blue feed, a majority of each source's articles aligned "very liberal." These aren't intended to resemble actual individual news feeds. Instead, they are rare side-by-side looks at real conversations from different perspectives. "
dr tech

Vast archive of tweets reveals work of trolls backed by Russia and Iran | Technology | ... - 0 views

  •  
    "More than 10m tweets sent by state actors attempting to influence US politics have been released to the public, forming one of the largest archives of political misinformation ever collated. The database reveals the astonishing extent of two misinformation campaigns, which spent more than five years sowing discord in the US and had spillover effects in other national campaigns, including Britain's EU referendum."
dr tech

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian - 0 views

  •  
    "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."
dr tech

To a man with an algorithm all things look like an advertising opportunity | Arwa Mahda... - 0 views

  •  
    "This affects all of us every single day. When the algorithms that govern increasingly large parts of our lives have been designed almost exclusively by young bro-grammers with homogeneous experiences and worldviews, those algorithms are going to fail significant sections of society. A heartbreaking example of this is Gillian Brockell's experience of continuing to get targeted by pregnancy-related ads on Facebook after the stillbirth of her son. Brockell, a Washington Post journalist, recently made headlines when she tweeted an open letter to big tech companies, imploring them to think more carefully about how they target parenting ads."
dr tech

Revealed: how Italy's populists used Facebook to win power | World news | The Guardian - 0 views

  •  
    "The Facebook data, which captured the engagement metrics on thousands of posts by the six major party leaders in the two months leading up to the election, was collected by academics at the University of Pisa's MediaLab. It reveals all of the 25 most shared Facebook posts in the two months leading up to the election were videos, live broadcasts or photos from either Salvini, who runs the far-right League, or Di Maio, the leader of the anti-establishment Five Star Movement (M5S)."
dr tech

Trump idea on regulating Google 'unfathomable' - Channel NewsAsia - 1 views

  •  
    "There is little evidence to show algorithms by online firms are based on politics, and many conservatives - including Trump himself - have large a social media following. Analysts say it would be dangerous to try to regulate how search engines work to please a government or political faction."
dr tech

Are Google search results politically biased? | Jeff Hancock et al | Opinion | The Guar... - 1 views

  •  
    "This way of thinking about search results is wrong. Recent studies suggest that search engines, rather than providing a neutral way to find information, may actually play a major role in shaping public opinion on political issues and candidates. Some research has even argued that search results can affect the outcomes of close elections. In a study aptly titled In Google We Trust participants heavily prioritized the first page of search results, and the order of the results on that page, and continued to do so even when researchers reversed the order of the actual results."
dr tech

Ethics committee raises alarm over 'predictive policing' tool | UK news | The Guardian - 0 views

  •  
    "Amid mounting financial pressure, at least a dozen police forces are using or considering predictive analytics, despite warnings from campaigners that use of algorithms and "predictive policing" models risks locking discrimination into the criminal justice system."
dr tech

The EU's plan for algorithmic copyright filters is looking more and more unlikely / Boi... - 0 views

  •  
    "Under the proposal, online platforms would have to spend hundreds of millions of euros on algorithmic copyright filters that would compare everything users tried to post with a database of supposedly copyrighted works, which anyone could add anything to, and block any suspected matches. This would snuff out all the small EU competitors to America's Big Tech giants, and put all Europeans' communications under threat of arbitrary censorship by balky, unaccountable, easily abused algorithms."
dr tech

Revealed: graphic video used by Cambridge Analytica to influence Nigerian election | UK... - 0 views

  •  
    "Cambridge Analytica sought to influence the Nigerian presidential election in 2015 by using graphically violent imagery to portray a candidate as a supporter of sharia law who would brutally suppress dissenters and negotiate with militant Islamists, a video passed to British MPs reveals."