
Digital Society / Group items tagged: ITGS models


dr tech

The Era of Ownership Is Ending - 0 views

  •  
    "Mobility-as-a-Service (MaaS) is a model for traffic without ownership. You pay a monthly fee for it, like with Spotify, tell the app where you are going and get instant access to taxis, Ubers, buses, and so on. Everything is available on-demand and ownership is no longer needed."
dr tech

Why machine learning struggles with causality | VentureBeat - 0 views

  •  
    "In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."
dr tech

SoundCloud announces overhaul of royalties model to 'fan-powered' system | Soundcloud |... - 0 views

  •  
    "SoundCloud announced on Tuesday it would become the first major streaming service to start directing subscribers' fees only to the artists they listen to, a move welcomed by musicians campaigning for fairer pay. Current practice for streaming services including Spotify, Deezer and Apple is to pool royalty payments and dish them out based on which artists have the most global plays. Many artists and unions have criticised this system, saying it disproportionately favours megastars and leaves very little for musicians further down the pecking order."
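    The difference between the two payout schemes described above can be sketched in a few lines. This is an illustrative model only, with made-up fees and play counts; real services first deduct platform and rights-holder shares.

    ```python
    def pro_rata_payouts(fees_by_user, plays_by_user):
        """Pooled model: all subscriber fees go into one pot, split by share of global plays."""
        pool = sum(fees_by_user.values())
        totals = {}
        for plays in plays_by_user.values():
            for artist, n in plays.items():
                totals[artist] = totals.get(artist, 0) + n
        grand_total = sum(totals.values())
        return {a: pool * n / grand_total for a, n in totals.items()}

    def user_centric_payouts(fees_by_user, plays_by_user):
        """'Fan-powered' model: each subscriber's fee is split only among artists they played."""
        payouts = {}
        for user, plays in plays_by_user.items():
            user_total = sum(plays.values())
            for artist, n in plays.items():
                payouts[artist] = payouts.get(artist, 0) + fees_by_user[user] * n / user_total
        return payouts
    ```

    With two $10 subscribers, one playing a megastar 100 times and one playing an indie artist once, the pooled model gives the indie artist about $0.20 of the second fan's own fee, while the fan-powered model gives them the full $10.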
dr tech

Inside Robinhood, the free trading app at the heart of the GameStop mania - CNN - 0 views

  •  
    "Robinhood's free-trading revolution helped pave the way to the recent Reddit mayhem on Wall Street. The rise of Robinhood means that the ability to buy stocks, on a whim, is now at everyone's fingertips. Robinhood has opened investing up to the masses. Rival online brokerages were forced to mimic Robinhood's zero-commission business model, and some joined forces just to survive. "
dr tech

Facial Recognition's Latest Failure Is Keeping People From Accessing Their Unemployment... - 0 views

  •  
    "Some unemployment applicants have said that ID.me's facial recognition models fail to properly identify them (generally speaking, facial recognition technology is notoriously less accurate for women and people of color). And after their applications were put on hold because their identity couldn't be verified, many should-be beneficiaries have had to wait days or weeks to reach an ID.me "trusted referee" who could confirm what the technology couldn't."
dr tech

How DuckDuckGo makes money selling search, not privacy - TechRepublic - 0 views

  •  
    "It's actually a big myth that search engines need to track your personal search history to make money or deliver quality search results. Almost all of the money search engines make (including Google) is based on the keywords you type in, without knowing anything about you, including your search history or the seemingly endless amounts of additional data points they have collected about registered and non-registered users alike. In fact, search advertisers buy search ads by bidding on keywords, not people….This keyword-based advertising is our primary business model. "
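    The keyword-based model described in the quote can be sketched as follows: the only signal is the query itself, with no user profile or history involved. The ad inventory and bid values here are invented for illustration.

    ```python
    # Hypothetical ad inventory: each ad targets one keyword with a bid.
    ADS = [
        {"advertiser": "FlightsRUs", "keyword": "flights", "bid": 2.50},
        {"advertiser": "HotelNow",   "keyword": "hotel",   "bid": 1.75},
        {"advertiser": "CarHire",    "keyword": "car",     "bid": 0.90},
    ]

    def select_ad(query):
        """Return the highest-bidding ad whose keyword appears in the query.

        Note what is *not* a parameter: there is no user object, cookie,
        or search history -- only the typed keywords.
        """
        terms = set(query.lower().split())
        matches = [ad for ad in ADS if ad["keyword"] in terms]
        return max(matches, key=lambda ad: ad["bid"]) if matches else None
    ```

    A search for "cheap flights to rome" would show the FlightsRUs ad; a query matching no bid keyword simply shows no ad.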
dr tech

What Does Privacy Really Mean Under Surveillance Capitalism? | Literary Hub - 0 views

  •  
    "The internet is primarily funded by the collection, analysis, and trade of data – the data economy. Much of that data is personal data – data about you. The trading of personal data as a business model is increasingly being exported to all institutions in society – the surveillance society, or surveillance capitalism."
immapotaeto

How Google Maps uses DeepMind's AI tools to predict your arrival time - The V... - 0 views

  •  
    "The models work by dividing maps into what Google calls "supersegments"."
dr tech

Deepfake detectors can be defeated, computer scientists show for the first time | Eurek... - 0 views

  •  
    "Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed."
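    The idea of an adversarial example can be shown in miniature. As an assumption for illustration, a simple linear score stands in for the detector, and a small FGSM-style perturbation (stepping each input against the sign of the weights) flips its decision; the paper's attack works on video frames, but the principle is the same.

    ```python
    import numpy as np

    # Toy linear "detector": positive score means "fake".
    w = np.array([1.0, -2.0, 0.5])
    b = 0.0

    def detect(x):
        return (w @ x + b) > 0

    x = np.array([1.0, 0.2, 0.4])   # score 0.8 -> flagged as fake
    eps = 0.3                       # small per-feature perturbation budget
    x_adv = x - eps * np.sign(w)    # step each feature against the score

    # x_adv differs from x by at most 0.3 per feature, yet its score is
    # now negative, so the detector no longer flags it.
    ```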
dr tech

With AI translation service that rivals professionals, Lengoo attracts new $20M round -... - 0 views

  •  
    "Most people who use AI-powered translation tools do so for commonplace, relatively unimportant tasks like understanding a single phrase or quote. Those basic services won't do for an enterprise offering technical documents in 15 languages - but Lengoo's custom machine translation models might just do the trick. And with a new $20 million B round, they may be able to build a considerable lead. The translation business is a big one, in the billions, and isn't going anywhere. It's simply too common a need to release a document, piece of software or live website in multiple languages - perhaps dozens."
dr tech

Full Page Reload - 0 views

  •  
    "These experiments in computational creativity are enabled by the dramatic advances in deep learning over the past decade. Deep learning has several key advantages for creative pursuits. For starters, it's extremely flexible, and it's relatively easy to train deep-learning systems (which we call models) to take on a wide variety of tasks."
dr tech

Algorithm finds hidden connections between paintings at the Met | MIT CSAIL - 0 views

  •  
    "What Hamilton and his colleagues found surprising was that this approach could also be applied to helping find problems with existing deep networks, related to the surge of "deepfakes" that have recently cropped up. They applied this data structure to find areas where probabilistic models, such as the generative adversarial networks (GANs) that are often used to create deepfakes, break down. They coined these problematic areas "blind spots," and note that they give us insight into how GANs can be biased. Such blind spots further show that GANs struggle to represent particular areas of a dataset, even if most of their fakes can fool a human. "
dr tech

Google says AI systems should be able to mine publishers' work unless companies opt out... - 0 views

  •  
    "The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be used in training AI systems". The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."
dr tech

'Critical ignoring' is critical thinking for the digital age | World Economic Forum - 0 views

  •  
    "The platforms that control search were conceived in sin. Their business model auctions off our most precious and limited cognitive resource: attention. These platforms work overtime to hijack our attention by purveying information that arouses curiosity, outrage, or anger. The more our eyeballs remain glued to the screen, the more ads they can show us, and the greater profits accrue to their shareholders."
dr tech

Authors file a lawsuit against OpenAI for unlawfully 'ingesting' their books | Books | ... - 0 views

  •  
    "Two authors have filed a lawsuit against OpenAI, the company behind the artificial intelligence tool ChatGPT, claiming that the organisation breached copyright law by "training" its model on novels without the permission of authors. Mona Awad, whose books include Bunny and 13 Ways of Looking at a Fat Girl, and Paul Tremblay, author of The Cabin at the End of the World, filed the class action complaint to a San Francisco federal court last week."
dr tech

Google will let publishers hide their content from its insatiable AI - 0 views

  •  
    "Google has announced a new control in its robots.txt indexing file that would let publishers decide whether their content will "help improve Bard and Vertex AI generative APIs, including future generations of models that power those products." The control is a crawler called Google-Extended, and publishers can add it to their site's robots.txt file to tell Google not to use their content for those two APIs. In its announcement, the company's vice president of "Trust" Danielle Romain said it's "heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases.""
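    The control described above is an ordinary robots.txt rule. A site that wanted to keep all of its content out of those generative AI products, while remaining indexed for Search, might add something like:

    ```
    # Block the Google-Extended crawler token (generative AI training)
    User-agent: Google-Extended
    Disallow: /

    # Regular Googlebot indexing is unaffected by the rule above
    User-agent: Googlebot
    Allow: /
    ```

    This is a sketch of the mechanism; the exact scope of what Google-Extended covers is defined by Google's crawler documentation, not by the rule itself.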
dr tech

Social media bosses must invest in guarding global elections against incitement of hate... - 0 views

  •  
    "In the context of ongoing corruption crises, rising anti-migrant rhetoric and anti-human-rights movements, and threats to press freedom, the role of social media companies may seem like a lesser priority, but in fact, it is a crucial part of the picture. People's rights and freedoms offline are being jeopardised by online platforms' current business model, where profit is made from stoking up anger and fear. At the South African human rights organisation where I work, the Legal Resources Centre, we are seeing an escalation of xenophobic violence that is often incited on social media. A recent joint investigation we conducted with international NGO Global Witness showed that Facebook, TikTok and YouTube all failed to enforce their own policies on hate speech and incitement to violence by approving adverts that included calls on the police in South Africa to kill foreigners, referred to non-South African nationals as a "disease", as well as incited violence through "force" against migrants."
dr tech

Content Moderation is a Dead End. - by Ravi Iyer - 0 views

  •  
    "One of the many policy-based projects I worked on at Meta was Engagement Bait, which is defined as "a tactic that urges people to interact with Facebook posts through likes, shares, comments, and other actions in order to artificially boost engagement and get greater reach." Accordingly, "Posts and Pages that use this tactic will be demoted." To do this, "models are built off of certain guidelines" trained using "hundreds of thousands of posts" that "teams at Facebook have reviewed and categorized." The examples provided are obvious (e.g. a post saying "comment "Yes" if you love rock as much as I do"), but the problem is that there will always be far subtler ways to get people to engage with something artificially. As an example, psychology researchers have a long history of studying negativity bias, which has been shown to operate across a wide array of domains, and to lead to increased online engagement. "
dr tech

This Voice Doesn't Exist - Generative Voice AI - 0 views

  •  
    "Similarly to how voice cloning raises fears about the consequences of its potential misuse, increasingly many people worry that the proliferation of AI technology will put professionals' livelihoods at risk. At Eleven, we see a future in which voice actors are able to license their voices to train speech models for specific use, in exchange for fees. Clients and studios will still gladly feature professional voice talent in their projects and using AI will simply contribute to faster turnaround times and greater freedom to experiment and establish direction in early development. The technology will change how spoken audio is designed and recorded but the fact that voice actors no longer need to be physically present for every session really gives them the freedom to be involved in more projects at any one time, as well as to truly immortalize their voices."
dr tech

ChatGPT Stole Your Work. So What Are You Going to Do? | WIRED - 0 views

  •  
    "Data leverage can be deployed through at least four avenues: direct action (for instance, individuals banding together to withhold, "poison," or redirect data), regulatory action (for instance, pushing for data protection policy and legal recognition of "data coalitions"), legal action (for instance, communities adopting new data-licensing regimes or pursuing a lawsuit), and market action (for instance, demanding large language models be trained only with data from consenting creators). "