Digit_al Society / Group items tagged "train"

New AI algorithm taught by humans learns beyond its training

  • "This figure compares a traditionally trained algorithm to Aarabi and Guo's heuristically trained neural net."

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated...

  • "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""

AI cameras to detect violence on Sydney trains - Software - iTnews

  • ""The AI will be trained to detect incidents such as people fighting, a group of agitated persons, people following someone else, and arguments or other abnormal behaviour," SMART lecturer and team lead Johan Barthelemy said. "It can also identify an unsafe environment, such as where there is a lack of lighting. The system will then alert a human operator who can quickly react if there is an issue.""

Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart: SoylentNews S...

  • "This text is the AI's main source of information about the world as it is being built, and it influences how it responds to users. If it aces the bar exam, for example, it's probably because its training data included thousands of LSAT practice sites. Tech companies have grown secretive about what they feed the AI. So The Washington Post set out to analyze one of these data sets to fully reveal the types of proprietary, personal, and often offensive websites that go into an AI's training data."

Popular brain training games 'do not make users any smarter'

  • Although companies have heavily promoted brain training games, research suggests the games do not make users any smarter.

8 Skilled Jobs That May Soon Be Replaced by Robots

  • "Unskilled manual laborers have felt the pressure of automation for a long time - but, increasingly, they're not alone. The last few years have been a bonanza of advances in artificial intelligence. As our software gets smarter, it can tackle harder problems, which means white-collar and pink-collar workers are at risk as well. Here are eight jobs expected to be automated (partially or entirely) in the coming decades.

    Call Center Employees: Telemarketing used to happen in a crowded call center, with a group of representatives cold-calling hundreds of prospects every day. Of those, maybe a few dozen could be persuaded to buy the product in question. Today, the idea is largely the same, but the methods are far more efficient. Many of today's telemarketers are not human. In some cases, as you've probably experienced, there's nothing but a recording on the other end of the line. It may prompt you to "press '1' for more information," but nothing you say has any impact on the call - and, usually, that's clear to you. But in other cases, you may get a sales call and have no idea that you're actually speaking to a computer. Everything you say gets an appropriate response - the voice may even laugh. How is that possible? Well, in some cases, there is a human being on the other side, and they're just pressing buttons on a keyboard to walk you through a pre-recorded but highly interactive marketing pitch. It's a more practical version of those funny soundboards that used to be all the rage for prank calls.

    Using soundboard-assisted calling - regardless of what it says about the state of human interaction - has the potential to make individual call center employees far more productive: in some cases, a single worker will run two or even three calls at the same time. In the not too distant future, computers will be able to man the phones by themselves. At the intersection of big data, artificial intelligence, and advanced

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian

  • "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."

I Tried Predictim AI That Scans for 'Risky' Babysitters

  • "The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""

Your next car could have a built-in road-rage detector

  • "Affectiva is running a program that pays drivers to help train its emotion-recognition system. The company sends drivers a kit including cameras and other sensors to place within their vehicles. These record a person's facial expressions, gestures, and tone of voice on the road. That data is then labeled by trained specialists for a range of emotions, and fed into deep neural networks."

The coded gaze: biased and understudied facial recognition technology / Boing Boing

  •  
    " "Why isn't my face being detected? We have to look at how we give machines sight," she said in a TED Talk late last year. "Computer vision uses machine-learning techniques to do facial recognition. You create a training set with examples of faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect.""

Train firm's 'worker bonus' email is actually cybersecurity test | Rail transport | The...

  • "West Midlands Trains emailed about 2,500 employees with a message saying its managing director, Julian Edwards, wanted to thank them for their hard work over the past year under Covid-19. The email said they would get a one-off payment as a thank you after "huge strain was placed upon a large number of our workforce". However, those who clicked through on the link to read Edwards' thank you were instead emailed back with a message telling them it was a company-designed "phishing simulation test" and there was to be no bonus. It warned: "This was a test designed by our IT team to entice you to click the link and used both the promise of thanks and financial reward.""

Scientists identify key conditions to set up a creative 'hot streak' | Artificial intel...

  • "They then analysed how diverse the individuals' work was at different points in their careers. This was assessed using an artificial intelligence system that was trained, in the case of art, to "recognise" different styles by features such as the brush strokes, shapes and objects in a piece, while in the case of film, it was trained to classify a director's work based on plot and cast information. For science, the system identified different research topics based on the papers cited within a researcher's publications."

Google says AI systems should be able to mine publishers' work unless companies opt out...

  • "The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems". The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."

What is AI chatbot phenomenon ChatGPT and could it replace humans? | Artificial intelli...

  • "ChatGPT can also give entirely wrong answers and present misinformation as fact, writing "plausible-sounding but incorrect or nonsensical answers", the company concedes. OpenAI says that fixing this issue is difficult because there is no source of truth in the data they use to train the model and supervised training can also be misleading "because the ideal answer depends on what the model knows, rather than what the human demonstrator knows"."

This AI algorithm could save lives in quake zones | Digital Trends

  • "It forecast 14 earthquakes within a 200-mile area of the estimated epicenter and also made a very accurate forecast regarding their intensity, a report on the university's website said. It failed to warn of just one earthquake and gave eight false predictions. The research team trained the AI to detect statistical bumps in real-time seismic data that the research team had paired with previous earthquakes, the report explained. Once trained, the AI monitored for signs of approaching earthquakes. "Predicting earthquakes is the holy grail," said Sergey Fomel, a professor at UT's Bureau of Economic Geology and a member of the research team, adding: "What we achieved tells us that what we thought was an impossible problem is solvable in principle.""

Millions of Workers Are Training AI Models for Pennies | WIRED

  • "Some experts see platforms like Appen as a new form of data colonialism, says Saiph Savage, director of the Civic AI lab at Northeastern University. "Workers in Latin America are labeling images, and those labeled images are going to feed into AI that will be used in the Global North," she says. "While it might be creating new types of jobs, it's not completely clear how fulfilling these types of jobs are for the workers in the region."

    Due to the ever moving goal posts of AI, workers are in a constant race against the technology, says Schmidt. "One workforce is trained to three-dimensionally place bounding boxes around cars very precisely, and suddenly it's about figuring out if a large language model has given an appropriate answer," he says, regarding the industry's shift from self-driving cars to chatbots. Thus, niche labeling skills have a "very short half-life."

    "From the clients' perspective, the invisibility of the workers in microtasking is not a bug but a feature," says Schmidt. Economically, because the tasks are so small, it's more feasible to deal with contractors as a crowd instead of individuals. This creates an industry of irregular labor with no face-to-face resolution for disputes if, say, a client deems their answers inaccurate or wages are withheld.

    The workers WIRED spoke to say it's not low fees but the way platforms pay them that's the key issue. "I don't like the uncertainty of not knowing when an assignment will come out, as it forces us to be near the computer all day long," says Fuentes, who would like to see additional compensation for time spent waiting in front of her screen. Mutmain, 18, from Pakistan, who asked not to use his surname, echoes this. He says he joined Appen at 15, using a family member's ID, and works from 8 am to 6 pm, and another shift from 2 am to 6 am. "I need to stick to these platforms at all times, so that I don't lose work," he says, but he struggles to earn more than $50

Say what: AI can diagnose type 2 diabetes in 10 seconds from your voice

  • "Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice. Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes. The auditory features that the AI focussed on included slight changes in pitch and intensity, which human ears cannot distinguish. This was then paired with basic health data gathered by the researchers, such as age, sex, height and weight. Researchers believe that the AI model will drastically lower the cost for people with diabetes to be diagnosed."

Human-like programs abuse our empathy - even Google engineers aren't immune | Emily M B...

  • "That is why we must demand transparency here, especially in the case of technology that uses human-like interfaces such as language. For any automated system, we need to know what it was trained to do, what training data was used, who chose that data and for what purpose. In the words of AI researchers Timnit Gebru and Margaret Mitchell, mimicking human behaviour is a "bright line" - a clear boundary not to be crossed - in computer software development. We treat interactions with things we perceive as human or human-like differently. With systems such as LaMDA we see their potential perils and the urgent need to design systems in ways that don't abuse our empathy or trust."

Streaming sites urged not to let AI use music to clone pop stars | Music industry | The...

  • "The music industry is urging streaming platforms not to let artificial intelligence use copyrighted songs for training, in the latest of a run of arguments over intellectual property that threaten to derail the generative AI sector's explosive growth. In a letter to streamers including Spotify and Apple Music, the record label Universal Music Group expressed fears that AI labs would scrape millions of tracks to use as training data for their models and copycat versions of pop stars."

Big Data Ethics: racially biased training data versus machine learning / Boing Boing

  • "O'Neill recounts an exercise to improve service to homeless families in New York City, in which data-analysis was used to identify risk-factors for long-term homelessness. The problem, O'Neill describes, was that many of the factors in the existing data on homelessness were entangled with things like race (and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York). Using data that reflects racism in the system to train a machine-learning algorithm whose conclusions can't be readily understood runs the risk of embedding that racism in a new set of policies, these ones scrubbed clean of the appearance of bias with the application of objective-seeming mathematics."
Showing items 1 - 20 of 94.