
Digit_al Society / Group items tagged: integrity ai


dr tech

ChatGPT maker OpenAI releases 'not fully reliable' tool to detect AI generated content ... - 0 views

  •  
    "Open AI researchers said that while it was "impossible to reliably detect all AI-written text", good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for "academic dishonesty" and when AI chatbots were positioned as humans, they said."
dr tech

Top 10 AI failures of 2016 - TechRepublic - 0 views

  •  
    "But with all of the successes of AI, it's also important to pay attention to when, and how, it can go wrong, in order to prevent future errors. A recent paper by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, outlines a history of AI failures which are "directly related to the mistakes produced by the intelligence such systems are designed to exhibit." According to Yampolskiy, these types of failures can be attributed to mistakes during the learning phase or mistakes in the performance phase of the AI system."
dr tech

'The Gospel': how Israel uses AI to select bombing targets in Gaza | Israel | The Guardian - 0 views

  •  
    "Sources familiar with how AI-based systems have been integrated into the IDF's operations said such tools had significantly sped up the target creation process. "We prepare the targets automatically and work according to a checklist," a source who previously worked in the target division told +972/Local Call. "It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate." A separate source told the publication the Gospel had allowed the IDF to run a "mass assassination factory" in which the "emphasis is on quantity and not on quality". A human eye, they said, "will go over the targets before each attack, but it need not spend a lot of time on them". For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns."
dr tech

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian - 0 views

  •  
    "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."
dr tech

ChatGPT Will See You Now: AI Is Transforming GP Appointments - 0 views

  •  
    "Kahun still relies on its own vast repository of medical knowledge - over 30 million insights from trusted sources - but ChatGPT will now allow patients to describe their symptoms in their own words. Until now it's been a structured conversation, with the AI asking a question, the patient responding, and the AI working its way through a series of more detailed questions based on the answers it gets. Integrating ChatGPT puts the patient in control. They describe their symptoms exactly as they would to a doctor, and ChatGPT responds, just as their doctor would."
dr tech

Google Grapples With 'Horrifying' Reaction to Uncanny AI Tech - 0 views

  •  
    "Eck said machine learning, a powerful form of AI, will be integrated into how humans communicate with each other. He raised the idea of "assistive writing" in the future with Google Docs, the company's online word processing software. This may be based on Google's upcoming Smart Compose technology that suggests words and phrases based on what's being typed. Teachers used to worry about whether students used Wikipedia for their homework. Now they may wonder what part of the work the students wrote themselves, Eck said."
dr tech

Lecturers urged to review assessments in UK amid concerns over new AI tool | Artificial... - 0 views

  •  
    ""As with all technology, there are caveats around making sure that it is used responsibly and not as a licence to cheat, but none of that is insurmountable," he said. In contrast, New York City schools have already banned the use of ChatGPT on all devices and networks because of concerns it will encourage plagiarism. Dr Thomas Lancaster, a computer scientist working at Imperial College London, best known for his research into academic integrity, contract cheating and plagiarism, said it was in many ways a game changer. He said: "It's certainly a major turning point in education where universities have to make big changes."
dr tech

New AI fake text generator may be too dangerous to release, say creators | Technology |... - 0 views

  •  
    "The creators of a revolutionary AI system that can write news stories and works of fiction - dubbed "deepfakes for text" - have taken the unusual step of not releasing their research publicly, for fear of potential misuse."
dr tech

'Why would we employ people?' Experts on five ways AI will change work | Employment | T... - 0 views

  •  
    "In this future, teachers assisted in marking and lesson planning by LLMs would be left with more much-needed time to focus on other elements of their work. However, in a bid to cut costs, the "teaching" of lessons could also be delegated to machines, robbing teachers and students of human interaction. "Of course, that will be for the less well-off students," Luckin says. "The more well-off students will still have lots of lovely one-to-one human interactions, alongside some very smartly integrated AI." Luckin instead advocates a future in which technology eases teachers' workloads but does not disrupt their pastoral care - or disproportionately affect students in poorer areas. "That human interaction is something to be cherished, not thrown out," she says."
dr tech

Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart: SoylentNews S... - 0 views

  •  
    "This text is the AI's mainsource of information about the world as it is being built, and it influences how it responds to users. If it aces the bar exam, for example, it's probably because its training data included thousands of LSAT practice sites. Tech companies have grown secretive about what they feed the AI. So The Washington Post set out to analyze one of these data sets to fully reveal the types of proprietary, personal, and often offensive websites that go into an AI's training data."
dr tech

This Researcher Says AI Is Neither Artificial nor Intelligent | WIRED - 0 views

  •  
    "We need to look at the nose to tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just "raw" material, reused across thousands of projects."
dr tech

We can reduce gender bias in natural-language AI, but it will take a lot more work | Ve... - 0 views

  •  
    "However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably end up picking up on human biases that exist in language data itself."
dr tech

AI Can Write Code Like Humans-Bugs and All | WIRED - 0 views

  •  
    "Alex Naka, a data scientist at a biotech firm who signed up to test Copilot, says the program can be very helpful, and it has changed the way he works. "It lets me spend less time jumping to the browser to look up API docs or examples on Stack Overflow," he says. "It does feel a little like my work has shifted from being a generator of code to being a discriminator of it.""
dr tech

Discrimination by algorithm: scientists devise test to detect AI bias | Technology | Th... - 0 views

  •  
    "Concerns have been growing about AI's so-called "white guy problem" and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making."
dr tech

AI learns to write its own code by stealing from other programs | New Scientist - 0 views

  •  
    "DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software - just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall. "It could allow non-coders to simply describe an idea for a program and let the system build it""
dr tech

Police across the US are training crime-predicting AIs on falsified data - MIT Technolo... - 0 views

  •  
    "The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department's discriminatory practices."
dr tech

Deepfake Tech Turns Single Photo, Audio Into Bizarre Music Video [ Rasputin s... - 0 views

  •  
    "Research by Imperial College in London and Samsung's AI research center showed how a single photo and audio file can be used to generate a singing video portrait. "
dr tech

A new AI tool from Adobe can detect Photoshopped faces / Boing Boing - 0 views

  •  
    ""But we live in a world where it's becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research.""
dr tech

In facial recognition challenge, top-ranking algorithms show bias against Black women |... - 0 views

  •  
    "The results are unfortunately not surprising - countless studies have shown that facial recognition is susceptible to bias. A paper last fall by University of Colorado, Boulder researchers demonstrated that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time."
dr tech

The lessons we all must learn from the A-levels algorithm debacle | WIRED UK - 0 views

  •  
    "More algorithmic decision making and decision augmenting systems will be used in the coming years. Unlike the approach taken for A-levels, future systems may include opaque AI-led decision making. Despite such risks there remain no clear picture of how public sector bodies - government, local councils, police forces and more - are using algorithmic systems for decision making."