"Upcoming cell phone chips from Qualcomm will use artificial intelligence to block malware before it infects your phone. The chip company said on Monday that the next-generation Snapdragon 820 processor used in a variety of Android smartphones will be the first chip that uses machine learning to detect threats and privacy issues thanks to an application called Snapdragon Smart Protect."
Qualcomm is bringing artificial intelligence to a smartphone chip: on-device machine learning that learns to flag privacy and security threats that would otherwise be hard to spot.
"The point of creating this vast portfolio of digital gun art is to feed an algorithm made to detect a firearm as soon as a security camera catches it being drawn by synthetically creating tens of thousands of ways each gun may appear. Arcarithm is one of several companies developing automated active shooter detection technology in the hopes of selling it to schools, hotels, entertainment venues and the owners of any location that could be the site of one of America's 15,000 annual gun murders and 29,000 gun injuries."
"Their tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish the images made with computers from the ones produced by human photographers and artists. But some tech leaders and misinformation experts have expressed concern that advances in A.I. will always stay a step ahead of the tools.
To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short."
"Let's start with the dementia socks. An intriguing idea, born out of a personal tragedy.
Zeke Steer watched his own great-grandmother decline into dementia, and wanted to help.
Spin forward a few years, and the research scientist has developed socks which detect early physical signs of the onset of diseases like Alzheimer's.
"Sensors in our socks are detecting early signs of distress, and alerting a carer that they may need help," he says."
"Facebook will rely on users to report fake news despite evidence that suggests users have a difficult time assessing or identifying fake news. Teens seem to be especially vulnerable to fake news. A recent study by researchers at Stanford found that middle and high school students have a difficult time detecting fake news from real news, or detecting bias in tweets and Facebook statuses."
""We don't do facial recognition, we do face detection," Ke Quang, chief operating officer of Quividi, told the Guardian on Monday. "It's software which works from the video feed coming off the camera. It can detect if it's seeing a face, but it never records the image or biomorphological information or traits."
"Artificial intelligence researchers have developed a mosquito early warning system that raises the alarm when the insects are near, by detecting the whine of their wingbeats.
The system uses an app that can run on a £20 mobile phone to analyse sounds in the environment and issue a warning if it hears the telltale buzz as a mosquito swoops past."
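The core idea the article describes, listening for the telltale wingbeat tone in ambient audio, can be sketched very roughly. This is an illustrative toy, not the researchers' actual app: the sample rate, frequency band, and function names are all assumptions, and real mosquito detection uses far more robust models than a single FFT peak.

```python
import numpy as np

# Illustrative sketch only: flag a possible mosquito when the dominant
# frequency of an audio frame falls in a rough wingbeat band (~300-600 Hz).
SAMPLE_RATE = 8000          # Hz; assumed, low rate suffices for wingbeat tones
WINGBEAT_BAND = (300, 600)  # Hz; rough range, varies by species

def dominant_frequency(frame, sample_rate=SAMPLE_RATE):
    """Return the frequency (Hz) carrying the most energy in the frame."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def sounds_like_mosquito(frame, band=WINGBEAT_BAND):
    """True if the frame's dominant frequency sits in the wingbeat band."""
    lo, hi = band
    return lo <= dominant_frequency(frame) <= hi

# Synthetic check: a 450 Hz "buzz" versus a 1500 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
buzz = np.sin(2 * np.pi * 450 * t)
other = np.sin(2 * np.pi * 1500 * t)
print(sounds_like_mosquito(buzz))   # True
print(sounds_like_mosquito(other))  # False
```

In practice an app like the one described would run this kind of analysis continuously on short overlapping frames and only alert after several consecutive positive frames, to avoid false alarms from transient noise.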
" "Why isn't my face being detected? We have to look at how we give machines sight," she said in a TED Talk late last year. "Computer vision uses machine-learning techniques to do facial recognition. You create a training set with examples of faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect.""
"Facebook has detailed the steps it's taking to get help for people who need it. Which involves using artificial intelligence to "detect posts or live videos where someone might be expressing thoughts of suicide," identifying appropriate first responders, and then employing more people to "review reports of suicide or self harm".
The social network has been testing this system in the U.S. for the last month, and "worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts." In some cases the local authorities were notified in order to help."
""The AI will be trained to detect incidents such as people fighting, a group of agitated persons, people following someone else, and arguments or other abnormal behaviour," SMART lecturer and team lead Johan Barthelemy said.
"It can also identify an unsafe environment, such as where there is a lack of lighting.The system will then alert a human operator who can quickly react if there is an issue.""
""Our job as teachers and professors is not to surveil and police our students, but it's to educate them," he says. "You are assuming that students are trying to cheat, rather than assuming students are trying to learn and help them learn."
He sees the growing adoption of automated proctoring tools as a continuation of a trend started by plagiarism-detection services like Turnitin, which he says were built on the assumption that students want to cheat and must be policed. But despite early pushback by students and some professors, plagiarism detection has become ubiquitous. Parry worries the same thing could happen with automated proctoring."
"Open AI researchers said that while it was "impossible to reliably detect all AI-written text", good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for "academic dishonesty" and when AI chatbots were positioned as humans, they said."
"The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool, like everything in AI, has a high probability of detecting GPT output, but not 100%; as George E. P. Box put it, "All models are wrong, but some are useful"."
"Apple's Not Digging Itself Out of This One: "Online researchers say they have found flaws in Apple's new child abuse detection tool that could allow bad actors to target iOS users.""
"Dr Kovanović believes this is a "pointless race to have", given the momentum of the technology and its potential positives. He says AI detection "misses the point".
"I think it's much better to sink our effort into how we can use AI productively."
He also argued the practice of using anti-plagiarism software to score university students on how likely it was their work was written by AI was causing unnecessary stress.
"It's hard to trust that score," he said."
"Some of the same social media analyses that have helped Google and the Centers for Disease Control and Prevention spot warning signs of a flu outbreak could be used to detect the rumblings of violent conflict before it begins, scholars said in a paper released this week.
Kenyan officials used essentially this system to track hate speech on Facebook, blogs and Twitter in advance of that nation's 2013 presidential election, which brought Uhuru Kenyatta to power."
"Train the Deep Learning Ahem Detector with two sets of audio files, "a negative sample with clean voice/sound" (minimum 3 minutes) and "a positive one with 'ahem' sounds concatenated" (minimum 10s) and it will detect "ahems" in any voice sample thereafter."
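The two-class training setup described there, a "negative" set of clean voice audio and a "positive" set of concatenated "ahem" sounds, can be sketched with a toy nearest-centroid classifier. This is not the project's actual deep-learning model; the features (spectral centroid, RMS energy), sample rate, and synthetic clips below are all illustrative assumptions.

```python
import numpy as np

SR = 8000  # Hz; assumed sample rate for this toy example

def features(clip):
    """Crude per-clip features: spectral centroid and RMS energy."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / SR)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
    rms = np.sqrt(np.mean(clip ** 2))
    return np.array([centroid, rms])

def train(negatives, positives):
    """Average each class's features into a centroid (stand-in for training)."""
    neg = np.mean([features(c) for c in negatives], axis=0)
    pos = np.mean([features(c) for c in positives], axis=0)
    return neg, pos

def is_ahem(clip, neg, pos):
    """Classify a clip by whichever class centroid its features are nearer."""
    f = features(clip)
    return np.linalg.norm(f - pos) < np.linalg.norm(f - neg)

# Synthetic stand-ins: "clean voice" as low 200 Hz tones, "ahem" as 700 Hz bursts.
t = np.arange(SR) / SR
voice_clips = [np.sin(2 * np.pi * 200 * t) for _ in range(3)]
ahem_clips = [np.sin(2 * np.pi * 700 * t) for _ in range(3)]
neg, pos = train(voice_clips, ahem_clips)
print(is_ahem(np.sin(2 * np.pi * 690 * t), neg, pos))  # True
print(is_ahem(np.sin(2 * np.pi * 210 * t), neg, pos))  # False
```

The real project trains a neural network on spectrogram-like representations; the point of the sketch is just the workflow the quote describes: gather labeled negative and positive audio, fit a model, then classify new clips.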
"Concerns have been growing about AI's so-called "white guy problem" and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making."