"But for many scientists, Twitter has become an essential tool for collaboration and discovery - a source of real-time conversations around research papers, conference talks and wider topics in academia. Papers now zip around scientific communities faster thanks to Twitter, says Johann Unger, a linguist at Lancaster University, UK, who notes that extra information is also shared in direct private messages through the site. And its limit on tweet length - currently 280 characters - has pushed academics into keeping their commentary pithy, he adds."
""As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious," said Dr Peter Park, an AI existential safety researcher at MIT and author of the research.
Park was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world conquest strategy game Diplomacy. Meta stated that Cicero had been trained to be "largely honest and helpful" and to "never intentionally backstab" its human allies."
"The independent review on equity in medical devices once again highlights the multiple ways in which medical technology development can lead to solutions whereby the benefits are distributed inequitably across society, or can further exacerbate health inequalities (UK report reveals bias within medical tools and devices, 11 March). While the report is welcome, the challenge facing scientists and engineers is how to innovate medical devices differently to respond to longstanding societal biases and inequalities."
"DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software - just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
"It could allow non-coders to simply describe an idea for a program and let the system build it""
DataSift is a new kind of search engine that uses crowdsourced human intelligence to answer vague, complex or visual questions, even when users are not sure what they are searching for.
"At Imperial College London, Murray Shanahan and colleagues are working on a way around this problem using an old, unfashionable technique called symbolic AI. "Basically this meant an engineer labelled everything for the AI," says Shanahan. His idea is to combine this with modern machine learning.
Symbolic AI never took off, because manually describing everything quickly proved overwhelming. Modern AI has overcome that problem by using neural networks, which learn their own representations of the world around them. "They decide what is salient," says Marta Garnelo, also at Imperial College."
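What "an engineer labelled everything" means can be sketched with a toy knowledge base (purely illustrative, not Shanahan's system): every object and relation is a hand-written symbolic label, and the system reasons over those labels with explicit, hand-written rules.

```python
# Toy symbolic AI (illustrative; not Shanahan's actual system):
# an engineer hand-labels every object and relation as a symbol,
# and inference follows explicit, hand-written rules.
facts = {
    ("cup", "is_on", "table"),
    ("table", "is_on", "floor"),
}

def transitively_on(a, b, facts):
    """Hand-written rule: 'is_on' is transitive (assumes no cycles)."""
    if (a, "is_on", b) in facts:
        return True
    return any(
        x == a and transitively_on(y, b, facts)
        for (x, _, y) in facts
    )

print(transitively_on("cup", "floor", facts))  # → True
```

The cost Garnelo points to is visible here: every fact and every rule had to be typed in by hand, which is exactly what became overwhelming at scale.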
"This type of approach can speed up learning times and improve the efficiency of algorithms, says Max Jaderberg at Google's AI company DeepMind. The company used a similar technique last year to teach an AI to explore a virtual maze. Its algorithm learned much more quickly than conventional reinforcement learning approaches. "Our agent is far quicker and requires a lot less experience from the world to train, making it much more data efficient," he says."
"The world's spookiest philosopher is Nick Bostrom, a thin, soft-spoken Swede. Of all the people worried about runaway artificial intelligence, and Killer Robots, and the possibility of a technological doomsday, Bostrom conjures the most extreme scenarios. In his mind, human extinction could be just the beginning."
"Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed."
"The paper, co-authored by researchers inside and outside Google, contended that technology companies could do more to ensure AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases and use of offensive language, according to a draft copy seen by Reuters."
"Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it's appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California."