"Marlinspike's TextSecure has an impeccable reputation as a secure platform, and WhatsApp founder Jan Koum attributes his desire to add security to his users' conversations to his experiences with the surveillance state while growing up in Soviet Ukraine. However, without any independent security audit or (even better) source-code publication, we have to take the company's word that it has done the right thing and that it's done it correctly."
"GCHQ, the UK's spy agency, designed a security protocol for voice-calling called MIKEY-SAKKE and announced that it will only certify VoIP systems as secure if they use MIKEY-SAKKE, which is being marketed as 'government-grade security'.
But a close examination of MIKEY-SAKKE reveals some serious deficiencies. The system is designed from the ground up to support 'key escrow': that is, the ability of third parties to listen in on conversations without the callers knowing about it."
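The escrow property described above can be illustrated with a toy model. This is not the actual SAKKE construction (which is pairing-based, per RFC 6509); it is a simplified sketch of the underlying design choice: when every user's private key is derived from a master secret held by a key-management server, whoever holds that master secret can re-derive any key and decrypt any conversation. All names and keys below are invented for illustration.

```python
import hmac
import hashlib

# Toy sketch of identity-based key derivation and why it implies escrow.
# NOT real SAKKE math; a hypothetical, simplified model for illustration only.

MASTER_SECRET = b"kms-master-secret"  # held by the key-management server


def user_private_key(identity: str, master: bytes = MASTER_SECRET) -> bytes:
    # In identity-based schemes, a user's private key is *derived* from
    # their identity using the server's master secret.
    return hmac.new(master, identity.encode(), hashlib.sha256).digest()


def encrypt_for(identity: str, plaintext: bytes) -> bytes:
    # Toy "encryption": XOR with a keystream hashed from the derived key.
    key = user_private_key(identity)
    stream = hashlib.sha256(key).digest()
    return bytes(p ^ stream[i % len(stream)] for i, p in enumerate(plaintext))


def decrypt_with_key(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is symmetric, so the same keystream decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(c ^ stream[i % len(stream)] for i, c in enumerate(ciphertext))


# Alice's phone decrypts normally with her own derived key...
msg = b"hello"
ct = encrypt_for("alice@example.com", msg)
assert decrypt_with_key(user_private_key("alice@example.com"), ct) == msg

# ...but anyone holding the master secret can re-derive her key and read
# the same ciphertext, without Alice ever knowing. That is the escrow.
escrow_key = user_private_key("alice@example.com", MASTER_SECRET)
assert decrypt_with_key(escrow_key, ct) == msg
```

Contrast this with end-to-end designs such as the Signal protocol, where keys are generated on the endpoints and no central party can reconstruct them.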
""Each time we build new ways of doing something close to human level, it allows us to automate or augment human labor," said Jeremy Howard, founder of Fast.ai, an independent lab based in San Francisco that is among those at the forefront of this research. "This can make life easier for a lawyer or a paralegal. But it can also help with medicine."
It may even lead to technology that can, finally, carry on a decent conversation.
But there is a downside: On social media services like Twitter, this new research could also lead to more convincing bots designed to fool us into thinking they are human, Howard said."
"California has banned bots that pretend to be human. Under a newly signed bill, these bots will need to disclose their nature, making it clear to the user that they're conversing with a machine rather than a human.
Under the bill, signed on Friday by Democratic Gov. Jerry Brown, automated accounts, more commonly called 'bots', will need to disclose to customers that they're not real humans, according to reporting from PC Magazine. The bill is part of a move to help users avoid falling victim to automated messages selling goods or services in a commercial transaction or seeking to influence a vote in an election."
""The person on the other line said, 'unplug your Alexa devices right now,'" Danielle told KIRO. "'You're being hacked.'" The couple had reportedly placed Amazon devices in every room of their house to control the heat, lights, and security system. All of the gadgets were pulled out after their colleague proved they had received the unauthorized recording."
"We want to start a conversation about emotion recognition technology. Explore the site, watch the video, play a game and add your thoughts to our research. Or turn on your camera to activate our very own emotion recognition machine... will it 'emojify' you?"
""I think, without question, having access to quantitative data about our conversations, about facial expressions and intonations, would provide another dimension to the clinical interaction that's not detected right now," said Barron, a psychiatrist based in Seattle and author of the new book Reading Our Minds: The Rise of Big Data Psychiatry."
"Last month, the U.S. Patent and Trademark Office granted a patent to Microsoft that outlines a process to create a conversational chatbot of a specific person using their social data. In an eerie twist, the patent says the chatbot could potentially be inspired by friends or family members who are deceased, which is almost a direct plot of a popular episode of Netflix's Black Mirror."
"What wasn't publicly known until now is that Facebook actually ran experiments to see how the changes would affect publishers, and when it found that some of them would have a dramatic impact on the reach of right-wing "junk sites," as a former employee with knowledge of the conversations puts it, the engineers were sent back to lessen those impacts. As the Wall Street Journal reported on Friday, they came back in January 2018 with a second iteration that dialed up the harm to progressive-leaning news organizations instead."
""The social engineering type of attack does not tend to scale [up] easily given the time and effort required to succeed, and therefore is more often than not used by individuals rather than the 'call centre' approach of criminal enterprises," Goddard says. "The trigger to target an individual could be targeted, or opportunistic such as overhearing a conversation or getting access to sensitive or exploitable information like a picture or bank statement.""
"Unrecord's appearance at the centre of gaming conversation raises another question: as game graphics improve, to the extent where you don't need millions of dollars and dozens of people to create games that look impressively realistic, how far do we go with it? Motorcycle racing game Ride 4 made waves recently with ultra-realistic gameplay footage of bikes zooming around rainy Northern Ireland; in that context, photorealism is a boon. But when games involve violence, as they often do, it becomes much more uncomfortable. I have suppressed mild disgust for years at the gratuitous neck-snapping or stabbing animations in most first-person shooters. How much worse would that instinctive ickiness be if the game and its characters looked more real?"
"The app, called Historical Figures, has begun to take off in the two weeks since it was released as a way to have conversations with any of 20,000 notable people from history.
But this week, it sparked viral controversy online over its inclusion of Hitler, his Nazi lieutenants and other dictators from the past.
"Are neo-Nazis going to be attracted to this site so they can go and have a dialogue with Adolf Hitler?" asked Rabbi Abraham Cooper, the director of global social action for the Simon Wiesenthal Center, a Jewish human rights organization. "
"A Google employee named Blake Lemoine was put on leave recently after claiming that one of Google's artificial-intelligence language models, called LaMDA (Language Models for Dialogue Applications), is sentient. He went public with his concerns, sharing his text conversations with LaMDA. At one point, Lemoine asks, "What does the word 'soul' mean to you?" LaMDA answers, "To me, the soul is a concept of the animating force behind consciousness and life itself."
"I was inclined to give it the benefit of the doubt," Lemoine explained, citing his religious beliefs. "Who am I to tell God where he can and can't put souls?""
"Has the introduction of social media in the past 10-15 years caused the increase in prevalence of mental health problems in teens?
At this point, most of what I'm reading and hearing is a resounding yes (especially for girls).
I don't necessarily disagree with this. Just to level set: I think there is a very good chance (my current number is probably around 75%) that social media has contributed to the teen mental health crisis. At the same time, I think large-scale mental health crises are complex phenomena, that there are likely multiple causes, and that we need to make sure we're approaching the data with the scrutiny it deserves. It's this nuance that, I think, has been missing from the conversation."
"Because for all the promises of smart tech, at least a "dumb" heating system can't be taken over by a vindictive ex, and used to torment you with unbearable heat or terrible cold, when you have no idea why. A daft doorbell can't tell a stalker when you leave, or when you're home, or where you go if you use a smartwatch, too. And no stupid speaker can be used to listen in on your private conversations. These situations may sound like nightmares, but they are all real cases of smart tech-enabled domestic abuse. And the number of cases is shooting up: between 2018 and 2022, the domestic violence charity Refuge saw an increase of 258% in the number of survivors supported by their tech abuse team."
"It remains possible, however, that the true costs of social-media anxieties are harder to tabulate. Gentzkow told me that, for the period between 2016 and 2020, the direct effects of misinformation were difficult to discern. "But it might have had a much larger effect because we got so worried about it: a broader impact on trust," he said. "Even if not that many people were exposed, the narrative that the world is full of fake news, and you can't trust anything, and other people are being misled about it, well, that might have had a bigger impact than the content itself." Nyhan had a similar reaction. "There are genuine questions that are really important, but there's a kind of opportunity cost that is missed here. There's so much focus on sweeping claims that aren't actionable, or unfounded claims we can contradict with data, that are crowding out the harms we can demonstrate, and the things we can test, that could make social media better." He added, "We're years into this, and we're still having an uninformed conversation about social media. It's totally wild.""
"But for many scientists, Twitter has become an essential tool for collaboration and discovery: a source of real-time conversations around research papers, conference talks and wider topics in academia. Papers now zip around scientific communities faster thanks to Twitter, says Johann Unger, a linguist at Lancaster University, UK, who notes that extra information is also shared in direct private messages through the site. And its limit on tweet length (currently 280 characters) has pushed academics into keeping their commentary pithy, he adds."