"There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNNs), made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains, often seem to mirror not just human intelligence but also human inexplicability."
"Sam Smith, of the health data privacy group MedConfidential, said: "This is an utterly appalling case. It's an individual problem that the doctor did this. But it's a systemic problem that they could do it, and that flaws in the way the NHS's data management systems work meant that any doctor can do something like this to any patient."
"This stuff is far more complicated than calibrating safe following distances or even braking for a loose soccer ball. Goodall writes: "There is no obvious way to effectively encode complex human morals in software.""
"However, once technology is released it's like herding cats. Deepfakes are a moving feast and as soon as moderators find a way of detecting them, people will find a workaround."
"The system is designed to solve an often-ignored problem: an estimated 4 billion people, 75% of the earth's population, have no address for mailing purposes, making it difficult to open a bank account, get a delivery, or be reached in an emergency."
"According to Derek & Laura Cabrera, "wicked problems result from the mismatch between how real-world systems work and how we think they work". With systems thinking, there is constant testing and feedback between the real world, in all its complexity, and our mental model of it. This openness to test and look for feedback led Dr. Fisman to change his mind on the airborne spread of the coronavirus."
"Marx had a point. Especially when it comes to ethics, philosophy is often better at finding complications and problems than proposing changes. Silicon Valley has been better at changing the world (even if through breaking things) than taking pause to think through the consequences."
"People have been censored or blocked from the platform because their names sounded too fake. Ads for clothing for disabled people were removed by algorithms that believed they were breaking the rules and promoting medical devices. The Vienna Tourist Board had to move to the adult-content-friendly site OnlyFans to share works of art from their museum after Facebook removed photos of paintings. Words that have rude popular meanings but other more specific definitions in certain circles - like "hoe" amongst gardeners, or "cock" amongst chicken farmers or gun enthusiasts - can land people in the so-called "Facebook jail" for days or even weeks."
"Google should know better, given that it already had a "hallucination problem" with its featured snippets at the top of search results back in 2017. The snippets algorithm seemed to particularly enjoy telling lies about U.S. presidents. Again, what could go wrong?"
"Commentary: Our emails are getting more impolite and that might be a problem
Although some choose to dive into their content immediately, starting your emails with an opening greeting could raise the chances of them being read, says the Financial Times' Pilita Clark."
"Has the introduction of social media in the past 10-15 years caused the increase in prevalence of mental health problems in teens?
At this point, most of what I'm reading and hearing is a resounding yes (especially for girls).
I don't necessarily disagree with this. Just to level set: I think there is a very good chance (my current number is probably around 75%) that social media has contributed to the teen mental health crisis. At the same time, I think large-scale mental health crises are complex phenomena, that there are likely multiple causes, and that we need to make sure we're approaching the data with the scrutiny it deserves. It's this nuance that, I think, has been missing from the conversation."
"Just like Napster forced legal music streaming to advance, popular tools like ChatGPT, Midjourney, and DALL-E2 will force us to establish AI best practices and ethical guidelines.
The IP for Generative AI will continue to be debated in the culture and in the courts, and we will collectively come to agreements. The only issue is whether regulation will ever be able to keep up with the rapid pace of AI."
"Apple is banking on its upcoming AI features to boost iPhone sales, especially in China, where demand has been lagging. But there's a problem: ChatGPT - soon to be integrated into Siri - is banned in China."
""An emotionally intelligent human does not usually claim they can accurately put a label on everything everyone says and tell you this person is currently feeling 80% angry, 18% fearful, and 2% sad," says Edward B Kang, an assistant professor at New York University writing about the intersection of AI and sound. "In fact, that sounds to me like the opposite of what an emotionally intelligent person would say."
Adding to this is the notorious problem of AI bias. "Your algorithms are only as good as the training material," Barrett says. "And if your training material is biased in some way, then you are enshrining that bias in code.""
"Journalists discovered that two companies had posted the personal data of 170,000 customers online. The leak, which exposed the victims to identity theft and fraud, was reportedly so bad that social security numbers, passport scans, financial data and home addresses were indexed by search engines. Rather than merely address the problem, however, TerraCom and YourTel threatened the reporters, referring to them as "hackers" and accusing them of "numerous violations of the Computer Fraud and Abuse Act"
"So over the weekend, when protesters were expected to rally for the third time, Facebook was inaccessible to locals, who had been using the platform to organise.
People also had problems accessing Facebook's Instagram service.
Israeli VPN service Hola posted a statement saying it saw a surge of about 200,000 users from Vietnam on its system over the weekend, using it to access Facebook."
"95% of the US population, 93% of Europeans and 92% of Asians can't do "level three" tasks like "You want to know what percentage of the emails sent by John Smith last month were about sustainability" -- tasks where "use of tools (e.g. a sort function) is required to make progress towards the solution. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, and the criteria to be met may or may not be explicit.""
"Facebook will rely on users to report fake news despite evidence that suggests users have a difficult time assessing or identifying fake news. Teens seem to be especially vulnerable to fake news. A recent study by researchers at Stanford found that middle and high school students have a difficult time distinguishing fake news from real news, or detecting bias in tweets and Facebook statuses."