
Digit_al Society: Group items tagged "People"


dr tech

ChatGPT isn't a great leap forward, it's an expensive deal with the devil | John Naught... - 0 views

  •  
    "The intriguing echo of Eliza in thinking about ChatGPT is that people regard it as magical even though they know how it works - as a "stochastic parrot" (in the words of Timnit Gebru, a well-known researcher) or as a machine for "hi-tech plagiarism" (Noam Chomsky). But actually we do not know the half of it yet - not the CO2 emissions incurred in training its underlying language model or the carbon footprint of all those delighted interactions people are having with it. Or, pace Chomsky, that the technology only exists because of its unauthorised appropriation of the creative work of millions of people that just happened to be lying around on the web? What's the business model behind these tools? And so on. Answer: we don't know."
dr tech

Why is everyone saying Instagram is rubbish now - and what's TikTok got to do with it? ... - 0 views

  •  
    "The upshot is that people are getting way more videos in their Instagram feeds, and it's going full screen for those videos, so it scrolls like TikTok. But that's not all: people are now also seeing "suggested posts", which works in a TikTok-style algorithm that brings in random posts from people you don't follow into your feed."
dr tech

US drones could be killing the wrong people because of metadata errors - Boing Boing - 1 views

  •  
    "As Redditor actual_hacker said in a thread, the big point of this article: "The US has built a SIM-card kill list. They're shooting missiles at cell phones without caring about who is holding the phone. That is why so many innocent people keep getting killed. That is what this story is about. The next time someone says "it's just metadata," remember this story. Innocent people die because of NSA's use of metadata: the story cites 14 women and 21 children killed in just one operation. All because of metadata.""
dr tech

From Trump Nevermind babies to deep fakes: DALL-E and the ethics of AI art | Artificial... - 0 views

  •  
    ""We are seeing deep fakes being used all the time, and the technology is going to allow still images, but ultimately also video images, to be synthesised [more easily] by bad actors," he says. DALL-E has content policy rules in place that prohibit bullying, harassment, the creation of sexual or political content, or creating images of people without their consent. And while Open AI has limited the number of people who can sign up to DALL-E, its lower-grade replica, DALL-E mini, is open access, meaning people can produce anything they want."
dr tech

'We're going through a big revolution': how AI is de-ageing stars on screen | Film | Th... - 0 views

  •  
    "Tan, however, has misgivings. He says: "AI is in a sense cool and fun in the beginning but then you realise it's actually dangerous. It can imitate people and make them do things on screen and then you can have a whole societal belief that those people are disgraced for whatever they did on screen and in reality it wasn't even them. It's just a ploy to wind people up. "You see it in warfare, which I think Russia tried with Ukraine. There was this use that had the Ukrainian president saying they were giving up and soldiers should put their weapons down. That was done with AI. A simple tool which doesn't look dangerous suddenly can be very dangerous because now you are affecting reality with it.""
dr tech

Zuck's New Glasses Are a Fashionable Privacy Nightmare - 0 views

  •  
    "That is, in a way, Orion's most powerful and dangerous feature: they're so normal that people will want to wear them; they're so normal that people won't notice them. On the other hand, Meta has created a new gadget that, like every other before, can be enhanced or modified for other purposes, for better or worse. Should Meta stop building tech because a small number of people will use it for evil? If they had served this use case on a silver platter then yes, they should be held accountable. They didn't. Sure, Zuck's no friend, but he's not the one sneaking into your privacy."
dr tech

The Intelligence Age - 0 views

  •  
    "As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI's benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we'll run out of things to do (even if they don't look like "real jobs" to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games. Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
dr tech

'You get desensitised to it': how social media fuels fear of violence | Social media | ... - 0 views

  •  
    ""People glamourise them types of things and the smallest thing can be escalated on social media," he said. "A fight can happen between two people and they can squash it [reach a truce], but because the video's out there on social media and it looks from a different perspective like one is losing, pride is going to be hurt so you might go out there and get some sort of revenge and let people know, you're not going to mess with me." It all created anxiety, explained St Clair-Hughes. "The fearmongering on social media puts you in a fight or flight state so when you leave the house now you are either on the front foot or on the back foot. So you step outside ready to do whatever you need to do … It's the subliminals - no one's telling you to pick up a knife and commit violence, it's just the more that you see it …""
dr tech

'Serious concerns' about DWP's use of AI to read correspondence from benefit claimants ... - 0 views

  •  
    " 'Serious concerns' about DWP's use of AI to read correspondence from benefit claimants White mail system handles 'highly sensitive personal data' and people not told it is processing their information AI prototypes for UK welfare system dropped as officials lament 'false starts' Robert Booth UK technology editor Mon 27 Jan 2025 05.00 GMT Share When your mailbag brims with 25,000 letters and emails every day, deciding which to answer first is daunting. When lurking within are pleas for help from some of the country's most vulnerable people, the stakes only get higher. That is the challenge facing the Department for Work and Pensions (DWP) as correspondence floods in from benefit applicants and claimants - of which there are more than 20 million, including pensioners, in the UK. The DWP thinks it may have found a solution in using artificial intelligence to read it all first - including handwritten missives. Human reading used to take weeks and could leave the most vulnerable people waiting for too long for help. But "white mail" is an AI that can do the same work in a day and supposedly prioritise the most vulnerable cases for officials to get to first."
dr tech

NSA trove shows 9:1 ratio of innocents to suspicious people in "targeted surveillance" ... - 0 views

  •  
    "The NSA uses laughably sloppy tools for deciding whether a target is a "US person" (a person in the USA, or an American citizen abroad). For example, people whose address books contain foreign persons are presumed by some analysts to be foreign. Likewise, people who post in "foreign" languages (the US has no official state language) are presumed by some analysts to be non-US persons."
dr tech

Being human: how realistic do we want robots to be? | Technology | The Guardian - 0 views

  •  
    "Anouk van Maris, a robot cognition specialist who is researching ethical human-robot interaction, has found that comfort levels with robots vary greatly depending on location and culture. "It depends on what you expect from it. Some people love it, others want to run away as soon as it starts moving," she says. "The advantage of a robot that looks human-like is that people feel more comfortable with it being close to them, and it is easier to communicate with it. The big disadvantage is that you expect it to be able to do human things and it often can't.""
dr tech

Under Europe's virus lockdown, social media proves a lifeline - CNA - 0 views

  •  
    "For many, the surge in social media use in recent years has been an awful contradiction - rather than making people more friendly, it has tended to cut them off, cause division and fuel anger and resentment, not sociability. But as Europe adjusts to the reality of self-isolation, there are signs social media can bring out the best in people, not just the boastful or argumentative bits many decry."
dr tech

Can robots make good therapists? | 3 Quarks Daily - 0 views

  •  
    "From another perspective, the idea that people seem comfortable offloading their troubles not on to a sympathetic human, but a sympathetic-sounding computer program, might present an opportunity. Even before the pandemic, there were not enough mental health professionals to meet demand. In the UK, there are 7.6 psychiatrists per 100,000 people; in some low-income countries, the average is 0.1 per 100,000."
dr tech

Experimental evidence of massive-scale emotional contagion through social networks | PNAS - 0 views

  •  
    "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. "
dr tech

AI cameras to detect violence on Sydney trains - Software - iTnews - 0 views

  •  
    ""The AI will be trained to detect incidents such as people fighting, a group of agitated persons, people following someone else, and arguments or other abnormal behaviour," SMART lecturer and team lead Johan Barthelemy said. "It can also identify an unsafe environment, such as where there is a lack of lighting.The system will then alert a human operator who can quickly react if there is an issue.""
dr tech

Microsoft Channels 'Black Mirror': Turn Deceased People Into Chatbots | IndieWire - 0 views

  •  
    "As reported by The Independent this week, Microsoft has been granted a patent that allows the company "to make a chatbot using the personal information of deceased people." Under the patent, Microsoft can create an artificial intelligence bot "based on images, voice data, social media posts, electronic message, and more personal information" of a deceased person."
dr tech

How do people use ChatGPT? We analyzed real AI chatbot conversations - The Washington Post - 0 views

  •  
    "What do people really ask chatbots? It's a lot of sex and homework. AI chatbots are taking the world by storm. We analyzed thousands of conversations to see what people are really asking them and what topics are most discussed."
dr tech

Should AI systems behave like people? | AISI Work - 0 views

  •  
    "Most people agree that AI should transparently reveal itself not to be human, but many were happy for AI to talk in human-realistic ways. A majority (approximately 60%) felt that AI systems should refrain from expressing emotions, unless they were idiomatic expressions (like "I'm happy to help")."
dr tech

Online scams 'target Apple customers for richer pickings' - BBC News - 0 views

  •  
    "Cybercriminals are targeting people using Apple products as they are more likely to have disposable income, a security expert has warned. Blogger Graham Cluley said that while malware was more common on Windows, Apple customers could not "afford to be lackadaisical" about security. On Monday, he reported a text message scam that tried to trick people into handing over account information. Apple's support site warns customers not to enter details on spoof sites."
dr tech

AI will create 'useless class' of human, predicts bestselling historian | Technology | ... - 0 views

  •  
    "AIs do not need more intelligence than humans to transform the job market. They need only enough to do the task well. And that is not far off, Harari says. "Children alive today will face the consequences. Most of what people learn in school or in college will probably be irrelevant by the time they are 40 or 50. If they want to continue to have a job, and to understand the world, and be relevant to what is happening, people will have to reinvent themselves again and again, and faster and faster.""