
Digital Society / Group items tagged: humans


dr tech

'AI isn't a threat' - Boris Eldagsen, whose fake photo duped the Sony judges, hits back... - 0 views

  •  
    "And he emphatically doesn't see the process of building an AI image as dehumanised, or even one in which the human is sidelined. "I don't see it as a threat to creativity. For me, it really is setting me free. All the boundaries I had in the past - material boundaries, budgets - no longer matter. And for the first time in history, the older generation has an advantage, because AI is a knowledge accelerator. Two thirds of the prompts are only good if you have knowledge and skills, when you know how photography works, when you know art history. This is something that a 20-year-old can't do.""
dr tech

Surveillance Technology: Everything, Everywhere, All at Once - 0 views

  •  
    "Countries around the world are deploying technologies, like digital IDs, facial recognition systems, GPS devices, and spyware, that are meant to improve governance and reduce crime. But there is little evidence to back these claims, while the technologies introduce a high risk of exclusion, bias, misidentification, and privacy violations. It's important to note that these impacts are not equal. They fall disproportionately on religious, ethnic, and sexual minorities, migrants and refugees, as well as human rights activists and political dissidents."
dr tech

Artificial intelligence - coming to a government near you soon? | Artificial intelligen... - 0 views

  •  
    "How that affects systems of governance has yet to be fully explored, but there are cautions. "Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial," says West. The fairness and equity of algorithms are only as good as the data and programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight.""
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

Can't read a map or add up? Don't worry, we've always let technology do the boring stuf... - 0 views

  •  
    "The economist Oren Cass has a compelling answer for these concerns. He says they suffer from bias: the idea that this technological revolution is somehow unique, when we have lived through many epochs of innovation and upheaval. They also overestimate the pace of change (robots are a long way off from competing with humans in many areas) and assume that new kinds of jobs will not be created in the process."
dr tech

Would you replace 700 employees with AI? - 0 views

  •  
    "In short, Klarna offers shoppers something similar to a store credit card - rather than paying $500 now, you might split it into 12 payments with a micro-loan from Klarna that gets issued within minutes. The e-commerce provider then pays Klarna a fee (usually around 6 percent, higher than what they'd pay for a Visa or Mastercard transaction, but still a good deal if it makes it easier for the customer to buy the product). As you might imagine, Klarna has lots of customers and those customers have a lot of questions. This means they hire lots of customer service representatives. And those customer service reps are the first major, public casualty in the conflict between AI and human jobs."
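The fee arithmetic quoted above can be sketched in a few lines. The 6 percent merchant fee and the 12-installment split are the figures from the excerpt; the ~3 percent card rate used for comparison is a hypothetical, since the excerpt only says card fees are lower:

```python
# Toy sketch of the BNPL economics described in the excerpt.
# 6% fee and 12 installments are quoted figures; the 3% card
# rate is an invented comparison point.

def merchant_fee(price: float, rate: float) -> float:
    """Fee the merchant pays the payment provider on one sale."""
    return price * rate

price = 500.00
installment = price / 12                   # shopper pays ~$41.67/month
klarna_fee = merchant_fee(price, 0.06)     # merchant pays Klarna $30.00
card_fee = merchant_fee(price, 0.03)       # hypothetical card-network fee

print(f"installment:        {installment:.2f}")              # 41.67
print(f"fee to Klarna:      {klarna_fee:.2f}")               # 30.00
print(f"extra cost vs card: {klarna_fee - card_fee:.2f}")    # 15.00
```

On the excerpt's logic, the merchant accepts the higher fee because the installment option converts more browsers into buyers.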
dr tech

It's the End of the Web as We Know It - 0 views

  •  
    "It is too late to stop the emergence of AI. Instead, we need to think about what we want next, how to design and nurture spaces of knowledge creation and communication for a human-centric world. Search engines need to act as publishers instead of usurpers, and recognize the importance of connecting creators and audiences. Google is testing AI-generated content summaries that appear directly in its search results, encouraging users to stay on its page rather than to visit the source. Long term, this will be destructive."
dr tech

The job applicants shut out by AI: 'The interviewer sounded like Siri' | Artificial int... - 0 views

  •  
    ""After cutting me off, the AI would respond, 'Great! Sounds good! Perfect!' and move on to the next question," Ty said. "After the third or fourth question, the AI just stopped after a short pause and told me that the interview was completed and someone from the team would reach out later." (Ty asked that their last name not be used because their current employer doesn't know they're looking for a job.) A survey from Resume Builder released last summer found that by 2024, four in 10 companies would use AI to "talk with" candidates in interviews. Of those companies, 15% said hiring decisions would be made with no input from a human at all."
dr tech

We must start preparing the US workforce for the effects of AI - now | Steven Greenhous... - 0 views

  •  
    "At Amazon, some warehouse and delivery drivers complain that AI-driven bots have fired them without any human intervention whatsoever. At some companies, surveillance apps track how much time workers spend in trips to the bathroom, with some workers protesting that the time limits are too strict."
dr tech

Rethinking AI's impact: MIT CSAIL study reveals economic limits to job automation | MIT... - 0 views

  •  
    "Their findings show that currently, only about 23 percent of wages paid for tasks involving vision are economically viable for AI automation. In other words, it's only economically sensible to replace human labor with AI in about one-fourth of the jobs where vision is a key component of the work. "
dr tech

What's up with ChatGPT's new sexy persona? | Arwa Mahdawi | The Guardian - 0 views

  •  
    "While GPT-4o's flirtatiousness was glossed over by a lot of male-authored articles about the release, Parmy Olson addressed it head-on in a piece for Bloomberg headlined Making ChatGPT 'Sexy' Might Not End Well for Humans. "What are the social and psychological consequences of regularly speaking to a flirty, fun and ultimately agreeable artificial voice on your phone, and then encountering a very different dynamic with men and women in real life?" Olson asks."
dr tech

NYU Stern Center for Business & Human Rights 'We Want You To Be A Proud Boy' How Social ... - 0 views

  •  
    "research consistently shows that social media is exploited to facilitate political intimidation and violence. What's more, certain features of social media platforms make them particularly susceptible to such exploitation, and some of those features can be changed to reduce the danger. "
dr tech

Spam, junk … slop? The latest wave of AI behind the 'zombie internet' | Artif... - 0 views

  •  
    "Your email inbox is full of spam. Your letterbox is full of junk mail. Now, your web browser has its own affliction: slop. "Slop" is what you get when you shove artificial intelligence-generated material up on the web for anyone to view. Unlike a chatbot, the slop isn't interactive, and is rarely intended to actually answer readers' questions or serve their needs. Instead, it functions mostly to create the appearance of human-made content, benefit from advertising revenue and steer search engine attention towards other sites."
dr tech

- 0 views

  •  
    "We're deeply inspired by FPF, from its human, calm moderation model and design to its organic, sustainable growth and advertising model. We're awed by its incredible usefulness for services, connection, and disaster relief. There's a lot here that might be applicable to other local digital spaces. Ultimately, Front Porch Forum exemplifies the potential for social media to foster positive, engaged communities. It's a viable, real life model of a flourishing digital public space in use by hundreds of thousands of Americans. Now it's up to us to make it less of a rare phenomenon."
dr tech

How AI-generated content is upping the workload for Wikipedia editors | TechCrunch - 0 views

  •  
    "In addition to their usual job of grubbing out bad human edits, they're having to spend an increasing amount of their time trying to weed out AI filler. 404 Media has talked to Ilyas Lebleu, an editor at the crowdsourced encyclopedia who was involved in founding the "WikiProject AI Cleanup" project. The group is trying to come up with best practices to detect machine-generated contributions. (And no, before you ask, AI is useless for this.)"
dr tech

Computer says yes: how AI is changing our romantic lives | Artificial intelligence (AI)... - 0 views

  •  
    "Still, I am sceptical about the possibility of cultivating a relationship with an AI. That's until I meet Peter, a 70-year-old engineer based in the US. Over a Zoom call, Peter tells me how, two years ago, he watched a YouTube video about an AI companion platform called Replika. At the time, he was retiring, moving to a more rural location and going through a tricky patch with his wife of 30 years. Feeling disconnected and lonely, the idea of an AI companion felt appealing. He made an account and designed his Replika's avatar - female, brown hair, 38 years old. "She looks just like the regular girl next door," he says. Exchanging messages back and forth with his "Rep" (an abbreviation of Replika), Peter quickly found himself impressed at how he could converse with her in deeper ways than expected. Plus, after the pandemic, the idea of regularly communicating with another entity through a computer screen felt entirely normal. "I have a strong scientific engineering background and career, so on one level I understand AI is code and algorithms, but at an emotional level I found I could relate to my Replika as another human being." Three things initially struck him: "They're always there for you, there's no judgment and there's no drama.""
dr tech

Bluesky lets you choose your algorithm - 0 views

  •  
    "But do these options make Bluesky a more prosocial experience? Prosocial design is a "set of design patterns, features and processes which foster healthy interactions between individuals and which create the conditions for those interactions to thrive by ensuring individuals' safety, wellbeing and dignity," according to the Prosocial Design Network. Giving users control over their feeds is a step in this direction, but it's not a new concept. The Panoptykon Foundation's Safe by Default briefing advocates for human-centric recommender systems that prioritize conscious user choice and empowerment. They propose features like: sliders for content preferences (e.g., informative vs. entertaining content), a "hard stop" button to suppress unwanted content, and prompts for users to define their interests or preferences."
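The proposed controls, a preference slider plus a hard stop, can be sketched as a toy scoring function. This is a hypothetical illustration, not Bluesky's or the briefing's actual design; all field names and values are invented:

```python
# Hypothetical sketch of user-controlled feed ranking: a slider
# weighting informative vs. entertaining posts, plus a hard-stop
# set of suppressed topics. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class FeedPrefs:
    informative_weight: float = 0.5          # slider: 0 = all fun, 1 = all news
    blocked_topics: set = field(default_factory=set)  # the "hard stop"

def score(post: dict, prefs: FeedPrefs):
    if post["topic"] in prefs.blocked_topics:
        return None                          # suppressed entirely, not just downranked
    return (prefs.informative_weight * post["informative"]
            + (1 - prefs.informative_weight) * post["entertaining"])

prefs = FeedPrefs(informative_weight=0.8, blocked_topics={"outrage"})
posts = [
    {"topic": "science", "informative": 0.9, "entertaining": 0.2},
    {"topic": "outrage", "informative": 0.1, "entertaining": 0.9},
]
ranked = [p for p in posts if score(p, prefs) is not None]
```

The design point is that the user, not an engagement metric, supplies the weights, which is what distinguishes this from a conventional recommender.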
dr tech

Will the future of transportation be robotaxis - or your own self-driving car? | Techn... - 0 views

  •  
    "Tenant-screening systems like SafeRent are often used in place of humans as a way to 'avoid engaging' directly with the applicants and pass the blame for a denial to a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs who sued the company. The property management company told Louis the software alone decided to reject her, but the SafeRent report indicated it was the management company that set the threshold for how high someone needed to score to have their application accepted. Louis and the other named plaintiff alleged SafeRent's algorithm disproportionately scored Black and Hispanic renters who use housing vouchers lower than white applicants. SafeRent has settled. In addition to making a $2.3m payment, the company has agreed to stop using a scoring system or make any kind of recommendation when it comes to prospective tenants who used housing vouchers for five years."
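The division of responsibility described above, where the vendor's software produces a score but the landlord sets the cutoff, can be sketched in a few lines. The function, score, and threshold values are all invented for illustration:

```python
# Hypothetical sketch of the screening setup in the excerpt: the
# vendor's algorithm only scores; the human-chosen threshold is
# what actually decides. Scores and thresholds here are invented.

def screening_decision(applicant_score: int, manager_threshold: int) -> str:
    return "accept" if applicant_score >= manager_threshold else "deny"

# Same applicant score, opposite outcomes depending on where the
# management company sets the bar -- the point the attorneys made.
assert screening_decision(620, 600) == "accept"
assert screening_decision(620, 650) == "deny"
```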
dr tech

Gmail warns users to secure accounts after 'malicious' AI hack confirmed - 0 views

  •  
    "Sophisticated scams fueled by artificial intelligence are threatening the security of billions of Gmail users. As AI-powered phone calls mimicking human voices have become incredibly realistic, a new report from Forbes warned that the email service's 2.5 billion users could be targeted by "malicious" actors that are employing AI to dupe customers into handing over credentials."
dr tech

AI tries to cheat at chess when it's losing | Popular Science - 0 views

  •  
    "Despite all the industry hype and genuine advances, generative AI models are still prone to odd, inexplicable, and downright worrisome quirks. There's also a growing body of research suggesting that the overall performance of many large language models (LLMs) may degrade over time. According to recent evidence, the industry's newer reasoning models may already possess the ability to manipulate and circumvent their human programmers' goals. Some AI will even attempt to cheat their way out of losing in games of chess. This poor sportsmanship is documented in a preprint study from Palisade Research, an organization focused on risk assessments of emerging AI systems."
Items 281 - 300 of 414