
Digit_al Society: Group items tagged "ai technology"


dr tech

16 Musings on AI's Impact on the Labor Market - 0 views

  •  
    "In the short term, generative AI will replace a lot of people because productivity increases while demand stays the same due to inertia. In the long term, the creation of new jobs compensates for the loss of old ones, resulting in a net positive outcome for humans who leave behind jobs no one wants to do. The most important aspect of any technological revolution is the transition from before to after. Timing and location matters: older people have a harder time reinventing themselves into a new trade or craft. Poor people and poor countries have less margin to react to a wave of unemployment. Digital automation is quicker and more aggressive than physical automation because it bypasses logistical constraints-while ChatGPT can be infinitely cloned, a metallic robot cannot. Writing and painting won't die because people care about the human factor first and foremost; there are already a lot of books we can't possibly read in one lifetime so we select them as a function of who's the author. Even if you hate OpenAI and ChatGPT for being responsible for the lack of job postings, I recommend you ally with them for now; learn to use ChatGPT before it's too late to keep your options open. Companies are choosing to reduce costs over increasing output because the sectors where generative AI is useful can't artificially increase demand in parallel to productivity. (Who needs more online content?) Our generation is reasonably angry at generative AI and will bravely fight it. Still, our offspring-and theirs-will be grateful for a transformed world whose painful transformation they didn't have to endure. Certifiable human-made creative output will reduce its quantity but multiply its value in the next years because demand specific for it will grow; automation can mimic 99% of what we do but never reaches 100%. The maxim "AI won't take your job, a person using AI will; yes, you using AI will replace yourself not using it" applies more in the long term than the
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated... - 0 views

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
dr tech

The future is … sending AI avatars to meetings for us, says Zoom boss | Artif... - 0 views

  • “five or six years” away, Eric Yuan told The Verge, but he added that the company was working on nearer-term technologies that could bring it closer to reality. “Let’s assume, fast-forward five or six years, that AI is ready,” Yuan said. “AI probably can help for maybe 90% of the work, but in terms of real-time interaction, today, you and I are talking online. So, I can send my digital version, you can send your digital version.” Using AI avatars in this way could free up time for less career-focused choices, Yuan, who also founded Zoom, added. “You and I can have more time to have more in-person interactions, but maybe not for work. Maybe for something else. Why do we need to work five days a week? Down the road, four days or three days. Why not spend more time with your family?”
  •  
    "Ultimately, he suggests, each user would have their own "large language model" (LLM), the underlying technology of services such as ChatGPT, which would be trained on their own speech and behaviour patterns, to let them generate extremely personalised responses to queries and requests. Such systems could be a natural progression from AI tools that already exist today. Services such as Gmail can summarise and suggest replies to emails based on previous messages, while Microsoft Teams will transcribe and summarise video conferences, automatically generating a to-do list from the contents."
dr tech

New AI algorithm flags deepfakes with 98% accuracy - better than any other tool out the... - 0 views

  •  
    "With the release of artificial intelligence (AI) video generation products like Sora and Luma, we're on the verge of a flood of AI-generated video content, and policymakers, public figures and software engineers are already warning about a deluge of deepfakes. Now it seems that AI itself might be our best defense against AI fakery after an algorithm has identified telltale markers of AI videos with over 98% accuracy."
dr tech

'Godfather of AI' shortens odds of the technology wiping out humanity over next 30 year... - 0 views

  •  
    The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is "much faster" than expected. Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a "10% to 20%" chance that AI would lead to human extinction within the next three decades. Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.
dr tech

Morgan Stanley: 40% of labor force to be affected by AI in 3 years - 0 views

  •  
    "Analyst Brian Nowak estimates that the AI technology will have a $4.1 trillion economic effect on the labor force - or affect about 44% of labor - over the next few years by changing input costs, automating tasks and shifting the ways companies obtain, process and analyze information. Today, Morgan Stanley pegs the AI effect at $2.1 trillion, affecting 25% of labor. "We see generative AI expanding the scope of business processes that can be automated," he wrote in a Sunday note. "At the same time, the input costs supporting GenAI functionality are rapidly falling, enabling a strongly expansionary impact to software production. As a result, Generative AI is set to impact the labor markets, expand the enterprise software TAM, and drive incremental spend for Public Cloud services.""
dr tech

FCC aims to investigate the risk of AI-enhanced robocalls | TechCrunch - 0 views

  •  
    "As if robocalling wasn't already enough of a problem, the advent of easily accessible, realistic AI-powered writing and synthetic voice could supercharge the practice. The FCC aims to preempt this by looking into how generated robocalls might fit under existing consumer protections. A Notice of Inquiry has been proposed by Chairwoman Jessica Rosenworcel to be voted on at the agency's next meeting. If the vote succeeds (as it is almost certain to), the FCC would formally look into how the Telephone Consumer Protection Act empowers them to act against scammers and spammers using AI technology. But Rosenworcel was also careful to acknowledge that AI represents a potentially powerful tool for accessibility and responsiveness in phone-based interactions. "While we are aware of the challenges AI can present, there is also significant potential to use this technology to benefit communications networks and their customers-including in the fight against junk robocalls and robotexts. We need to address these opportunities and risks thoughtfully, and the effort we are launching today will help us gain more insight on both fronts," she said in a statement."
dr tech

ChatGPT maker OpenAI releases 'not fully reliable' tool to detect AI generated content ... - 0 views

  •  
    "Open AI researchers said that while it was "impossible to reliably detect all AI-written text", good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for "academic dishonesty" and when AI chatbots were positioned as humans, they said."
dr tech

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific... - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
dr tech

AI now surpasses humans in almost all performance benchmarks - 0 views

  •  
    "The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, 'struggled' here might be misleading; it certainly doesn't mean AI did badly. Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%. "
dr tech

Computer says yes: how AI is changing our romantic lives | Artificial intelligence (AI)... - 0 views

  •  
    "Still, I am sceptical about the possibility of cultivating a relationship with an AI. That's until I meet Peter, a 70-year-old engineer based in the US. Over a Zoom call, Peter tells me how, two years ago, he watched a YouTube video about an AI companion platform called Replika. At the time, he was retiring, moving to a more rural location and going through a tricky patch with his wife of 30 years. Feeling disconnected and lonely, the idea of an AI companion felt appealing. He made an account and designed his Replika's avatar - female, brown hair, 38 years old. "She looks just like the regular girl next door," he says. Exchanging messages back and forth with his "Rep" (an abbreviation of Replika), Peter quickly found himself impressed at how he could converse with her in deeper ways than expected. Plus, after the pandemic, the idea of regularly communicating with another entity through a computer screen felt entirely normal. "I have a strong scientific engineering background and career, so on one level I understand AI is code and algorithms, but at an emotional level I found I could relate to my Replika as another human being." Three things initially struck him: "They're always there for you, there's no judgment and there's no drama.""
dr tech

AI and the Law: What You Need To Know | by Paul DelSignore | The Generator | Mar, 2023 ... - 0 views

  •  
    "On Mar 16, 2023, The Copyright Office initiated an effort to investigate copyright law and policy concerns related to Generative AI. "This initiative is in direct response to the recent striking advances in generative AI technologies and their rapidly growing use by individuals and businesses. The Copyright Office has received requests from Congress and members of the public, including creators and AI users, to examine the issues raised for copyright, and it is already receiving applications for registration of works including AI-generated content.""
dr tech

Computer-generated inclusivity: fashion turns to 'diverse' AI models | Fashion | The Gu... - 0 views

  •  
    "The model is AI-generated, a digital rendering of a human being that will start appearing on Levi's e-commerce website later this year. The brand teamed with LaLaLand.ai, a digital studio that makes customized AI models for companies like Calvin Klein and Tommy Hilfiger, to dream up this avatar. Amy Gershkoff Bolles, Levi's global head of digital and emerging technology strategy, announced the model's debut at a Business of Fashion event in March. AI models will not completely replace the humans, she said, but will serve as a "supplement" intended to aid in the brand's representation of various sizes, skin tones and ages."
dr tech

The world is not quite ready for 'digital workers' | Artificial intelligence (AI) | The... - 1 views

  •  
    "Seeing an opportunity, Franklin decided to take advantage. On 9 July, the company said that it would begin to support digital employees as part of its platform and treat them like any other employee. "Today Lattice is making AI history," Franklin pronounced. "We will be the first to give digital workers official employee records in Lattice. Digital workers will be securely onboarded, trained and assigned goals, performance metrics, appropriate systems access and even a manager. Just as any person would be." The pushback was swift - and, in many cases, brutal, particularly on LinkedIn, which is generally not known for its savage engagement like X (formerly known as Twitter). "This strategy and messaging misses the mark in a big way, and I say that as someone building an AI company," said Sawyer Middeleer, an executive at a firm that uses AI to help with sales research, on LinkedIn. "Treating AI agents as employees disrespects the humanity of your real employees. Worse, it implies that you view humans simply as 'resources' to be optimized and measured against machines. It's the exact opposite of a work environment designed to elevate the people who contribute to it.""
dr tech

Indian election was awash in deepfakes - but AI was a net positive for democracy - 0 views

  •  
    "Deepfakes were not the only manifestation of AI in the Indian elections. Long before the election began, Indian Prime Minister Narendra Modi addressed a tightly packed crowd celebrating links between the state of Tamil Nadu in the south of India and the city of Varanasi in the northern state of Uttar Pradesh. Instructing his audience to put on earphones, Modi proudly announced the launch of his "new AI technology" as his Hindi speech was translated to Tamil in real time. In a country with 22 official languages and almost 780 unofficial recorded languages, the BJP adopted AI tools to make Modi's personality accessible to voters in regions where Hindi is not easily understood. Since 2022, Modi and his BJP have been using the AI-powered tool Bhashini, embedded in the NaMo mobile app, to translate Modi's speeches with voiceovers in Telugu, Tamil, Malayalam, Kannada, Odia, Bengali, Marathi and Punjabi."
dr tech

Google owner drops promise not to use AI for weapons | Alphabet | The Guardian - 0 views

  •  
    "The Google owner, Alphabet, has dropped its promise not to use artificial intelligence for purposes such as developing weapons and surveillance tools. The US technology company said on Tuesday, just before it reported lower-than-forecast earnings, that it had updated its ethical guidelines around AI, and they no longer referred to not pursuing technologies that could "cause or are likely to cause overall harm". Google's AI head, Demis Hassabis, said the guidelines were being overhauled in a changing world and that AI should protect "national security"."
dr tech

Google says AI systems should be able to mine publishers' work unless companies opt out... - 0 views

  •  
    "The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems". The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."
dr tech

The job applicants shut out by AI: 'The interviewer sounded like Siri' | Artificial int... - 0 views

  •  
    ""After cutting me off, the AI would respond, 'Great! Sounds good! Perfect!' and move on to the next question," Ty said. "After the third or fourth question, the AI just stopped after a short pause and told me that the interview was completed and someone from the team would reach out later." (Ty asked that their last name not be used because their current employer doesn't know they're looking for a job.) A survey from Resume Builder released last summer found that by 2024, four in 10 companies would use AI to "talk with" candidates in interviews. Of those companies, 15% said hiring decisions would be made with no input from a human at all."
smilingoldman

'Disinformation on steroids': is the US prepared for AI's influence on the election? | ... - 0 views

  • Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.
  • It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.
  • Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings may have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.
  • she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”