
Digit_al Society / Group items tagged AI


dr tech

16 Musings on AI's Impact on the Labor Market - 0 views

  •  
    "In the short term, generative AI will replace a lot of people because productivity increases while demand stays the same due to inertia. In the long term, the creation of new jobs compensates for the loss of old ones, resulting in a net positive outcome for humans who leave behind jobs no one wants to do. The most important aspect of any technological revolution is the transition from before to after. Timing and location matters: older people have a harder time reinventing themselves into a new trade or craft. Poor people and poor countries have less margin to react to a wave of unemployment. Digital automation is quicker and more aggressive than physical automation because it bypasses logistical constraints-while ChatGPT can be infinitely cloned, a metallic robot cannot. Writing and painting won't die because people care about the human factor first and foremost; there are already a lot of books we can't possibly read in one lifetime so we select them as a function of who's the author. Even if you hate OpenAI and ChatGPT for being responsible for the lack of job postings, I recommend you ally with them for now; learn to use ChatGPT before it's too late to keep your options open. Companies are choosing to reduce costs over increasing output because the sectors where generative AI is useful can't artificially increase demand in parallel to productivity. (Who needs more online content?) Our generation is reasonably angry at generative AI and will bravely fight it. Still, our offspring-and theirs-will be grateful for a transformed world whose painful transformation they didn't have to endure. Certifiable human-made creative output will reduce its quantity but multiply its value in the next years because demand specific for it will grow; automation can mimic 99% of what we do but never reaches 100%. The maxim "AI won't take your job, a person using AI will; yes, you using AI will replace yourself not using it" applies more in the long term than the
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""
dr tech

Disney's Loki remains silent over reported use of generative AI - The Verge - 0 views

  •  
    "A promotional poster for the second season of Loki on Disney Plus has sparked controversy amongst professional designers following claims that it was at least partially created using generative AI. Illustrator Katria Raden flagged the image on X (formerly Twitter) last week, claiming that the image of the spiraling clock in the background "is giving all the AI telltale signs, like things randomly turning into meaningless squiggles" - a reference to the artifacts sometimes left behind by AI-image generators. The creative community is concerned that AI image generators are being trained on their work without consent and could be used to replace human artists. Disney previously received backlash regarding its use of generative AI in another Marvel series, Secret Invasion, despite the studio insisting that using AI tools didn't reduce roles for real designers on the project."
dr tech

Four Singularities for Research - by Ethan Mollick - 0 views

  •  
    "Recent experiments suggest AI peer reviews tend to be surprisingly good, with 82.4% of scientists finding AI peer reviews more useful than at least some of the human reviews they received from on a paper, and other work suggests AI is reasonably good at spotting errors, though not as good as humans, yet. Regardless of how good AI gets, the scientific publishing system was not made to support AI writers writing to AI reviews for AI opinions for papers later summarized by AI. The system is going to break."
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated... - 0 views

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
dr tech

Who Owns AI-Generated Content? Understanding Ownership, Copyrighting, and How the Law i... - 1 views

  •  
    "Needless to say, AI-generated accidents and AI-generated artworks are viewed differently under the law. As far as art goes, be it a video, an image, a script, a song, or any medium that the AI can work with, the (US) law is pretty straightforward - According to copyright law, only humans can be granted copyrights. If it's created by AI, nobody can claim ownership of it or copyright it."
dr tech

"AI Won't Take Your Job, a Person Using AI Will"-Yes, You Using AI Will Replace You Not... - 0 views

  •  
    "I. Neither AI nor other people will take your job: While fears have shifted from AI taking jobs to people using AI replacing others, the reality is that you will most likely replace your non-AI-using self by adopting AI tools."
dr tech

Big Tech Struggles to Turn AI Hype Into Profits - WSJ - 0 views

  •  
    "Generative artificial-intelligence tools are unproven and expensive to operate, requiring muscular servers with expensive chips that consume lots of power. Microsoft MSFT -0.43%decrease; red down pointing triangle , Google, Adobe and other tech companies investing in AI are experimenting with an array of tactics to make, market and charge for it. Microsoft has lost money on one of its first generative AI products, said a person with knowledge of the figures. It and Google are now launching AI-backed upgrades to their software with higher price tags. Zoom Video Communications ZM 1.79%increase; green up pointing triangle has tried to mitigate costs by sometimes using a simpler AI it developed in-house. Adobe and others are putting caps on monthly usage and charging based on consumption. "A lot of the customers I've talked to are unhappy about the cost that they are seeing for running some of these models," said Adam Selipsky, the chief executive of Amazon.com's cloud division, Amazon Web Services, speaking of the industry broadly. "
dr tech

Stack Overflow lays off over 100 people as the AI coding boom continues - The Verge - 0 views

  •  
    "Word of the layoffs comes over a year after the company made a big hiring push, doubling its size to over 500 people. Stack Overflow did not elaborate on the reasons for the layoff, but its hiring push began near the start of a generative AI boom that has stuffed chatbots into every corner of the tech industry, including coding. That presents clear challenges for a personal coding help forum, as developers get comfortable with AI coding assistance and the very tools that do that are blended into products they use. AI-generated coding answers have also posed problems for the company over the past year. The company issued a temporary ban on users generating answers with the help of an AI chatbot in December last year, but its alleged under-enforcement led to a months-long strike among moderators that was resolved in August; the ban is still in place today. Stack Overflow also announced it would start charging AI companies to train on its site. "
dr tech

New AI algorithm flags deepfakes with 98% accuracy - better than any other tool out the... - 0 views

  •  
    "With the release of artificial intelligence (AI) video generation products like Sora and Luma, we're on the verge of a flood of AI-generated video content, and policymakers, public figures and software engineers are already warning about a deluge of deepfakes. Now it seems that AI itself might be our best defense against AI fakery after an algorithm has identified telltale markers of AI videos with over 98% accuracy."
dr tech

ChatGPT maker OpenAI releases 'not fully reliable' tool to detect AI generated content ... - 0 views

  •  
    "Open AI researchers said that while it was "impossible to reliably detect all AI-written text", good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for "academic dishonesty" and when AI chatbots were positioned as humans, they said."
dr tech

Morgan Stanley: 40% of labor force to be affected by AI in 3 years - 0 views

  •  
    "Analyst Brian Nowak estimates that the AI technology will have a $4.1 trillion economic effect on the labor force - or affect about 44% of labor - over the next few years by changing input costs, automating tasks and shifting the ways companies obtain, process and analyze information. Today, Morgan Stanley pegs the AI effect at $2.1 trillion, affecting 25% of labor. "We see generative AI expanding the scope of business processes that can be automated," he wrote in a Sunday note. "At the same time, the input costs supporting GenAI functionality are rapidly falling, enabling a strongly expansionary impact to software production. As a result, Generative AI is set to impact the labor markets, expand the enterprise software TAM, and drive incremental spend for Public Cloud services.""
dr tech

Say what: AI can diagnose type 2 diabetes in 10 seconds from your voice - 0 views

  •  
    "Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice. Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes. The auditory features that the AI focussed on included slight changes in pitch and intensity, which human ears cannot distinguish. This was then paired with basic health data gathered by the researchers, such as age, sex, height and weight. Researchers believe that the AI model will drastically lower the cost for people with diabetes to be diagnosed."
dr tech

Microsoft unveils 'trustworthy AI' features to fix hallucinations and boost privacy | V... - 0 views

  •  
    "One of the key features introduced is a "Correction" capability in Azure AI Content Safety. This tool aims to address the problem of AI hallucinations - instances where AI models generate false or misleading information. "When we detect there's a mismatch between the grounding context and the response… we give that information back to the AI system," Bird explained. "With that additional information, it's usually able to do better the second try.""
dr tech

Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills - 0 views

  •  
    "The findings from those examples were striking: overall, those who trusted the accuracy of the AI tools found themselves thinking less critically, while those who trusted the tech less used more critical thought when going back over AI outputs. "The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI," the researchers wrote. "Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." This isn't enormously surprising. Something we've observed in many domains, from self-driving vehicles to scrutinizing news articles produced by AI, is that humans quickly go on autopilot when they're supposed to be overseeing an automated system, often allowing mistakes to slip past."
dr tech

FCC aims to investigate the risk of AI-enhanced robocalls | TechCrunch - 0 views

  •  
    "As if robocalling wasn't already enough of a problem, the advent of easily accessible, realistic AI-powered writing and synthetic voice could supercharge the practice. The FCC aims to preempt this by looking into how generated robocalls might fit under existing consumer protections. A Notice of Inquiry has been proposed by Chairwoman Jessica Rosenworcel to be voted on at the agency's next meeting. If the vote succeeds (as it is almost certain to), the FCC would formally look into how the Telephone Consumer Protection Act empowers them to act against scammers and spammers using AI technology. But Rosenworcel was also careful to acknowledge that AI represents a potentially powerful tool for accessibility and responsiveness in phone-based interactions. "While we are aware of the challenges AI can present, there is also significant potential to use this technology to benefit communications networks and their customers-including in the fight against junk robocalls and robotexts. We need to address these opportunities and risks thoughtfully, and the effort we are launching today will help us gain more insight on both fronts," she said in a statement."
dr tech

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific... - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
dr tech

AI now surpasses humans in almost all performance benchmarks - 0 views

  •  
    "The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, 'struggled' here might be misleading; it certainly doesn't mean AI did badly. Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%. "
dr tech

Computer says yes: how AI is changing our romantic lives | Artificial intelligence (AI)... - 0 views

  •  
    "Still, I am sceptical about the possibility of cultivating a relationship with an AI. That's until I meet Peter, a 70-year-old engineer based in the US. Over a Zoom call, Peter tells me how, two years ago, he watched a YouTube video about an AI companion platform called Replika. At the time, he was retiring, moving to a more rural location and going through a tricky patch with his wife of 30 years. Feeling disconnected and lonely, the idea of an AI companion felt appealing. He made an account and designed his Replika's avatar - female, brown hair, 38 years old. "She looks just like the regular girl next door," he says. Exchanging messages back and forth with his "Rep" (an abbreviation of Replika), Peter quickly found himself impressed at how he could converse with her in deeper ways than expected. Plus, after the pandemic, the idea of regularly communicating with another entity through a computer screen felt entirely normal. "I have a strong scientific engineering background and career, so on one level I understand AI is code and algorithms, but at an emotional level I found I could relate to my Replika as another human being." Three things initially struck him: "They're always there for you, there's no judgment and there's no drama.""
dr tech

Disinformation reimagined: how AI could erode democracy in the 2024 US elections | US e... - 0 views

  •  
    "In past months, an AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market. AI audio parodies of US presidents playing video games became a viral trend. AI-generated images that appeared to show Donald Trump fighting off police officers trying to arrest him circulated widely on social media platforms. The Republican National Committee released an entirely AI-generated ad that showed images of various imagined disasters that would take place if Biden were re-elected, while the American Association of Political Consultants warned that video deepfakes present a "threat to democracy"."