
Digit_al Society / Group items tagged modeling


dr tech

The future, soon: what I learned from Bing's AI - 0 views

  •  
    "I have been working with generative AI and, even though I have been warning that these tools are improving rapidly, I did not expect them to really be improving that rapidly. On every dimension, Bing's AI, which does not actually represent a technological leap over ChatGPT, far outpaces the earlier AI - which is less than three months old! There are many larger, more capable models on their way in the coming months, and we are not really ready."
mrrottenapple

"Anonymous" Data Won't Protect Your Identity - Scientific American - 2 views

  •  
    This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man's name, but could likely do so quite easily if he or she also knows the target's birthday, number of children, zip code, employer and car model.
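
A minimal sketch of the re-identification mechanism the excerpt describes, in Python with pandas; the table, column names and records are invented for illustration, not taken from the article. Each extra quasi-identifier added to the filter shrinks the set of matching rows until a single individual remains.

```python
# Hypothetical illustration of re-identification by quasi-identifiers.
# All records and attributes are invented; the point is that each extra
# attribute shrinks the set of matching rows.
import pandas as pd

# An "anonymised" table: no names, but several quasi-identifiers per row.
df = pd.DataFrame([
    {"sex": "M", "age": 32, "condition": "diabetes", "zip": "10027", "children": 2, "car": "Civic"},
    {"sex": "M", "age": 33, "condition": "diabetes", "zip": "10027", "children": 0, "car": "Corolla"},
    {"sex": "M", "age": 31, "condition": "diabetes", "zip": "11201", "children": 2, "car": "Civic"},
    {"sex": "F", "age": 34, "condition": "asthma",   "zip": "10027", "children": 1, "car": "Model 3"},
])

# Start with the coarse attributes: male, 30-35, diabetic.
candidates = df[(df.sex == "M") & df.age.between(30, 35) & (df.condition == "diabetes")]
print(len(candidates), "candidates with coarse attributes")   # still ambiguous

# Add the extra details a determined searcher might know.
candidates = candidates[(candidates.zip == "10027") & (candidates.children == 2) & (candidates.car == "Civic")]
print(len(candidates), "candidate left after adding zip, children and car")  # down to one row
```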
dr tech

Tall tales - 0 views

  •  
    "Super-charged misinformation and the atrophy of human intelligence. By regurgitating information that is already on the internet, generative models cannot decide what is a good thing to tell a human and will repeat past mistakes made by humans, of which there are plenty."
dr tech

Pause Giant AI Experiments: An Open Letter - Future of Life Institute - 0 views

  •  
    "Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."
dr tech

Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can - and c... - 0 views

  •  
    "The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: "Given the start of a sentence, it will try to guess the most likely words to come next.""
dr tech

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says - 0 views

  •  
    "A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app's chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting. "
dr tech

The AI future for lesson plans is already here | EduResearch Matters - 0 views

  •  
    "What do today's AI-generated lesson plans look like? AI-generated lesson plans are already better than many people realise. Here's an example generated through the GPT-3 deep learning language model:"
dr tech

ChatGPT, artificial intelligence, and the future of education - Vox - 0 views

  •  
    "The technology certainly has its flaws. While the system is theoretically designed not to cross some moral red lines - it's adamant that Hitler was bad - it's not difficult to trick the AI into sharing advice on how to engage in all sorts of evil and nefarious activities, particularly if you tell the chatbot that it's writing fiction. The system, like other AI models, can also say biased and offensive things. As my colleague Sigal Samuel has explained, an earlier version of GPT generated extremely Islamophobic content, and also produced some pretty concerning talking points about the treatment of Uyghur Muslims in China."
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool like everything in AI, has a high probability of detecting GPT output, but not 100% as attributed by George E. P. Box "All models are wrong, but some are useful"."
dr tech

Streaming sites urged not to let AI use music to clone pop stars | Music industry | The... - 0 views

  •  
    "The music industry is urging streaming platforms not to let artificial intelligence use copyrighted songs for training, in the latest of a run of arguments over intellectual property that threaten to derail the generative AI sector's explosive growth. In a letter to streamers including Spotify and Apple Music, the record label Universal Music Group expressed fears that AI labs would scrape millions of tracks to use as training data for their models and copycat versions of pop stars."
dr tech

AI Is Coming for Voice Actors. Artists Everywhere Should Take Note | The Walrus - 0 views

  •  
    "All of this probably means I should be worried about recent trends in artificial intelligence, which is encroaching on voice-over work in a manner similar to how it threatens the labour of visual artists and writers-both financially and ethically. The creep is only just beginning, with dubbing companies training software to replace human actors and tech companies introducing digital audiobook narration. But AI poses a threat to work opportunities across the board by giving producers the tools to recreate their favourite voices on demand, without the performer's knowledge or consent and without additional compensation. It's clear that AI will transform the arts sector, and the voice-over industry offers an early, unsettling model for what this future may look like."
dr tech

'I didn't give permission': Do AI's backers care about data law breaches? | Artificial ... - 0 views

  •  
    "Wooldridge says copyright is a "coming storm" for AI companies. LLMs are likely to have accessed copyrighted material, such as news articles. Indeed the GPT-4-assisted chatbot attached to Microsoft's Bing search engine cites news sites in its answers. "I didn't give explicit permission for my works to be used as training data, but they almost certainly were, and now they contribute to what these models know," he says. "Many artists are gravely concerned that their livelihoods are at risk from generative AI. Expect to see legal battles," he adds."
dr tech

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific... - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
dr tech

AI now surpasses humans in almost all performance benchmarks - 0 views

  •  
    "The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, 'struggled' here might be misleading; it certainly doesn't mean AI did badly. Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%. "
dr tech

Nvidia: what's so good about the tech firm's new AI superchip? | Technology sector | Th... - 0 views

  •  
    "Training a massive AI model, the size of GPT-4, would currently take about 8,000 H100 chips, and 15 megawatts of power, Nvidia said - enough to power about 30,000 typical British homes."
dr tech

Seized ransomware network LockBit rewired to expose hackers to world | Cybercrime | The... - 0 views

  •  
    "The organisation is a pioneer of the "ransomware as a service" model, whereby it outsources the target selection and attacks to a network of semi-independent "affiliates", providing them with the tools and infrastructure and taking a commission on the ransoms in return. As well as ransomware, which typically works by encrypting data on infected machines and demanding a payment for providing the decryption key, LockBit copied stolen data and threatened to publish it if the fee was not paid, promising to delete the copies on receipt of a ransom."
dr tech

In Theory of Mind Tests, AI Beats Humans - IEEE Spectrum - 0 views

  •  
    "AI Outperforms Humans in Theory of Mind Tests Large language models convincingly mimic the understanding of mental states"
dr tech

The future is … sending AI avatars to meetings for us, says Zoom boss | Artif... - 0 views

  •  
    "… "five or six years" away, Eric Yuan told The Verge magazine, but he added that the company was working on nearer-term technologies that could bring it closer to reality. "Let's assume, fast-forward five or six years, that AI is ready," Yuan said. "AI probably can help for maybe 90% of the work, but in terms of real-time interaction, today, you and I are talking online. So, I can send my digital version, you can send your digital version." Using AI avatars in this way could free up time for less career-focused choices, Yuan, who also founded Zoom, added. "You and I can have more time to have more in-person interactions, but maybe not for work. Maybe for something else. Why do we need to work five days a week? Down the road, four days or three days. Why not spend more time with your family?""
  •  
    "Ultimately, he suggests, each user would have their own "large language model" (LLM), the underlying technology of services such as ChatGPT, which would be trained on their own speech and behaviour patterns, to let them generate extremely personalised responses to queries and requests. Such systems could be a natural progression from AI tools that already exist today. Services such as Gmail can summarise and suggest replies to emails based on previous messages, while Microsoft Teams will transcribe and summarise video conferences, automatically generating a to-do list from the contents."
dr tech

Benjamin Riley: AI is Another Ed Tech Promise Destined to Fail - The 74 - 0 views

  •  
    "It's an interesting question. I'm almost not sure how to answer it, because there is no thinking happening on the part of an LLM. A large language model takes the prompts and the text that you give it and tries to come up with something that is responsive and useful in relation to that text. And what's interesting is that certain people - I'm thinking of Mark Andreessen most prominently - have talked about how amazing this is conceptually from an education perspective, because with LLMs you will have this infinitely patient teacher. But that's actually not what you want from a teacher. You want, in some sense, an impatient teacher who's going to push your thinking, who's going to try to understand what you're bringing to any task or educational experience, lift up the strengths that you have, and then work on building your knowledge in areas where you don't yet have it. I don't think LLMs are capable of doing any of that. As you say, there's no real thinking going on. It's just a prediction machine. There's an interaction, I guess, but it's an illusion. Is that the word you would use? Yes. It's the illusion of a conversation. "
dr tech

What 50 Years of Hurricane Data Still Hasn't Told Us - 0 views

  •  
    "Because trends in data only become discernible with time, Masters believes it will be five or 10 years before we have a firm handle on what's going on."