
Digit_al Society / Group items matching "output" in title, tags, annotations or url

dr tech

ChatGPT is bullshit | Ethics and Information Technology - 0 views

  •  
    "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
dr tech

16 Musings on AI's Impact on the Labor Market - 0 views

  •  
    "In the short term, generative AI will replace a lot of people because productivity increases while demand stays the same due to inertia.
    In the long term, the creation of new jobs compensates for the loss of old ones, resulting in a net positive outcome for humans who leave behind jobs no one wants to do.
    The most important aspect of any technological revolution is the transition from before to after. Timing and location matter: older people have a harder time reinventing themselves into a new trade or craft. Poor people and poor countries have less margin to react to a wave of unemployment.
    Digital automation is quicker and more aggressive than physical automation because it bypasses logistical constraints: while ChatGPT can be infinitely cloned, a metallic robot cannot.
    Writing and painting won't die because people care about the human factor first and foremost; there are already more books than we could possibly read in one lifetime, so we select them as a function of who the author is.
    Even if you hate OpenAI and ChatGPT for being responsible for the lack of job postings, I recommend you ally with them for now; learn to use ChatGPT before it's too late, to keep your options open.
    Companies are choosing to reduce costs over increasing output because the sectors where generative AI is useful can't artificially increase demand in parallel with productivity. (Who needs more online content?)
    Our generation is reasonably angry at generative AI and will bravely fight it. Still, our offspring, and theirs, will be grateful for a transformed world whose painful transformation they didn't have to endure.
    Certifiable human-made creative output will shrink in quantity but multiply in value in the next years because demand specific to it will grow; automation can mimic 99% of what we do but never reaches 100%.
    The maxim "AI won't take your job, a person using AI will; yes, you using AI will replace yourself not using it" applies more in the long term than the
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool, like everything in AI, has a high probability of detecting GPT output, but not 100%; as George E. P. Box put it, "All models are wrong, but some are useful"."
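Detectors of this kind generally score how statistically predictable a text is under a language model, since machine-generated text tends to be more predictable than human prose. A minimal sketch of that idea, using a toy bigram model in place of GPT itself; the corpus and function names below are purely illustrative:

```python
import math
from collections import Counter

def bigram_model(corpus):
    """Train a toy bigram model with add-one smoothing; returns a scorer."""
    words = corpus.split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    vocab = len(unigrams)

    def logprob(text):
        toks = text.split()
        lp = 0.0
        for a, b in zip(toks, toks[1:]):
            lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        return lp / max(len(toks) - 1, 1)  # average log-probability per bigram

    return logprob

# Text that is "too predictable" under the model scores higher; real detectors
# apply the same logic using an LLM's own token probabilities.
corpus = "the cat sat on the mat the cat sat on the rug"
score = bigram_model(corpus)
predictable = score("the cat sat on the mat")
surprising = score("mat the on sat cat the")
print(predictable > surprising)
```

The 99.61% figure quoted above would come from calibrating such a score into a probability, which is exactly where the "all models are wrong" caveat bites.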
dr tech

AI learns to write its own code by stealing from other programs | New Scientist - 0 views

  •  
    "DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software - just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall. "It could allow non-coders to simply describe an idea for a program and let the system build it""
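A miniature, hypothetical version of synthesis from input/output examples can make the quoted description concrete: enumerate short pipelines of primitive code fragments and return one consistent with every example. DeepCoder additionally trains a neural network to predict which primitives to try first; the primitive set and names below are invented for illustration:

```python
from itertools import product

# Tiny primitive "code fragments" (hypothetical stand-ins for a DSL).
PRIMITIVES = {
    "double":   lambda xs: [x * 2 for x in xs],
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(examples, max_len=3):
    """Return the first primitive pipeline consistent with all I/O examples."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(xs, names=names):
                for n in names:
                    xs = PRIMITIVES[n](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None  # no pipeline of length <= max_len fits the examples

# Desired behaviour: sort descending, then double each value.
examples = [([3, 1, 2], [6, 4, 2]), ([4, 2], [8, 4])]
print(synthesize(examples))
```

Brute-force search like this explodes combinatorially, which is why DeepCoder's learned ranking of primitives matters.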
dr tech

Study: File Sharing Leads To More, Not Fewer, Musical Hits Being Written | Techdirt - 0 views

  •  
    "This study therefore concludes that file sharing has not reduced the creation of new original music. It may have led to fewer works as a result of fewer new artists entering the field, but it was also associated with an increase in output by those artists who chose, despite the lower returns, to devote their talents to making music. Given that file sharing undeniably promotes the broad dissemination of existing works, this conclusion suggests that file sharing is both fully consonant with copyright's constitutionally-delimited purposes and welfare enhancing."
dr tech

Magical thinking about machine learning won't bring the reality of AI any closer | John Naughton | Opinion | The Guardian - 0 views

  •  
    " Critics have pointed out that the old computing adage "garbage in, garbage out" also applies to ML. If the data from which a machine "learns" is biased, then the outputs will reflect those biases. And this could become generalised: we may have created a technology that - however good it is at recommending films you might like - may actually morph into a powerful amplifier of social, economic and cultural inequalities."
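The "garbage in, garbage out" point can be made concrete with a toy sketch: a trivial learner fit to skewed historical data reproduces the skew exactly. The dataset, groups, and labels below are invented for illustration:

```python
from collections import Counter

# Hypothetical historical hiring records: the labels encode past biased
# decisions, and group membership correlates with the label by construction.
historical = ([("group_a", "hire")] * 90 + [("group_a", "reject")] * 10
              + [("group_b", "hire")] * 20 + [("group_b", "reject")] * 80)

def majority_rule(data):
    """'Learn' the most common label per group - the crudest possible model."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = majority_rule(historical)
print(model)  # the learned rule faithfully reproduces the historical skew
```

A deep network fit to the same data would encode the same correlation, just less legibly, which is the amplification risk the article describes.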
dr tech

Scientists Increasingly Can't Explain How AI Works - 0 views

  •  
    "There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)-made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains-often seem to mirror not just human intelligence but also human inexplicability."
dr tech

Large, creative AI models will transform lives and labour markets | The Economist - 0 views

  •  
    "Getty points to images produced by Stable Diffusion which contain its copyright watermark, suggesting that Stable Diffusion has ingested and is reproducing copyrighted material without permission (Stability AI has not yet commented publicly on the lawsuit). The same level of evidence is harder to come by when examining ChatGPT's text output, but there is no doubt that it has been trained on copyrighted material. OpenAI will be hoping that its text generation is covered by "fair use", a provision in copyright law that allows limited use of copyrighted material for "transformative" purposes. That idea will probably one day be tested in court."
dr tech

New Tool Reveals How AI Makes Decisions - Scientific American - 0 views

  •  
    "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," said computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany to the German-language newspaper Handelsblatt. That dilemma prompted Kersting-along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha-to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."
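The article gives no implementation detail, but perturbation-based explanation, of which AtMan's attention suppression is a variant, can be sketched generically: suppress one input at a time and measure how much the model's output changes. The toy "model" and token names below are invented:

```python
def perturbation_attribution(score, tokens):
    """Rank input tokens by how much masking each one changes the model's
    output score - a generic stand-in for AtMan-style attention suppression."""
    base = score(tokens)
    effects = {}
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + ["<mask>"] + tokens[i + 1:]
        effects[tok] = base - score(masked)  # large drop => influential token
    return sorted(effects.items(), key=lambda kv: -kv[1])

# Toy "model": scores how strongly the input suggests a cat.
def cat_score(tokens):
    return sum(t in {"cat", "whiskers", "meow"} for t in tokens)

print(perturbation_attribution(cat_score, ["the", "cat", "sat"]))
```

For a real system the score would be the probability the model assigns to its own answer, and the masked token whose removal changes it most is the one "responsible" for recognizing the cat.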
dr tech

Thanks to AI, it's probably time to take your photos off the Internet | Ars Technica - 0 views

  •  
    "In the future, it may be possible to guard against this kind of photo misuse through technical means. For example, future AI image generators might be required by law to embed invisible watermarks into their outputs so that they can be read later, and people will know they're fakes. But people will need to be able to read the watermarks easily (and be educated on how they work) for that to have any effect. Even so, will it matter if an embarrassing fake photo of a kid shared with an entire school has an invisible watermark? The damage will have already been done."
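As a rough illustration of the invisible-watermark idea, here is a least-significant-bit sketch on a toy grayscale "image"; real generator watermarks use far more robust schemes (e.g. frequency-domain embedding that survives resizing), and everything below is illustrative:

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, then set it to `bit`
    return out

def read_watermark(pixels, n):
    """Recover the first n watermark bits."""
    return [p & 1 for p in pixels[:n]]

image = [200, 13, 77, 154, 90, 31, 66, 240]  # toy grayscale pixel values
mark = [1, 0, 1, 1]
stamped = embed_watermark(image, mark)
assert read_watermark(stamped, 4) == mark
# Each pixel changes by at most 1, so the mark is imperceptible to the eye.
print(all(abs(a - b) <= 1 for a, b in zip(image, stamped)))
```

The article's caveat applies directly: a mark like this proves provenance only to someone who knows to read it, and does nothing to undo harm already done.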
dr tech

I Taught for Most of My Career. I Quit Because of ChatGPT | TIME - 0 views

  •  
    "In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. "It makes my writing look fancy," one PhD student protested when I pointed to weaknesses in AI-revised text. My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of "duplicative language" are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing."
dr tech

Your phone buzzes with a news alert. But what if AI wrote it - and it's not true? | Archie Bland | The Guardian - 0 views

  •  
    "Some might scoff at this, and point out that news organisations make their own mistakes all the time - more consequential than my physicist/physician howler, if less humiliating. But cases of bad journalism are almost always warped representations of the real world, rather than missives from an imaginary one. Crucially, if an outlet gets big things wrong a lot, its reputation will suffer, and its audience are likely to vote with their feet, or other people will publish stories that air the mistake. And all of it will be out in the open. You may also note that journalists are increasingly likely to use AI in the production of stories - and there is no doubt that it is a phenomenally powerful tool, allowing investigative reporters to find patterns in vast financial datasets that reveal corruption, or analyse satellite imagery for evidence of bombing attacks in areas designated safe for civilians. There is a legitimate debate over the extent of disclosure required in such cases: on the one hand, if the inputs and outputs are being properly vetted, it might be a bit like flagging the use of Excel; on the other, AI is still new enough that readers may expect you to err on the side of caution. Still, the fundamental difference is not in what you're telling your audience, but what degree of supervision you're exercising over the machine."
BOB SAGET

Couple who took £61,000 from faulty ATM sentenced | UK news | The Guardian - 0 views

  • faulty cash machine
    • dr tech
       
      So can you explain how that machine works - input process output and storage? Is it an expert system?
  • the fault arose owing to the machine being very old
    • dr tech
       
      What issues would this be?
    • BOB SAGET
       
      RELIABILITY>> DUH
dr tech

Brain-computer interface successfully translates thought into synthesized speech / Boing Boing - 0 views

  •  
    "The listeners accurately heard the sentences 43 percent of the time when given a set of 25 possible words to choose from, and 21 percent of the time when given 50 words, the study found."
cr7_cristiano

For all the hype in 2023, we still don't know what AI's long-term impact will be | John Naughton | The Guardian - 0 views

  • huge public corporations launch products that are known to “hallucinate”
  • And that the tech can do all of the other tricks that are entrancing millions of people – who are, by the way, mostly using it for free
  • We always overestimate the short-term impacts of novel technologies while grossly underestimating their long-term effects
  • If this machine-learning technology is as transformative as some people are claiming, its long-term impact may be just as profound as print has been.
  • (Remember that much of the output of current AI is kept relatively sanitised by the unacknowledged labour of poorly paid people in poor countries.
  • Microsoft plans to buy 150,000 Nvidia chips – at $30,000 (£24,000) a pop.
  • “are not ready to deploy generative artificial intelligence at scale because they lack strong data infrastructure or the controls needed to make sure the technology is used safely.”
1 - 15 of 15