Digit_al Society - Group items tagged "wrong"

dr tech

yes, all models are wrong - 0 views

  •  
    "According to Derek & Laura Cabrera, "wicked problems result from the mismatch between how real-world systems work and how we think they work". With systems thinking, there is constant testing and feedback between the real world, in all its complexity, and our mental model of it. This openness to test and look for feedback led Dr. Fisman to change his mind on the airborne spread of the coronavirus."
dr tech

The information warriors fighting 'robot zombie army' of coronavirus sceptics | World n... - 0 views

  •  
    ""It's really easy to lose track on social media," Bowman said. "And most people are not on Twitter, but this stuff percolates on to Facebook, WhatsApp chats, everywhere." The ambition, Ritchie says, is not "for Toby Young to tweet, actually I was wrong. They're in an ideological system where they're not interested in a real debate. It's for the person who hears someone say something bizarre, and thinks, I don't know how to reply to that.""
dr tech

FBI warns of look-alike election sites that could mess with voting - 1 views

  •  
    "Dubbed typosquatting, the idea is simple (if devious): A hacker registers a domain that is close enough to a real site, like yourbanknarne.com, and puts up a clone of yourbankname.com. The unsuspecting victim goes to the wrong site by mistake, and enters their personal banking information. In doing so, they have inadvertently handed the digital keys to their account to a hacker. "
dr tech

Microsoft's robot editor confuses mixed-race Little Mix singers | Technology | The Guar... - 0 views

  •  
    "Microsoft's decision to replace human journalists with robots has backfired, after the tech company's artificial intelligence software illustrated a news story about racism with a photo of the wrong mixed-race member of the band Little Mix."
dr tech

Social Networks Are Becoming a Security Risk [SURVEY] - 1 views

  •  
    Read the facts before it all goes wrong...
dr tech

Is smart tech the new domestic battle ground? | Life and style | The Guardian - 0 views

  •  
    "Joel and Anna have experienced this too, though Joel believes his tech is not inherently misogynistic. "Because I set it up, I know exactly the phrase that needs to be used and Anna doesn't," he explains. "She'll say it slightly wrong, then I say it and to her ear it sounds like I'm saying exactly the same thing in a calmer voice.""
dr tech

What is AI chatbot phenomenon ChatGPT and could it replace humans? | Artificial intelli... - 0 views

  •  
    "ChatGPT can also give entirely wrong answers and present misinformation as fact, writing "plausible-sounding but incorrect or nonsensical answers", the company concedes. OpenAI says that fixing this issue is difficult because there is no source of truth in the data they use to train the model and supervised training can also be misleading "because the ideal answer depends on what the model knows, rather than what the human demonstrator knows"."
dr tech

Microsoft and Google launched AI search too soon | Mashable - 0 views

  •  
    "Google should know better, given that it already had a "hallucination problem" with its featured snippets(Opens in a new tab) at the top of search results back in 2017. The snippets algorithm seemed to particularly enjoy telling lies about U.S. presidents. Again, what could go wrong?"
dr tech

Recognising (and addressing) bias in facial recognition tech - the Gender Shades Audit ... - 0 views

  •  
    "What if facial recognition technology isn't as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams' story in "Facing up to the problems of recognising faces")."
dr tech

The Rise of Human Machines. We create technology to do our jobs… | by Colin H... - 0 views

  •  
    "The more technology helps make us more efficient, the more we are asked to be more efficient. We - our labour, our time, our data - is mined with increasing rapaciousness. Here's my thing with that Keynes essay. Sure, it looks like he was totally wrong about the future. We didn't end up with so much free time that we all went insane. But, then again, we've never actually tested his theory properly. We never just let the machines take over. Clearly, as we're (re)discovering, everyone finds that idea terrifying. I tend to agree. The idea of a completely A.I.-controlled world makes me uneasy. That said, the trend over the last 100 years - and even more since the dawn of this century - doesn't make me feel much better. What seems likelier to me than us all losing our jobs to A.I. is that the way in which we're already being replaced by machines continues is accelerated. That is, that we become ever more tied to the machines, ever more entwined with them. That our lives, bodies, and brains will become ever more machine-like."
dr tech

TechScape: What should social media giants do to protect children? | Technology | The G... - 0 views

  •  
    "In a way, this is a powerful rhetorical move. Insisting that the conversation focus on the details is an insistence that people who dismiss client-side scanning on principle are wrong to do so: if you believe that privacy of private communications is and should be an inviolable right, then Levy and Robinson are effectively arguing that you be cut out of the conversation in favour of more moderate people who are willing to discuss trade-offs."
mrrottenapple

You Have Nothing to Hide? We bet you do. - 1 views

  •  
    Just because you find your own data boring does not mean it is boring to everyone else. Data is worth a lot in the right hands and nothing in the wrong ones. Money is worth the same to everyone; the value of data varies depending on whether the person who has it can merge, match, cluster, and compare it.
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool like everything in AI, has a high probability of detecting GPT output, but not 100% as attributed by George E. P. Box "All models are wrong, but some are useful"."
dr tech

Mother says AI chatbot led her son to kill himself in lawsuit against its maker | Artif... - 0 views

  •  
    "The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death. Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."
dr tech

Should social media have a warning label? - 0 views

  •  
    "Let's return to my favorite analogy for thinking about issues surrounding youth and social media: cars. Cars can be incredibly dangerous! There's a reason we don't let kids drive them until a certain age, and even then, put all sorts of safety measures in place. Now, let's imagine every time you got into a car, you got a warning saying "This car might crash and kill you." This would certainly raise your awareness that cars are dangerous. It would scare you. But would it change your behavior? Now, let's say you added an "action" to the end: "This car might crash and kill you…but putting on your seatbelt right now will reduce the risk of death by 500%."   It's long been known that fear-based public health messaging cannot simply describe a threat-it also needs to recommend an action to be effective. First you learn what could go wrong, then you learn what to do to avoid it.  So, will warning parents that social media use "is associated with significant mental health harms for adolescents" actually change their behavior? Will it lead to them more effectively limiting, monitoring, and/or managing their kids' social media use? "
dr tech

Why Perplexity's Cynical Theft Represents Everything That Could Go Wrong With AI - 0 views

  •  
    "Perplexity then sent this knockoff story to its subscribers via a mobile push notification. It created an AI-generated podcast using the same (Forbes) reporting - without any credit to Forbes, and that became a YouTube video that outranks all Forbes content on this topic within Google search. Perplexity had taken our work, without our permission, and republished it across multiple platforms - web, video, mobile - as though it were itself a media outlet. As we dug, we found a similar rip-off of a second story at Forbes. And other stolen scoops - all the information, negligible citation - from Bloomberg and CNBC."
dr tech

Your phone buzzes with a news alert. But what if AI wrote it - and it's not true? | Arc... - 0 views

  •  
    "Some might scoff at this, and point out that news organisations make their own mistakes all the time - more consequential than my physicist/physician howler, if less humiliating. But cases of bad journalism are almost always warped representations of the real world, rather than missives from an imaginary one. Crucially, if an outlet gets big things wrong a lot, its reputation will suffer, and its audience are likely to vote with their feet, or other people will publish stories that air the mistake. And all of it will be out in the open. You may also note that journalists are increasingly likely to use AI in the production of stories - and there is no doubt that it is a phenomenally powerful tool, allowing investigative reporters to find patterns in vast financial datasets that reveal corruption, or analyse satellite imagery for evidence of bombing attacks in areas designated safe for civilians. There is a legitimate debate over the extent of disclosure required in such cases: on the one hand, if the inputs and outputs are being properly vetted, it might be a bit like flagging the use of Excel; on the other, AI is still new enough that readers may expect you to err on the side of caution. Still, the fundamental difference is not in what you're telling your audience, but what degree of supervision you're exercising over the machine."
dr tech

How AI-assisted coding will change software engineering: hard truths - 0 views

  •  
    "This cycle is particularly painful for non-engineers because they lack the mental models to understand what's actually going wrong. When an experienced developer encounters a bug, they can reason about potential causes and solutions based on years of pattern recognition. Without this background, you're essentially playing whack-a-mole with code you don't fully understand."
Buka Zakaraia

Every step you take: UK underground centre that is spy capital of the world | UK news |... - 0 views

  • Millions of people walk beneath the unblinking gaze of central London's surveillance cameras.
  • Westminster council's CCTV control room, where a click and swivel of a joystick delivers panoramic views of any central London street
  • Using the latest remote technology, the cameras rotate 360 degrees, 365 days a year
  • The Home Office, which funded the creation of the £1.25m facility seven years ago
  • So famed has central London's surveillance network become that figures released yesterday revealed that more than 6,000 officials from 30 countries have come to learn lessons from the centre.
  • Dean Ingledew, the council's director of community protection, said that to safeguard privacy a team of amateur auditors regularly comes to the control room, unannounced, to inspect the tapes
  • Defending the searching gaze of London's cameras, Ingledew said that people who do not look as though they are doing anything wrong will be left alone.