
Digit_al Society: Group items tagged self-harm


dr tech

Leading adviser quits over Instagram's failure to remove self-harm content | Instagram ...

    "A leading psychologist who advises Meta on suicide prevention and self-harm has quit her role, accusing the tech giant of "turning a blind eye" to harmful content on Instagram, repeatedly ignoring expert advice and prioritising profit over lives. Lotte Rubæk, who has been on Meta's global expert group for more than three years, told the Observer that the tech giant's ongoing failure to remove images of self-harm from its platforms is "triggering" vulnerable young women and girls to further harm themselves and contributing to rising suicide figures."
dr tech

When Algorithms Promote Self-Harm, Who Is Held Responsible? | WIRED

    "WHEN 14-YEAR-OLD MOLLY Russell died in 2017, her cell phone contained graphic images of self-harm, an email roundup of "depression pins you might like," and advice on concealing mental illness from loved ones. Investigators initially ruled the British teen's death a suicide. But almost five years later, a British coroner's court has reversed the findings. Now, they claim that Russell died "from an act of self-harm while suffering from depression and the negative effects of online content"-and the algorithms themselves are on notice."
dr tech

Online safety bill must protect adults from self-harm content, say charities | Internet...

    "The legal but harmful provisions have become a lightning rod for concerns that the bill will result in an overly censorious approach on social media platforms. Tory MPs including David Davis have argued that the legal but harmful provisions in the bill mean tech firms will "inevitably err on the side of censorship" in how they police their platforms, while Truss has said she wants to "make sure free speech is allowed" when the bill comes back."
dr tech

Schools monitoring online bullying with slang translation software | Education | thegua...

    "re than a thousand British schools are monitoring pupils' online communication for bullying and self-harm using software that analyses and translates slang for teachers. The software uses a constantly updated dictionary which includes words that most adults would not understand. These include acronyms such as "gnoc" (get naked on camera) and "dirl" (die in real life) and words such as Bio-Oil, a commercial product which can be used by children who self-harm to reduce the appearance of scarring."
dr tech

Facebook Is Now Using AI to Help Prevent Suicides

    "Facebook has detailed the steps it's taking to get help for people who need it. Which involves using artificial intelligence to "detect posts or live videos where someone might be expressing thoughts of suicide," identifying appropriate first responders, and then employing more people to "review reports of suicide or self harm". The social network has been testing this system in the U.S. for the last month, and "worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts." In some cases the local authorities were notified in order to help."
dr tech

Looking up health symptoms online less harmful than thought, study says | Health | The ...

    "These findings suggest that medical experts and policymakers probably do not need to warn patients away from the internet when it comes to seeking health information and self-diagnosis or triage. It seems that using the internet may well help patients figure out what is wrong."
dr tech

How does Apple technology hold up against NSO spyware? | Apple | The Guardian

    "The disclosure points to a problem security researchers have been warning about for years: that despite its reputation for building what is seen by millions of customers as a secure product, some believe Apple's closed culture and fear of negative press have harmed its ability to provide security for those targeted by governments and criminals. "Apple's self-assured hubris is just unparalleled," said Patrick Wardle, a former NSA employee and founder of the Mac security developer Objective-See. "They basically believe that their way is the best way. And to be fair … the iPhone has had incredible success. "But you talk to any external security researcher, they're probably not going to have a lot of great things to say about Apple. Whereas if you talk to security researchers in dealing with, say, Microsoft, they've said: 'We're gonna put our ego aside, and ultimately realise that the security researchers are reporting vulnerabilities that at the end of the day are benefiting our users, because we're able to patch them.' I don't think Apple has that same mindset.""
dr tech

What does the Lensa AI app do with my self-portraits and why has it gone viral? | Artif...

    "Prisma Labs has already gotten into trouble for accidentally generating nude and cartoonishly sexualised images - including those of children - despite a "no nudes" and "adults only" policy. Prisma Lab's CEO and co-founder Andrey Usoltsev told TechCrunch this behaviour only happened if the AI was intentionally provoked to create this type of content - which represents a breach of terms against its use. "If an individual is determined to engage in harmful behavior, any tool would have the potential to become a weapon," he said."