
Digit_al Society / Group items tagged determinism


dr tech

Cracking apps: are crimefighters going too far to bring down cartels? | Organised crime... - 0 views

  •  
    "The Italian supreme court ordered prosecutors last month to disclose how the Sky ECC data had been retrieved, arguing that it was impossible to have a fair trial if the accused is unable to access the evidence or assess its reliability and legality, a position supposed by the NGO Fair Trials. Whether prosecutors choose to do so could determine whether the arrests made this week lead to convictions or not. Prosecutors in the UK face a similar dilemma in relation to the hacking of EncroChat, another secret messaging platform that had the added facility of a "panic" button that when pressed would immediately erase the phone's contents."
dr tech

What does the Lensa AI app do with my self-portraits and why has it gone viral? | Artif... - 0 views

  •  
    "Prisma Labs has already gotten into trouble for accidentally generating nude and cartoonishly sexualised images - including those of children - despite a "no nudes" and "adults only" policy. Prisma Lab's CEO and co-founder Andrey Usoltsev told TechCrunch this behaviour only happened if the AI was intentionally provoked to create this type of content - which represents a breach of terms against its use. "If an individual is determined to engage in harmful behavior, any tool would have the potential to become a weapon," he said."
dr tech

Scientists Increasingly Can't Explain How AI Works - 0 views

  •  
    "There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)-made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains-often seem to mirror not just human intelligence but also human inexplicability."
dr tech

Still flattening the curve?: Increased risk of digital authoritarianism after... - 0 views

  •  
    "The main rationale for increasing state surveillance was to tackle the pandemic effectively to save people's lives. Yet, states are not enthusiastic about abandoning these digital tools, even though the pandemic is winding down. Instead, they are determined to preserve their surveillance capacities under the pretext of national security or preparation for future pandemics. In the face of increasing state surveillance, however, we should thoroughly discuss the risk of digital authoritarianism and the possible use of surveillance technologies to violate privacy, silence political opposition, and oppress minorities. For example, South Korea's sophisticated contact tracing technology that involves surveillance camera footage, cell-phone location data, and credit card purchases has disclosed patients' personal information, such as nationality. It raised privacy concerns, particularly for ethnic minorities, and underlined the risk of technology-enabled ethnic mapping and discrimination."
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

OpenAI debates when to release its AI-generated image detector | TechCrunch - 0 views

  •  
    "OpenAI has "discussed and debated quite extensively" when to release a tool that can determine whether an image was made with DALL-E 3, OpenAI's generative AI art model, or not. But the startup isn't close to making a decision anytime soon. That's according to Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, who spoke with TechCrunch in a phone interview this week. She said that, while the classifier tool's accuracy is "really good" - at least by her estimation - it hasn't met OpenAI's threshold for quality."
dr tech

My doctor diagnosed me with ADHD - so how did my phone find out? | Sarah Marsh | The Gu... - 0 views

  •  
    "After I was diagnosed with attention deficit hyperactivity disorder (ADHD) in 2022, I started following Instagram accounts that could help me understand the condition. Reels and memes about being neurodivergent started to fill my feed, along with tips on how to manage ADHD in a relationship and other helpful advice. But within days, something else happened: my phone found out about my diagnosis. All of a sudden, I was being served with ads for apps that claimed they could help me to manage my symptoms. There were quizzes to determine what type of ADHD I had: was I predominantly inattentive or impulsive, one asked. Did I definitely have it? Find out by taking this diagnostic test, another promised."
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool like everything in AI, has a high probability of detecting GPT output, but not 100% as attributed by George E. P. Box "All models are wrong, but some are useful"."
dr tech

'Multiple frames were likely used': the royal photo's telltale signs of editing | Cathe... - 0 views

  •  
    ""Once these technical photographic limitations of the image are determined, we can then zoom in as closely as possible to every edge of the subjects, in order to highlight where detail has been altered, knowing what should be sharp and what shouldn't. "As per the annotations, this reveals sharp transitions of detail, usually from hard edged selections [in the image editing programme Adobe Photoshop], which can be either straight or worked around curved areas of detail. "It's the juddering of straight-line detail that is the biggest telltale sign of multiple frames being composited together. This can be seen extensively around the hair, arms, and especially at the zip midway down the princess's jacket. Seeing repetition of detail in the finer areas also reveals the likely use of the cloning tool in Photoshop."
dr tech

Emergency room doctors used a patient's FitBit to determine how to save his life / Boin... - 0 views

  •  
    "To date, activity trackers have been used medically only to encourage or monitor patient activity, particularly in conjunction with weight loss programs.5, 6 To our knowledge, this is the first report to use the information in an activity tracker-smartphone system to assist in specific medical decisionmaking."
dr tech

Silicon Valley's Secret Philosophers Should Share Their Work | WIRED - 0 views

  •  
    "Marx had a point. Especially when it comes to ethics, philosophy is often better at finding complications and problems than proposing changes. Silicon Valley has been better at changing the world (even if through breaking things) than taking pause to think through the conse­quences."
aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams.8 8. April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring,Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process.9 9. Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?,Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate).10 10. Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically-now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred.1 1. Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints.2 2. Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."