
Home/ Digit_al Society/ Group items matching "determinism" in title, tags, annotations or url

dr tech

'Multiple frames were likely used': the royal photo's telltale signs of editing | Catherine, Princess of Wales | The Guardian - 0 views

  •  
    ""Once these technical photographic limitations of the image are determined, we can then zoom in as closely as possible to every edge of the subjects, in order to highlight where detail has been altered, knowing what should be sharp and what shouldn't. "As per the annotations, this reveals sharp transitions of detail, usually from hard-edged selections [in the image editing programme Adobe Photoshop], which can be either straight or worked around curved areas of detail. "It's the juddering of straight-line detail that is the biggest telltale sign of multiple frames being composited together. This can be seen extensively around the hair, arms, and especially at the zip midway down the princess's jacket. Seeing repetition of detail in the finer areas also reveals the likely use of the cloning tool in Photoshop."
dr tech

My doctor diagnosed me with ADHD - so how did my phone find out? | Sarah Marsh | The Guardian - 0 views

  •  
    "After I was diagnosed with attention deficit hyperactivity disorder (ADHD) in 2022, I started following Instagram accounts that could help me understand the condition. Reels and memes about being neurodivergent started to fill my feed, along with tips on how to manage ADHD in a relationship and other helpful advice. But within days, something else happened: my phone found out about my diagnosis. All of a sudden, I was being served with ads for apps that claimed they could help me to manage my symptoms. There were quizzes to determine what type of ADHD I had: was I predominantly inattentive or impulsive, one asked. Did I definitely have it? Find out by taking this diagnostic test, another promised."
dr tech

OpenAI debates when to release its AI-generated image detector | TechCrunch - 0 views

  •  
    "OpenAI has "discussed and debated quite extensively" when to release a tool that can determine whether an image was made with DALL-E 3, OpenAI's generative AI art model, or not. But the startup isn't close to making a decision anytime soon. That's according to Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, who spoke with TechCrunch in a phone interview this week. She said that, while the classifier tool's accuracy is "really good" - at least by her estimation - it hasn't met OpenAI's threshold for quality."
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Prof Nita Farahany: 'We need a new human right to cognitive liberty' | Neuroscience | The Guardian - 0 views

  •  
    "To start we need a new human right to "cognitive liberty", which would come with an update to other existing human rights to privacy, freedom of thought and self-determination. All told it would protect our freedom of thought and rumination, mental privacy, and self-determination over our brains and mental experiences. It would change the default rules so we have rights around the commodification of our brain data. "
dr tech

How to Detect OpenAI's ChatGPT Output | by Sung Kim | Geek Culture | Dec, 2022 | Medium - 0 views

  •  
    "The tool has determined that there is a 99.61% probability this text was generated using OpenAI GPT. Please note that this tool, like everything in AI, has a high probability of detecting GPT output, but not 100%; as George E. P. Box put it, "All models are wrong, but some are useful"."
dr tech

What does the Lensa AI app do with my self-portraits and why has it gone viral? | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "Prisma Labs has already gotten into trouble for accidentally generating nude and cartoonishly sexualised images - including those of children - despite a "no nudes" and "adults only" policy. Prisma Labs' CEO and co-founder Andrey Usoltsev told TechCrunch this behaviour only happened if the AI was intentionally provoked to create this type of content - which represents a breach of terms against its use. "If an individual is determined to engage in harmful behavior, any tool would have the potential to become a weapon," he said."
dr tech

Cracking apps: are crimefighters going too far to bring down cartels? | Organised crime | The Guardian - 0 views

  •  
    "The Italian supreme court ordered prosecutors last month to disclose how the Sky ECC data had been retrieved, arguing that it was impossible to have a fair trial if the accused is unable to access the evidence or assess its reliability and legality, a position supported by the NGO Fair Trials. Whether prosecutors choose to do so could determine whether the arrests made this week lead to convictions or not. Prosecutors in the UK face a similar dilemma in relation to the hacking of EncroChat, another secret messaging platform that had the added facility of a "panic" button that when pressed would immediately erase the phone's contents."
dr tech

Scientists Increasingly Can't Explain How AI Works - 0 views

  •  
    "There's a similar problem in artificial intelligence: The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNNs), made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains, often seem to mirror not just human intelligence but also human inexplicability."
dr tech

Still flattening the curve?: Increased risk of digital authoritarianism after COVID-19 · Global Voices Advox - 0 views

  •  
    "The main rationale for increasing state surveillance was to tackle the pandemic effectively to save people's lives. Yet, states are not enthusiastic about abandoning these digital tools, even though the pandemic is winding down. Instead, they are determined to preserve their surveillance capacities under the pretext of national security or preparation for future pandemics. In the face of increasing state surveillance, however, we should thoroughly discuss the risk of digital authoritarianism and the possible use of surveillance technologies to violate privacy, silence political opposition, and oppress minorities. For example, South Korea's sophisticated contact tracing technology that involves surveillance camera footage, cell-phone location data, and credit card purchases has disclosed patients' personal information, such as nationality. It raised privacy concerns, particularly for ethnic minorities, and underlined the risk of technology-enabled ethnic mapping and discrimination."
aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendment Institute - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams. [8: April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring, Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).] On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process. [9: Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?, Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.] Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate). [10: Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order to obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically: now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred. [1: Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.] Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints. [2: Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.] And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."
dr tech

Welcome to dystopia: getting fired from your job as an Amazon worker by an app | Jessa Crispin | The Guardian - 0 views

  •  
    "Instead, the robots are here not to replace this lower tier of underpaid and undervalued work. They are here to smugly sit in the middle, monitoring and surveilling us, hiring and firing us. Amazon has recently replaced its middle management and human resources workers with artificial intelligence to determine when a worker has outlived their usefulness and needs to be let go. There is no human to appeal to, no negotiating with a bot. "
dr tech

Fears over DNA privacy as 23andMe plans to go public in deal with Richard Branson | Data protection | The Guardian - 0 views

  •  
    "Launched in 2006, 23andMe sells tests to determine consumers' genetic ancestry and risk of developing certain illnesses, using saliva samples sent in by mail. Privacy advocates and researchers have long raised concerns about a for-profit company owning the genetic data of millions of people, fears that have only intensified with news of the partnership."
dr tech

Facebook movement data could help find new Covid-19 locations, study finds | World news | The Guardian - 0 views

  •  
    "Anonymised Facebook data on people's travels could be used to identify the spread of Covid-19 in locations where health officials are not yet aware of it, a new Australian study has found. Published in the Journal of the Royal Society Interface on Wednesday, University of Melbourne researchers analysed anonymised population mobility data provided by Facebook as part of its Data for Good program to determine whether it could be a useful predictor in determining the spread of Covid outbreaks based on where people were travelling."
dr tech

Profile 1: Chloe - 0 views

  •  
    "Welcome, real human. A troll is a fake social media account, often created to spread misleading information. Each of the following 8 profiles includes a brief selection of posts from a single social media account. You decide if each is an authentic account or a professional troll. After each profile, you'll review the signs that can help you determine if it's a troll or not."
dr tech

The disruption con: why big tech's favourite buzzword is nonsense | Silicon Valley | The Guardian - 0 views

  •  
    "The answers to such questions will determine what regulatory oversight we believe is necessary or desirable, what role we think the government or unions should play in a new industry such as tech, and even how the industry and its titans ought to be discussed."
dr tech

Silicon Valley's Secret Philosophers Should Share Their Work | WIRED - 0 views

  •  
    "Marx had a point. Especially when it comes to ethics, philosophy is often better at finding complications and problems than proposing changes. Silicon Valley has been better at changing the world (even if through breaking things) than taking pause to think through the conse­quences."
dr tech

AI expert calls for end to UK use of 'racially biased' algorithms | Technology | The Guardian - 0 views

  •  
    "On inbuilt bias in algorithms, Sharkey said: "There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation.""
dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters - 0 views

  •  
    "The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
dr tech

Amazon's Clever Machines Are Moving From the Warehouse to Headquarters - 0 views

  •  
    "About two years ago the retail team lost another key task: negotiating with major brands and manufacturers the terms of popular sales on the site called "Lightning Deals." Common during the holidays as well as Mother's Day and Father's Day, they help move lots of inventory in a short period. Now, instead of calling their vendor manager at Amazon, the makers of handbags, smartphone accessories and other products simply logged into an Amazon portal that would determine if Amazon liked the deal being offered and the quantity it was willing to buy. No small talk. No give and take. Thousands of Amazon man hours spent forecasting demand, planning marketing strategies and negotiating deals was now handled by software, a major leap in efficiency."