Digit_al Society / Group items tagged: risks

dr tech

Why US elections remain 'dangerously vulnerable' to cyber-attacks | US news | The Guardian - 0 views

  •  
    "Cybersecurity experts have warned for years that malfeasance, technical breakdown or administrative incompetence could easily wreak havoc with electronic systems and could go largely or wholly undetected. This is a concern made much more urgent by Russia's cyber-attacks on political party servers and state voter registration databases in 2016 and by the risk of a repeat - or worse - in this November's midterms. "

Ethics committee raises alarm over 'predictive policing' tool | UK news | The Guardian - 0 views

  •  
    "Amid mounting financial pressure, at least a dozen police forces are using or considering predictive analytics, despite warnings from campaigners that use of algorithms and "predictive policing" models risks locking discrimination into the criminal justice system."

How a global health crisis turns into a state-run surveillance opportunity | John Naugh... - 0 views

  •  
    "But an analysis of the app's code found that it does more than decide in real time whether someone poses a contagion risk; it also shares information with the police, setting "a template for new forms of automated social control that could persist long after the epidemic subsides"."

Using Big Tech to tackle coronavirus risks swapping one lockdown for another | Adam Smi... - 0 views

  •  
    "At a time when science fiction and reality feel as if they are collapsing in on each other, we must resist the temptations of Big Brother; we would just be trading one kind of lockdown for another."

Algorithms Identify People with Suicidal Thoughts - IEEE Spectrum - 0 views

  •  
    "Brain scans, however, are quite telling, especially when analyzed with an algorithm, Brent and his colleagues discovered. "We're trying to figure out what's going on in somebody's brain when they're thinking about suicide," says Brent. These scans, taken using fMRI, or functional magnetic resonance imaging, show that strong words such as 'death,' 'trouble,' 'carefree,' and 'praise,' trigger different patterns of brain activity in people who are suicidal, compared with people who are not. That means that people at risk of suicide think about those concepts differently than everyone else - evidenced by the levels and patterns of brain activity, or neural signatures."

Nearly four in 10 university students addicted to smartphones, study finds | Health | T... - 0 views

  •  
    "More than two-thirds (68.7%) of the addicts had trouble sleeping, compared with 57.1% of those who were not addicted to their device. Students who used their phone after midnight or for four or more hours a day were most likely to be at high risk of displaying addictive use of their device."

'Missing from desk': AI webcam raises remote surveillance concerns | Working from home ... - 0 views

  •  
    "Explained by "Anna", a desk-sitting avatar complete with an artificial voice, the video introduces TP Observer as "a risk-mitigation tool that monitors and tracks real time employee behaviour, and detects any violations to pre-set business rules". Anna explains that this means home workers will have an AI-enabled webcam added to their computers that recognises their face, tags their location and scans for "breaches" of rules at random points during a shift."

Ban Eproctoring - 0 views

  •  
    "This is an abuse of the concept of consent and risks desensitizing people to surveillance. Eproctoring also treats students as if they are guilty until proven innocent, which is a concerning and disrespectful stance for any academic institution to take." What do you think?

This 'robot lawyer' can take the mystery out of license agreements - The Verge - 0 views

  •  
    "These ranged from the mundane (Facebook may change its terms of service at any time) to a reminder that Facebook may store and process your data anywhere in the world, meaning it might be subject to different data protection laws. When scanning license agreements from Google, Do Not Sign told me the company reserves the right to stop providing its services at any time and that its services are used at the users' sole risk."

Toolkit | Electronic Frontier Foundation - 0 views

  •  
    "Fighting the creep of government use of face surveillance and the related risks can seem overwhelming. Police agencies, and the spy tech vendors that profit from the growth of a surveillance state, have much to gain by deploying this invasive spying technology. "

Doctored video of sinister Mark Zuckerberg puts Facebook to the test | Technology | The... - 0 views

  •  
    "The piece is meant to be a commentary on the collection and use of private data by tech companies, as Posters explained. "The fact that citizens' data - including intimate knowledge on political leanings, sexuality, psychological traits and personality - are made available to the highest bidder shows that the digital influence industry and its associated architectures pose a risk not only to individual human rights but to our democracies at large.""

Doctors use algorithms that aren't designed to treat all patients equally - 0 views

  •  
    "The battle over algorithms in healthcare has come into full view since last fall. The debate only intensified in the wake of the coronavirus pandemic, which has disproportionately devastated Black and Latino communities. In October, Science published a study that found one hospital unintentionally directed more white patients than Black patients to a high-risk care management program because it used an algorithm to predict the patients' future healthcare costs as a key indicator of personal health. Optum, the company that sells the software product, told Mashable that the hospital used the tool incorrectly. "

Parents Against Facial Recognition - 0 views

  •  
    "To Lawmakers and School Administrators: As parents and caregivers, there is nothing more important to us than our children's safety. That's why we're calling for an outright ban on the use of facial recognition in schools. We're concerned about this technology spreading to our schools, infringing on our kids' rights and putting them in danger. We don't even know the psychological impacts this constant surveillance can have on our children, but we do know that violating their basic rights will create an environment of mistrust and will make it hard for students to succeed and grow. The images collected by this technology will become a target for those wishing to harm our children, and could put them in physical danger or at risk of having their biometric information stolen or sold. The well-known bias built into this technology will put Black and brown children, girls, and gender nonconforming kids in specific danger. Facial recognition creates more harm than good and should not be used on the children we have been entrusted to protect. It should instead be immediately banned."

Amazon's driver monitoring app is an invasive nightmare - 0 views

  •  
    "Mentor is made by eDriving, which describes the app on its website as a "smartphone-based solution that collects and analyzes driver behaviors most predictive of crash risk and helps remediate risky behavior by providing engaging, interactive micro-training modules delivered directly to the driver in the smartphone app." But CNBC talked to drivers who said the app mostly invades their privacy or miscalculates dangerous driving behavior. One driver said even though he didn't answer a ringing phone, the app docked points for using a phone while driving. Another worker was flagged for distracted driving at every delivery stop she made. The incorrect tracking has real consequences, ranging from restricted payouts and bonuses to job loss."

Fears over DNA privacy as 23andMe plans to go public in deal with Richard Branson | Dat... - 0 views

  •  
    "Launched in 2006, 23andMe sells tests to determine consumers' genetic ancestry and risk of developing certain illnesses, using saliva samples sent in by mail. Privacy advocates and researchers have long raised concerns about a for-profit company owning the genetic data of millions of people, fears that have only intensified with news of the partnership."

Drug companies look to AI to end 'hit and miss' research | Pharmaceuticals industry | T... - 0 views

  •  
    "Functional genomics - a new area of science that looks at why small changes in a person's genetic make-up can increase the risk of diseases - deals with huge datasets. Each person has about 30,000 genes, which can be combined with others, as Hal Barron, GSK's chief scientific officer, explains. "You start to realise you're dealing with trillions and trillions of data points, even per experiment, and no human can interpret that, it's just too complicated.""

Singapore deploys Spot robot to patrol parks and remind people to socially distance - T... - 0 views

  •  
    "Using the robot will reduce the need for staff to patrol the grounds, says NParks, and it "lowers the risk of exposure to the virus." According to local newspaper The Straits Times, the board is also considering deploying the robot elsewhere in the city. Signs posted in the park ask visitors not to "disrupt" the robot on its patrols."

The lessons we all must learn from the A-levels algorithm debacle | WIRED UK - 0 views

  •  
    "More algorithmic decision-making and decision-augmenting systems will be used in the coming years. Unlike the approach taken for A-levels, future systems may include opaque AI-led decision making. Despite such risks, there remains no clear picture of how public sector bodies - government, local councils, police forces and more - are using algorithmic systems for decision making."

Encryption Lava Lamps - San Francisco, California - Atlas Obscura - 1 views

  •  
    "As the lava lamps bubble and swirl, a video camera on the ceiling monitors their unpredictable changes and connects the footage to a computer, which converts the randomness into a virtually unhackable code.  Why use lava lamps for encryption instead of computer-generated code? Since computer codes are created by machines with relatively predictable patterns, it is entirely possible for hackers to guess their algorithms, posing a security risk. Lava lamps, on the other hand, add to the equation the sheer randomness of the physical world, making it nearly impossible for hackers to break through."
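The excerpt above describes hashing the camera footage of the lamps into seed material. The exact pipeline isn't published; a minimal sketch of the general idea, assuming a raw pixel buffer as input (the function name and frame source here are illustrative, not Cloudflare's actual code):

```python
import hashlib
import secrets


def seed_from_frame(frame_bytes: bytes) -> int:
    """Condense a camera frame into a 256-bit integer seed.

    A cryptographic hash compresses the physical randomness captured in
    the image (bubbling lamps, sensor noise) into a fixed-size value
    suitable for seeding a random number generator.
    """
    digest = hashlib.sha256(frame_bytes).digest()
    return int.from_bytes(digest, "big")


# Stand-in for a real webcam frame: random bytes here, but in practice
# this would be the pixel buffer captured from the ceiling camera.
frame = secrets.token_bytes(640 * 480 * 3)

seed = seed_from_frame(frame)
print(f"256-bit seed: {seed:064x}")
```

Because the hash output depends on every pixel, even tiny physical changes between frames yield completely different seeds, which is what makes the source hard for an attacker to predict.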

U.K. Found 'Critical' Weakness in Huawei Equipment - Bloomberg - 0 views

  •  
    ""Critical, user-facing vulnerabilities" were found in the Chinese supplier's fixed-broadband products caused by poor code quality and an old operating system, the Huawei Cyber Security Evaluation Centre Oversight Board said in a report. "U.K. operators needed to take extraordinary action to mitigate the risk.""