ITGSonline: group items tagged "algorithm"

To a man with an algorithm all things look like an advertising opportunity | Arwa Mahda...

  • "This affects all of us every single day. When the algorithms that govern increasingly large parts of our lives have been designed almost exclusively by young bro-grammers with homogeneous experiences and worldviews, those algorithms are going to fail significant sections of society. A heartbreaking example of this is Gillian Brockell's experience of continuing to get targeted by pregnancy-related ads on Facebook after the stillbirth of her son. Brockell, a Washington Post journalist, recently made headlines when she tweeted an open letter to big tech companies, imploring them to think more carefully about how they target parenting ads."

'Creative' AlphaZero leads way for chess computers and, maybe, science | Sean Ingle | S...

  • "Hassabis was a child chess prodigy, who learned the game aged four and was able to beat his dad three weeks later - indeed, when he started playing competitively he was so small he had to bring a pillow with him to reach the board - and became a strong player. Yet in AlphaZero's case there was no human input, other than telling it the rules of each game. "In a matter of a few hours it was superhuman," Hassabis says proudly."

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian

  • "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."

I Tried Predictim AI That Scans for 'Risky' Babysitters

  • "The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""

When Your Boss Is an Algorithm - New York Times Opinion - Medium

  • "The algorithmic manager seems to watch everything you do. Ride-hailing platforms track a variety of personalized statistics, including ride acceptance rates, cancellation rates, hours spent logged in to the app and trips completed. And they display selected statistics to individual drivers as motivating tools, like "You're in the top 10 percent of partners!" Uber uses the accelerometer in drivers' phones along with GPS and gyroscope to give them safe driving reports, tracking their performance in granular detail. One driver posted to a forum that a grade of 210 out of 247 "smooth accelerations" earned a "Great work!" from the boss."

The airline Ryanair uses algorithms to split up families, study indicates - Vox

  • "In a survey of more than 4,200 people conducted by CAA, travelers most frequently cited being split from their party while traveling on Ryanair, but the airline insists that it doesn't employ a family-splitting algorithm. Ryanair says if a person doesn't pay for their seat assignment, they are "randomly" assigned, which may result in them not sitting with their party."

The terrifying, hidden reality of Ridiculously Complicated Algorithms

  • ""Weapons of math destruction" is how the writer Cathy O'Neil describes the nasty and pernicious kinds of algorithms that are not subject to the same challenges that human decision-makers are. Parole algorithms (not Jure's) can bias decisions on the basis of income or (indirectly) ethnicity. Recruitment algorithms can reject candidates on the basis of mistaken identity. In some circumstances, such as policing, they might create feedback loops, sending police into areas with more crime, which causes more crime to be detected."

Are Google search results politically biased? | Jeff Hancock et al | Opinion | The Guar...

  • "This way of thinking about search results is wrong. Recent studies suggest that search engines, rather than providing a neutral way to find information, may actually play a major role in shaping public opinion on political issues and candidates. Some research has even argued that search results can affect the outcomes of close elections. In a study aptly titled In Google We Trust, participants heavily prioritized the first page of search results, and the order of the results on that page, and continued to do so even when researchers reversed the order of the actual results."

Trump idea on regulating Google 'unfathomable' - Channel NewsAsia

  • "There is little evidence to show algorithms by online firms are based on politics, and many conservatives - including Trump himself - have a large social media following. Analysts say it would be dangerous to try to regulate how search engines work to please a government or political faction."

Franken-algorithms: the deadly consequences of unpredictable code | Technology | The Gu...

  • ""In some ways we've lost agency. When programs pass into code and code passes into algorithms and then algorithms start to create new algorithms, it gets farther and farther from human agency. Software is released into a code universe which no one can fully understand.""

Google attempting to redefine truth through its biased algorithm -- Society's Child -- ...

  • "They've moved "authoritative sources" to the top search results. The question we need to ask is: "How does this play out in the Real World?" In the real world it means that the worldview, the political bias, the social preferences, the positions taken in various ideological and scientific controversies - as decided by top Google Executives - have been virtually hard-coded into Google's search algorithms. No longer is Google returning "unbiased and objective results"."

The GPS app that can find anyone anywhere | Technology | The Guardian

  • "The algorithm behind what3words took six months to write. Sheldrick worked on it with two friends he had grown up with: Mohan Ganesalingham, a maths fellow at Trinity College, Cambridge, and Jack Waley-Cohen, a full-time quiz obsessive and question-setter for Only Connect. After the initial mapping was complete, they incorporated an error-correction algorithm, which places similar-sounding combinations a very long way apart."

The Matchmaking Algorithm That Lets Zoos Swipe Right on Animals

  • "The animal matchmaking program isn't just for gorillas, and it takes some things into consideration that probably aren't on Tinder's radar. It scores every animal on a variety of traits (and when we say "every" animal, we mean there's an entry for each flamingo in each American zoo), including social skills, age, experience, family history, and interpersonal relationships. Oh, and genetic diversity. Animals with rare genes are more valuable to breeding programs because their offspring will introduce more genetic diversity into the dating pool."

Chinese schools are testing AI that grades papers almost as well as teachers | VentureBeat

  • "It is also self-improving. The 10-year-old grading software leverages deep learning algorithms to "compare notes" with human teachers' scores, suggestions, and comments. An engineer involved in the project compared its capabilities to those of AlphaGo, the record-breaking AI Go player developed by Google subsidiary DeepMind."

Police trial AI software to help process mobile phone evidence | UK news | The Guardian

  • "Cellebrite, the Israeli-founded and now Japanese-owned company behind some of the software, claims a wider rollout would solve problems over failures to disclose crucial digital evidence that have led to the collapse of a series of rape trials and other prosecutions in the past year. However, the move by police has prompted concerns over privacy and the potential for software to introduce bias into processing of criminal evidence."

"The Biology of Disinformation," a paper by Rushkoff, Pescovitz, and Dunagan / Boing Boing

  • "Already, artificially intelligent software can evolve false political and social constructs highly targeted to sway specific audiences. Users find themselves in highly individualized, algorithmically determined news and information feeds, intentionally designed to: isolate them from conflicting evidence or opinions, create self-reinforcing feedback loops of confirmation, and untether them from fact-based reality. And these are just early days. If memes and disinformation have been weaponized on social media, it is still in the musket stage."

Software 'no more accurate than untrained humans' at judging reoffending risk | US news...

  • "The algorithm, called Compas (Correctional Offender Management Profiling for Alternative Sanctions), is used throughout the US to weigh up whether defendants awaiting trial or sentencing are at too much risk of reoffending to be released on bail. Since being developed in 1998, the tool is reported to have been used to assess more than one million defendants. But a new paper has cast doubt on whether the software's predictions are sufficiently accurate to justify its use in potentially life-changing decisions."