"A newly released set of slides from the Snowden leaks reveals that the NSA is harvesting millions of facial images from the Web for use in facial recognition algorithms through a program called "Identity Intelligence." James Risen and Laura Poitras's NYT piece shows that the NSA is linking these facial images with other biometrics, identity data, and "behavioral" data including "travel, financial, behaviors, social network." "
"Unskilled manual laborers have felt the pressure of automation for a long time - but, increasingly, they're not alone. The last few years have been a bonanza of advances in artificial intelligence. As our software gets smarter, it can tackle harder problems, which means white-collar and pink-collar workers are at risk as well.
Here are eight jobs expected to be automated (partially or entirely) in the coming decades.
Call Center Employees
Telemarketing used to happen in a crowded call center, with a group of representatives cold-calling hundreds of prospects every day. Of those, maybe a few dozen could be persuaded to buy the product in question. Today, the idea is largely the same, but the methods are far more efficient.
Many of today's telemarketers are not human. In some cases, as you've probably experienced, there's nothing but a recording on the other end of the line. It may prompt you to "press '1' for more information," but nothing you say has any impact on the call - and, usually, that's clear to you.
But in other cases, you may get a sales call and have no idea that you're actually speaking to a computer. Everything you say gets an appropriate response - the voice may even laugh. How is that possible? Well, in some cases, there is a human being on the other side, and they're just pressing buttons on a keyboard to walk you through a pre-recorded but highly interactive marketing pitch. It's a more practical version of those funny soundboards that used to be all the rage for prank calls.
Using soundboard-assisted calling - regardless of what it says about the state of human interaction - has the potential to make individual call center employees far more productive: in some cases, a single worker will run two or even three calls at the same time. In the not too distant future, computers will be able to man the phones by themselves.
At the intersection of big data, artificial intelligence, and advanced
"The message seems to be that if you really want to keep something private, treat it as a secret, and in the age of algorithmic analysis and big data, perhaps best to follow Winston Smith's bitter lesson from Nineteen Eighty-Four: "If you want to keep a secret, you must also hide it from yourself.""
"The really significant thing about AlphaGo is that it (and its creators) cannot explain its moves. And yet it plays a very difficult game expertly. So it's displaying a capability eerily similar to what we call intuition - "knowledge obtained without conscious reasoning". Up to now, we have regarded that as an exclusively human prerogative. It's what Newton was on about when he wrote "Hypotheses non fingo" in the second edition of his Principia: "I don't make hypotheses," he's saying, "I just know.""
"O'Neill recounts an exercise to improve service to homeless families in New York City, in which data-analysis was used to identify risk-factors for long-term homelessness. The problem, O'Neill describes, was that many of the factors in the existing data on homelessness were entangled with things like race (and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York). Using data that reflects racism in the system to train a machine-learning algorithm whose conclusions can't be readily understood runs the risk of embedding that racism in a new set of policies, these ones scrubbed clean of the appearance of bias with the application of objective-seeming mathematics. "
"Tufekci describes how insurgent, democratic movements were early arrivals to the internet, and how clumsy authoritarians' attempts to fight them by shutting the net down only energized their movements. But canny authoritarians mastered the platforms, figuring out how to game their automated algorithms to upvote their messages, and how to game their moderation policies to banish their adversaries."
"The researchers, Tero Karras, Samuli Laine, and Timo Aila, came up with a new way of constructing a generative adversarial network, or GAN.
GANs employ two dueling neural networks to train a computer to learn the nature of a data set well enough to generate convincing fakes. When applied to images, this provides a way to generate often highly realistic fakery. The same Nvidia researchers have previously used the technique to create artificial celebrities (read our profile of the inventor of GANs, Ian Goodfellow)."
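The dueling-network setup described above can be sketched at toy scale: below, a one-parameter "generator" learns to shift random noise until a logistic-regression "discriminator" can no longer tell its samples from real data drawn from N(4, 1). This is an illustrative 1-D sketch with hand-derived gradients, not the Nvidia researchers' image model; every value here is invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data the generator must learn to imitate: N(4, 1).
    return rng.normal(4.0, 1.0, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: x_fake = z + b_g (a learned shift of standard normal noise).
# Discriminator: D(x) = sigmoid(a*x + b), a logistic regression.
b_g, a, b = 0.0, 0.1, 0.0
lr, steps, n = 0.05, 3000, 64

for _ in range(steps):
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = z + b_g

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the non-saturating GAN loss),
    # i.e. nudge b_g toward whatever fools the current discriminator.
    d_fake = sigmoid(a * (z + b_g) + b)
    b_g += lr * np.mean((1 - d_fake) * a)

print(round(b_g, 2))  # the learned shift should land near the real mean, 4
```

At equilibrium the generator's samples match the real distribution and the discriminator is reduced to guessing; that same adversarial dynamic, run at vastly larger scale over images, is what produces the highly realistic fakery described above.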
"Big data and artificial intelligence are some of today's most popular buzzwords. Both are promised to help deliver insights that were previously too complex for computer systems to calculate. With examples ranging from personalised recommendation systems to automatic facial analyses, user-generated data is now analysed by algorithms to identify patterns and predict outcomes. And the common view is that these developments will have a positive impact on society."
"EFF's Legal Director Corynne McSherry offers five lessons to keep in mind:
1. (Lots of) mistakes will be made: copyright takedowns result in the removal of tons of legitimate content.
2. Robots won't help: automated filtering tools like Content ID have been a disaster, and policing copyright with algorithms is a lot easier than policing "bad speech."
3. These systems need to be transparent and have due process. A system that allows for automated instant censorship and slow, manual review of censorship gives a huge advantage to people who want to abuse the system.
4. Punish abuse. The ability to censor other people's speech is no joke. If you're careless or malicious in your takedown requests, you should face a consequence: maybe a fine, maybe being barred from using the takedown system.
5. Voluntary moderation quickly becomes mandatory. Every voluntary effort to stem copyright infringement has been followed by calls to make those efforts mandatory (and expand them)."
"The platform's automated tools may have mistaken the visuals of the burning building for 9/11 footage, according to Vagelis Papalexakis, an assistant professor of computer science and engineering at the University of California, Riverside who studies machine learning used in similar systems."
"His team trained a machine learning algorithm to spot words and phrases associated with bullying on social media site AskFM, which allows users to ask and answer questions. It managed to detect and block almost two-thirds of insults within almost 114,000 posts in English and was more accurate than a simple keyword search. Still, it did struggle with sarcastic remarks."
"So they trained a deep-learning neural net on tons of examples of deepfaked videos, and produced a model that's better than any previous automated technique at spotting hoaxery. (Their paper documenting the work is here.)"
"The Bank of England has previously highlighted the impact of trading algorithms. "Some markets appear to have become more fragile, as evidenced by episodes of short-term volatility and illiquidity over the past couple of years," Threadneedle Street said last December, warning of a move towards "fast, electronic trading."
"
"Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people's emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.
Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer's brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage."
"Then came the algorithm, which used automatic speech recognition to detect specific features and patterns in each of the 48 recordings. It was clear from examining the waveforms of the cries that each category had a specific pattern."
"These systems weren't set up to be biased; it's likely that they were simply trained on a subset of the diversity of accents and usages present in the United States. But, as we become ever more reliant on these systems, making them less frustrating for all their users should be a priority."
"We are talking about vast fields of aggregate data, the scale of which is difficult to comprehend; this data can be parsed by the artificial intelligence recommendation algorithms that Google has pioneered, and that now steer everything from employment application processes to dating apps."
"Raza wasn't the only one in her class who felt concerned about new levels of surveillance. Another student in the class, who did not want to be named, said that in addition to privacy worries, they were concerned that they didn't even have enough RAM to run the Proctorio software. Worse, the tool's facial detection algorithm seemed to struggle to recognize them, so they needed to sit in the full light of the window to better expose the contours of their face, in their view an indication that the system might be biased. "