
GAVNet Collaborative Curation: Group items tagged algorithms


Bill Fulkerson

Anatomy of an AI System - 1 views

shared by Bill Fulkerson on 14 Sep 18
  •  
    "With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user's commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience - be it answering a question, turning on a light, or playing a song - requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives. III The Salar, the world's largest flat surface, is located in southwest Bolivia at an altitude of 3,656 meters above sea level. It is a high plateau, covered by a few meters of salt crust which are exceptionally rich in lithium, containing 50% to 70% of the world's lithium reserves. 4 The Salar, alongside the neighboring Atacama regions in Chile and Argentina, are major sites for lithium extraction. This soft, silvery metal is currently used to power mobile connected devices, as a crucial material used for the production of lithium-Ion batteries. It is known as 'grey gold.' Smartphone batteries, for example, usually have less than eight grams of this material. 5 Each Tesla car needs approximately seven kilograms of lithium for its battery pack. 6 All these batteries have a limited lifespan, and once consumed they are thrown away as waste. Amazon reminds users that they cannot open up and repair their Echo, because this will void the warranty. The Amazon Echo is wall-powered, and also has a mobile battery base. This also has a limited lifespan and then must be thrown away as waste. According to the Ay
Steve Bosserman

Uber has cracked two classic '80s video games by giving an AI algorithm a new type of m... - 0 views

  • AI researchers have typically tried to get around the issues posed by Montezuma’s Revenge and Pitfall! by instructing reinforcement-learning algorithms to explore randomly at times, while adding rewards for exploration—what’s known as “intrinsic motivation.” But the Uber researchers believe this fails to capture an important aspect of human curiosity. “We hypothesize that a major weakness of current intrinsic motivation algorithms is detachment, wherein the algorithms forget about promising areas they have visited, meaning they do not return to them to see if they lead to new states,” they write.
  • The team’s new family of reinforcement-learning algorithms, dubbed Go-Explore, remembers where it has been before, and will return to a particular area or task later on to see if it might help provide better overall results. The researchers also found that adding a little bit of domain knowledge, by having human players highlight interesting or important areas, sped up the algorithms’ learning and progress by a remarkable amount. This is significant because there may be many real-world situations where you would want an algorithm and a person to work together to solve a hard task.
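A minimal Python sketch of the remember-and-return loop described above, using a hypothetical deterministic grid world in place of an Atari game. The environment, scoring, and cell definition are illustrative assumptions rather than Uber's implementation; Go-Explore proper maps game frames to downsampled "cells" and uses far more sophisticated exploration.

    # Toy Go-Explore-style loop (illustrative only): keep an archive of visited cells,
    # return to a promising cell by replaying the action sequence that reached it in a
    # deterministic environment, then explore randomly from there, so promising areas
    # are never "forgotten".
    import random

    GRID = 10                                    # hypothetical grid world; score of a state is x + y
    ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def step(state, action):
        x, y = state
        dx, dy = action
        nx = min(max(x + dx, 0), GRID - 1)
        ny = min(max(y + dy, 0), GRID - 1)
        return (nx, ny), nx + ny                 # next state and its score

    def replay(actions):
        """Return to a cell by deterministically replaying the actions that reached it."""
        state = (0, 0)
        for a in actions:
            state, _ = step(state, a)
        return state

    # archive: cell -> (best score seen, action sequence that reached it)
    archive = {(0, 0): (0, [])}
    for _ in range(2000):
        cell = random.choice(list(archive))      # 1. pick a previously visited cell
        state = replay(archive[cell][1])         # 2. go back to it (no detachment)
        actions = list(archive[cell][1])
        for _ in range(10):                      # 3. explore from it
            a = random.choice(ACTIONS)
            actions.append(a)
            state, score = step(state, a)
            if score > archive.get(state, (-1, []))[0]:
                archive[state] = (score, list(actions))   # 4. remember how to reach new cells

    print("cells discovered:", len(archive),
          "best score:", max(s for s, _ in archive.values()))

The key difference from plain random exploration is step 2: instead of always starting from scratch, the loop keeps returning to frontier cells it has already found.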
Steve Bosserman

Are You Creditworthy? The Algorithm Will Decide. - 0 views

  • The decisions made by algorithmic credit scoring applications are not only said to be more accurate in predicting risk than traditional scoring methods; their champions argue they are also fairer because the algorithm is unswayed by the racial, gender, and socioeconomic biases that have skewed access to credit in the past.
  • Algorithmic credit scores might seem futuristic, but these practices do have roots in credit scoring practices of yore. Early credit agencies, for example, hired human reporters to dig into their customers’ credit histories. The reports were largely compiled from local gossip and colored by the speculations of the predominantly white, male middle class reporters. Remarks about race and class, asides about housekeeping, and speculations about sexual orientation all abounded.
  • By 1935, whole neighborhoods in the U.S. were classified according to their credit characteristics. A map from that year of Greater Atlanta comes color-coded in shades of blue (desirable), yellow (definitely declining) and red (hazardous). The legend recalls a time when an individual’s chances of receiving a mortgage were shaped by their geographic status.
  • These systems are fast becoming the norm. The Chinese Government is now close to launching its own algorithmic “Social Credit System” for its 1.4 billion citizens, a metric that uses online data to rate trustworthiness. As these systems become pervasive, and scores come to stand for individual worth, determining access to finance, services, and basic freedoms, the stakes of one bad decision are that much higher. This is to say nothing of the legitimacy of using such algorithmic proxies in the first place. While it might seem obvious to call for greater transparency in these systems, with machine learning and massive datasets it’s extremely difficult to locate bias. Even if we could peer inside the black box, we probably wouldn’t find a clause in the code instructing the system to discriminate against the poor, or people of color, or even people who play too many video games. More important than understanding how these scores get calculated is giving users meaningful opportunities to dispute and contest adverse decisions that are made about them by the algorithm.
Steve Bosserman

How We Made AI As Racist and Sexist As Humans - 0 views

  • Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.
  • The majority of AI systems used in commercial applications—the ones that mediate our access to services like jobs, credit, and loans— are proprietary, their algorithms and training data kept hidden from public view. That makes it exceptionally difficult for an individual to interrogate the decisions of a machine or to know when an algorithm, trained on historical examples checkered by human bias, is stacked against them. And forget about trying to prove that AI systems may be violating human rights legislation.
  • Data is essential to the operation of an AI system. And the more complicated the system—the more layers in the neural nets, to translate speech or identify faces or calculate the likelihood someone defaults on a loan—the more data must be collected.
  • But not everyone will be equally represented in that data.
  • And sometimes, even when ample data exists, those who build the training sets don’t take deliberate measures to ensure their diversity.
  • The power of the system is its “ability to recognize that correlations occur between gender and professions,” says Kathryn Hume. “The downside is that there’s no intentionality behind the system—it’s just math picking up on correlations. It doesn’t know this is a sensitive issue.” There’s a tension between the futuristic and the archaic at play in this technology. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them.
  • Accordingly, groups that have been the target of systemic discrimination by institutions that include police forces and courts don’t fare any better when judgment is handed over to a machine.
  • A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias (a minimal illustration of one such check follows this list).
  • Still, algorithmic interventions only do so much; addressing bias also demands diversity in the programmers who are training machines in the first place.
  • A growing awareness of algorithmic bias isn’t only a chance to intervene in our approaches to building AI systems. It’s an opportunity to interrogate why the data we’ve created looks like this and what prejudices continue to shape a society that allows these patterns in the data to emerge.
  • Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data.
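As a concrete illustration of the kind of check that research explores, here is a minimal, entirely synthetic Python sketch: a toy "loan approval" model is trained on historically biased labels without ever seeing the sensitive attribute, and a demographic-parity comparison still reveals a gap. All variable names, distributions, and thresholds are invented for illustration; real audits use real outcome data and more careful fairness definitions.

    # Synthetic sketch: bias in historical labels survives even when the sensitive
    # attribute is excluded from the features, because correlated features carry it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)                  # 0/1 sensitive attribute (hypothetical)
    income = rng.normal(50 + 10 * group, 15, n)    # historical inequality leaks into a "neutral" feature
    # past approval decisions were partly driven by group membership itself
    label = ((income + 20 * group + rng.normal(0, 10, n)) > 65).astype(int)

    model = LogisticRegression().fit(income.reshape(-1, 1), label)   # 'group' is NOT a feature
    approved = model.predict(income.reshape(-1, 1))

    rate0 = approved[group == 0].mean()
    rate1 = approved[group == 1].mean()
    print(f"approval rate, group 0: {rate0:.2f}; group 1: {rate1:.2f}")
    print(f"demographic parity difference: {abs(rate1 - rate0):.2f}")  # nonzero despite omitting 'group'

The point matches the excerpts above: inspecting the model reveals no explicit clause about group membership, yet the disparity is measurable at the level of outcomes.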
Steve Bosserman

We Need an FDA For Algorithms: UK mathematician Hannah Fry on the promise and danger of... - 0 views

  • Right now other people are making lots of money on our data. So much money. I think the one that stands out for me is a company called Palantir, founded by Peter Thiel in 2003. It’s actually one of Silicon Valley’s biggest success stories, and is worth more than Twitter. Most people have never heard of it because it’s all operating completely behind the scenes. This company and companies like it have databases that contain every possible thing you can ever imagine, on you, and who you are, and what you’re interested in. It’s got things like your declared sexuality as well as your true sexuality, things like whether you’ve had a miscarriage, whether you’ve had an abortion. Your feelings on guns, whether you’ve used drugs, like, all of these things are being packaged up, inferred, and sold on for huge profit.
  • Do we need to develop a brand-new intuition about how to interact with algorithms? It’s not on us to change that as the users. It’s on the people who are designing the algorithms to make their algorithms fit existing human intuition.
Steve Bosserman

Google and corporate news giants forge new alliance - 0 views

  • The “new media” monopolists of Silicon Valley and the once-dominant traditional print media have clearly agreed that the “fake news” frenzy is a convenient pretext to step up their censorship of the internet through new algorithms, allowing them to boost their profit margins and silence opposition through a new framework of “algorithmic censorship.”
  • Last April, Google clamped down on alternative media with new structural changes to its algorithms — accompanying the change with an announcement tarring alternative media with the broad black brush of “misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories” as opposed to what it called “authoritative content.” As a result, organic search-engine traffic to these sites uniformly plummeted to less than half of what it had previously been, devastating many publishers.
  • This new model overwhelmingly favors those who see information and journalism as an article of commerce alone. It poses a stark threat not only to internet users’ ability to access information, but to the ability of citizens and social movements that hope to interact with, participate in, and wield influence over the political and economic activities that determine our lives and the fate of communities across the world.
Bill Fulkerson

Implementing a quantum approximate optimization algorithm on a 53-qubit NISQ device - 0 views

  •  
    A large team of researchers working with Google Inc. and affiliated with a host of institutions in the U.S., one in Germany and one in the Netherlands has implemented a quantum approximate optimization algorithm (QAOA) on a 53-qubit noisy intermediate-scale quantum (NISQ) device. In their paper published in the journal Nature Physics, the group describes their method of studying the performance of their QAOA on Google's Sycamore superconducting 53-qubit quantum processor and what they learned from it. Boaz Barak with Harvard University has published a News & Views piece on the work done by the team in the same journal issue.
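For readers unfamiliar with QAOA, below is a minimal classical statevector sketch in Python/NumPy of a depth-1 QAOA circuit for MaxCut on a toy 4-node ring graph. The graph, angles, and brute-force grid search are illustrative assumptions only; the Google experiment ran much larger problem instances on the Sycamore hardware with a proper variational optimizer.

    # Depth-1 QAOA for MaxCut, simulated exactly on 4 qubits: apply a diagonal cost
    # phase exp(-i*gamma*C) and a transverse-field mixer exp(-i*beta*X) on each qubit,
    # then compute the expected number of cut edges.
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # toy ring graph; its maximum cut is 4
    n = 4
    dim = 2 ** n

    # C(z) = number of cut edges for each computational basis state z
    cost = np.zeros(dim)
    for idx in range(dim):
        bits = [(idx >> q) & 1 for q in range(n)]
        cost[idx] = sum(bits[i] != bits[j] for i, j in edges)

    def mixer(beta):
        """Full matrix for exp(-i*beta*X) applied to every qubit."""
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        m = np.array([[1.0 + 0j]])
        for _ in range(n):
            m = np.kron(rx, m)
        return m

    def qaoa_expectation(gamma, beta):
        state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)     # uniform superposition |+...+>
        state = np.exp(-1j * gamma * cost) * state                # cost layer (diagonal)
        state = mixer(beta) @ state                               # mixing layer
        return float(np.real(np.sum(np.abs(state) ** 2 * cost)))  # <C>

    # crude grid search over the two angles of the depth-1 circuit
    best = max(((qaoa_expectation(g, b), g, b)
                for g in np.linspace(0, np.pi, 25)
                for b in np.linspace(0, np.pi, 25)), key=lambda t: t[0])
    print(f"best <C> = {best[0]:.3f} (true max cut = 4) at gamma={best[1]:.2f}, beta={best[2]:.2f}")

Deeper circuits with more alternating cost and mixer layers, and the effect of hardware noise, are exactly what the 53-qubit study examines; this sketch only shows the structure of the algorithm.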
Bill Fulkerson

Setting the bar for variational quantum algorithms using high-performance classical sim... - 0 views

  •  
    The IBM Quantum team envisions a future where quantum computers interact frictionlessly with high-performance computing resources, taking over for the specific problems where quantum can offer a computational advantage. Pushing the envelope of classical computing is crucial to this goal, especially as we develop new quantum algorithms and try to understand which problems are worth tackling with a quantum computer.
Steve Bosserman

There is no difference between computer art and human art | Aeon Ideas - 0 views

  • In industry, there is blunt-force algorithmic tension – ‘Efficiency, capitalism, commerce!’ versus ‘Robots are stealing our jobs!’ But for algorithmic art, the tension is subtler. Only 4 per cent of the work done in the United States economy requires ‘creativity at a median human level’, according to the consulting firm McKinsey and Company. So for computer art – which tries explicitly to zoom into this small piece of that vocational pie – it’s a question not of efficiency or equity, but of trust. Art requires emotional and phrenic investments, with the promised return of a shared slice of the human experience. When we view computer art, the pestering, creepy worry is: who’s on the other end of the line? Is it human? We might, then, worry that it’s not art at all.
  • But the honest-to-God truth, at the end of all of this, is that this whole notion is in some way a put-on: a distinction without a difference. ‘Computer art’ doesn’t really exist in an any more provocative sense than ‘paint art’ or ‘piano art’ does. The algorithmic software was written by a human, after all, using theories thought up by a human, using a computer built by a human, using specs written by a human, using materials gathered by a human, at a company staffed by humans, using tools built by a human, and so on. Computer art is human art – a subset rather than a distinction. It’s safe to release the tension.
Steve Bosserman

Believing without evidence is always morally wrong - Francisco Mejia Uribe | Aeon Ideas - 0 views

  • Today, we truly have a global reservoir of belief into which all of our commitments are being painstakingly added: it’s called Big Data. You don’t even need to be an active netizen posting on Twitter or ranting on Facebook: more and more of what we do in the real world is being recorded and digitised, and from there algorithms can easily infer what we believe before we even express a view. In turn, this enormous pool of stored belief is used by algorithms to make decisions for and about us. And it’s the same reservoir that search engines tap into when we seek answers to our questions and acquire new beliefs. Add the wrong ingredients into the Big Data recipe, and what you’ll get is a potentially toxic output. If there was ever a time when critical thinking was a moral imperative, and credulity a calamitous sin, it is now.