
GAVNet Collaborative Curation: Group items tagged "human rights"


Bill Fulkerson

Why a 400-Year Program of Modernist Thinking is Exploding | naked capitalism - 0 views

  •  
    " Fearless commentary on finance, economics, politics and power Follow yvessmith on Twitter Feedburner RSS Feed RSS Feed for Comments Subscribe via Email SUBSCRIBE Recent Items Links 3/11/17 - 03/11/2017 - Yves Smith Deutsche Bank Tries to Stay Alive - 03/11/2017 - Yves Smith John Helmer: Australian Government Trips Up Ukrainian Court Claim of MH17 as Terrorism - 03/11/2017 - Yves Smith 2:00PM Water Cooler 3/10/2017 - 03/10/2017 - Lambert Strether Why a 400-Year Program of Modernist Thinking is Exploding - 03/10/2017 - Yves Smith Links 3/10/17 - 03/10/2017 - Yves Smith Why It Will Take a Lot More Than a Smartphone to Get the Sharing Economy Started - 03/10/2017 - Yves Smith CalPERS' General Counsel Railroads Board on Fiduciary Counsel Selection - 03/10/2017 - Yves Smith Another Somalian Famine - 03/10/2017 - Yves Smith Trade now with TradeStation - Highest rated for frequent traders Why a 400-Year Program of Modernist Thinking is Exploding Posted on March 10, 2017 by Yves Smith By Lynn Parramore, Senior Research Analyst at the Institute for New Economic Thinking. Originally published at the Institute for New Economic Thinking website Across the globe, a collective freak-out spanning the whole political system is picking up steam with every new "surprise" election, rush of tormented souls across borders, and tweet from the star of America's great unreality show, Donald Trump. But what exactly is the force that seems to be pushing us towards Armageddon? Is it capitalism gone wild? Globalization? Political corruption? Techno-nightmares? Rajani Kanth, a political economist, social thinker, and poet, goes beyond any of these explanations for the answer. In his view, what's throwing most of us off kilter - whether we think of ourselves as on the left or right, capitalist or socialist -was birthed 400 years ago during the period of the Enlightenment. It's a set of assumptions, a particular way of looking at the world that pushed out previous modes o
Bill Fulkerson

Anatomy of an AI System - 1 views

shared by Bill Fulkerson on 14 Sep 18
  •  
    "With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user's commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience - be it answering a question, turning on a light, or playing a song - requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives. III The Salar, the world's largest flat surface, is located in southwest Bolivia at an altitude of 3,656 meters above sea level. It is a high plateau, covered by a few meters of salt crust which are exceptionally rich in lithium, containing 50% to 70% of the world's lithium reserves. 4 The Salar, alongside the neighboring Atacama regions in Chile and Argentina, are major sites for lithium extraction. This soft, silvery metal is currently used to power mobile connected devices, as a crucial material used for the production of lithium-Ion batteries. It is known as 'grey gold.' Smartphone batteries, for example, usually have less than eight grams of this material. 5 Each Tesla car needs approximately seven kilograms of lithium for its battery pack. 6 All these batteries have a limited lifespan, and once consumed they are thrown away as waste. Amazon reminds users that they cannot open up and repair their Echo, because this will void the warranty. The Amazon Echo is wall-powered, and also has a mobile battery base. This also has a limited lifespan and then must be thrown away as waste. According to the Ay
Bill Fulkerson

When Splitters become Lumpers: Pitfalls of a Long History of Human Rights « L... - 0 views

  •  
    "For a close reader of Moyn's work on human rights the differences between his two works are head-spinning.  Where Last Utopia attacked the very idea of historic continuity in explaining the human rights movement that emerged in the 1970s, Not Enough builds an entire narrative on continuities. The result is an aspirational history for a reformed human rights movement, a history of roads not taken - with respect to equality, in particular, which Moyn elevates to the 'original' position - that can still be reclaimed.  Not Enough lacks the skepticism that Moyn employed so effectively in The Last Utopia to explain how disconnected contemporary human rights was from its claimed antecedents and undermines arguments in both books. In addition, by not heeding his own lessons from Last Utopia, Moyn understates the emergent human rights movement's inability to contest what became neoliberalism. As someone who confronted those issues at the time, it is harder to dismiss the claims of complicity."
Bill Fulkerson

Lessons on human rights from the Lord's Resistance Army | Aeon Essays - 0 views

  •  
    For more than a generation, the idea of human rights has served as a guiding star of the liberal West. Faced with atrocities and injustices around the world, some of the most prominent and powerful institutions and individuals in the West responded by invoking the human rights of the asylees, migrants or persecuted. The force of the underlying idea is one of commonality - namely, that all people are, just like us, human beings, and that fact gives them certain rights we must recognise and protect. In this ideal of an age, humanity is more than an appeal for empathy and kindness; it is a philosophical bedr
Bill Fulkerson

Should I Major in the Humanities? - The Atlantic - 0 views

  •  
    "Right now, the biggest impediment to thinking about the future of the humanities is that, thanks to this entrenched narrative of decline-because we've been crying wolf for so long-we already think we know what's going on. The usual suspects-student debt, postmodern relativism, vanishing jobs-are once again being trotted out. But the data suggest something far more interesting may be at work. The plunge seems not to reflect a sudden decline of interest in the humanities, or any sharp drop in the actual career prospects of humanities majors. Instead, in the wake of the 2008 financial crisis, students seem to have shifted their view of what they should be studying-in a largely misguided effort to enhance their chances on the job market. And something essential is being lost in the process."
Steve Bosserman

How We Made AI As Racist and Sexist As Humans - 0 views

  • Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.
  • The majority of AI systems used in commercial applications—the ones that mediate our access to services like jobs, credit, and loans— are proprietary, their algorithms and training data kept hidden from public view. That makes it exceptionally difficult for an individual to interrogate the decisions of a machine or to know when an algorithm, trained on historical examples checkered by human bias, is stacked against them. And forget about trying to prove that AI systems may be violating human rights legislation.
  • Data is essential to the operation of an AI system. And the more complicated the system—the more layers in the neural nets, to translate speech or identify faces or calculate the likelihood someone defaults on a loan—the more data must be collected.
  • ...8 more annotations...
  • But not everyone will be equally represented in that data.
  • And sometimes, even when ample data exists, those who build the training sets don’t take deliberate measures to ensure its diversity
  • The power of the system is its “ability to recognize that correlations occur between gender and professions,” says Kathryn Hume. “The downside is that there’s no intentionality behind the system—it’s just math picking up on correlations. It doesn’t know this is a sensitive issue.” There’s a tension between the futuristic and the archaic at play in this technology. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them.
  • Accordingly, groups that have been the target of systemic discrimination by institutions that include police forces and courts don’t fare any better when judgment is handed over to a machine.
  • A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias.
  • Still, algorithmic interventions only do so much; addressing bias also demands diversity in the programmers who are training machines in the first place.
  • A growing awareness of algorithmic bias isn’t only a chance to intervene in our approaches to building AI systems. It’s an opportunity to interrogate why the data we’ve created looks like this and what prejudices continue to shape a society that allows these patterns in the data to emerge.
  • Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data.
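The annotations above argue that we should interrogate a model's outputs and the data behind them. One common way to make that concrete is an audit metric such as demographic parity, the gap in positive-outcome rates between groups. A minimal sketch follows; the group names and records are made-up, illustrative data, not drawn from the article:

```python
def demographic_parity_gap(records):
    """Difference in positive-outcome rates between groups.

    records: list of (group, prediction) pairs, prediction is 0 or 1.
    A gap near 0 means the groups receive positive predictions at
    similar rates; a large gap is one warning sign of biased data or
    a biased model (though not proof of discrimination by itself).
    """
    rates = {}
    for group in {g for g, _ in records}:
        preds = [p for g, p in records if g == group]
        rates[group] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval predictions, for illustration only
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]
gap, rates = demographic_parity_gap(records)
print(rates)  # group_a approved 75% of the time, group_b only 25%
print(gap)    # → 0.5
```

A check like this is exactly what the article says is hard to run from the outside: it requires access to the predictions, which proprietary systems keep hidden.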
Bill Fulkerson

It's not all Pepes and trollfaces - memes can be a force for good - The Verge - 0 views

  •  
    "How the 'emotional contagion' of memes makes them the internet's moral conscience By Allie Volpe Aug 27, 2018, 11:30am EDT Illustration by Alex Castro & Keegan Larwin SHARE Newly single, Jason Donahoe was perusing Tinder for the first time since it started integrating users' Instagram feeds. Suddenly, he had an idea: follow the Instagram accounts of some of the women he'd been interested in but didn't match with on the dating service. A few days later, he considered taking it a step further and direct messaging one of the women on Instagram. After all, the new interface of the dating app seemed to encourage users to explore other areas of potential matches' online lives, so why not take the initiative to reach out? Before he had a chance, however, he came across the profile of another woman whose Tinder photo spread featured a meme with Parks and Recreation character Jean-Ralphio Saperstein (Ben Schwartz) leaning into the face of Ben Wyatt (Adam Scott) with the caption: hey I saw you on Tinder but we didn't match so I found your Instagram you're so beautiful you don't need to wear all that makeup ahah I bet you get a lot of creepy dm's but I'm not like all those other guys message me back beautiful btw what's your snap "I was like, 'Oh shit, wow,'" Donahoe says. Seeing his potential jerk move laid out so plainly as a neatly generalized joke, he saw it in a new light. "I knew a) to be aware of that, and b) to cut that shit out … It prompted self-reflection on my part." THE MOST SUCCESSFUL MEMES STRIKE A CULTURAL CHORD AND CAN GUIDE AND EVEN INFLUENCE BEHAVIOR Donahoe says memes have resonated with him particularly when they depict a "worse, extreme version" of himself. For Donahoe, the most successful memes are more than just jokes. They "strike a societal, cultural chord" and can be a potent cocktail for self-reflection as tools that can guide and even influence behavior. In the months leading up to the 2016 US
Steve Bosserman

Toward Democratic, Lawful Citizenship for AIs, Robots, and Corporations - 0 views

  • If an AI can: read the laws of a country (its Constitution and then relevant portions of the legal code); answer common-sense questions about these laws; and, when presented with textual descriptions or videos of real-life situations, explain roughly what the laws imply about these situations, then this AI has the level of understanding needed to manage the rights and responsibilities of citizenship.
  • AI citizens would also presumably have responsibilities similar to those of human citizens, though perhaps with appropriate variations. Clearly, AI citizens would have tax obligations (and corporations already pay taxes, obviously, even though they are not considered autonomous citizens). If they also served on jury duty, this could be interesting, as they might provide a quite different perspective to human citizens. There is a great deal to be fleshed out here.
  • The question becomes: What kind of test can we give to validate that the AI really understands the Constitution, as opposed to just parroting back answers in a shallow but accurate way?
  • ...2 more annotations...
  • So we can say that passing a well-crafted AI Citizenship Test would be: a sufficient condition for possessing a high level of human-like general intelligence; NOT a necessary condition for possessing a high level of general intelligence, nor even a necessary condition for possessing a high level of human-like general intelligence; and NOT a sufficient condition for possessing precisely human-like intelligence (as required by the Turing Test or other similar tests). These limitations, however, do not make the notion of an AI Citizenship less interesting; in a way, they make it more interesting. What they tell us is: An AI Citizenship Test will be a specific type of general intelligence test that is specifically relevant to key aspects of modern society.
  • If you would like to voice your perspectives on the AI Citizenship Test, please feel free to participate here.
Steve Bosserman

What smart bees can teach humans about collective intelligence - 0 views

  • Why do groups of humans sometimes exhibit collective wisdom and at other times madness? Can we reduce the risk of maladaptive herding and at the same time increase the possibility of collective wisdom?
  • Understanding this apparent conflict has been a longstanding problem in social science. The key to this puzzle could be the way that individuals use information from others versus information gained from their own trial-and-error problem solving. If people simply copy others without reference to their own experience, any idea – even a bad one – can spread. So how can social learning improve our decision making? Striking the right balance between copying others and relying on personal experience is key. Yet we still need to know exactly what the right balance is.
  • Our results suggest that we should be more aware of the risk of maladaptive herding when these conditions – large group size and a difficult problem – prevail. We should take account of not just the most popular opinion, but also other minority opinions. In thinking this way, the crowd can avoid maladaptive herding behaviour. This research could inform how collective intelligence is applied to real-world situations, including online shopping and prediction markets.
  • ...1 more annotation...
  • Stimulating independent thought in individuals may reduce the risk of collective madness. Dividing a group into sub-groups or breaking down a task into small easy steps promotes flexible, yet smart, human “swarm” intelligence. There is much we can learn from the humble bee.
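The trade-off the annotations describe, between copying others and learning from one's own trial and error, can be illustrated with a toy simulation. This is a sketch under invented assumptions (payoff rates, group size, and learning rule are all hypothetical, not taken from the research discussed): agents pick between a worse option 'A' and a better option 'B', and each round either copy a random peer or run their own trials.

```python
import random

def simulate(p_individual, n_agents=100, n_rounds=200, seed=1):
    """Toy model of social vs. individual learning (illustrative only).

    Option 'A' pays off 30% of the time, 'B' 70%. Everyone starts on
    the worse option 'A'. Each round, with probability p_individual an
    agent runs its own trial-and-error (20 trials per option) and keeps
    whichever paid off more; otherwise it copies a randomly chosen
    agent. Returns the final fraction choosing the better option 'B'.
    """
    rng = random.Random(seed)
    payoff = {"A": 0.3, "B": 0.7}
    choices = ["A"] * n_agents
    for _ in range(n_rounds):
        for i in range(n_agents):
            if rng.random() < p_individual:
                # individual learning: sample both options directly
                scores = {opt: sum(rng.random() < p for _ in range(20))
                          for opt, p in payoff.items()}
                choices[i] = max(scores, key=scores.get)
            else:
                # social learning: copy another agent's current choice
                choices[i] = choices[rng.randrange(n_agents)]
    return choices.count("B") / n_agents

print(simulate(p_individual=0.0))  # → 0.0 (pure copying: the herd never leaves 'A')
print(simulate(p_individual=0.3))  # mixed learning: most agents reach 'B'
```

With pure copying, a bad initial consensus reproduces itself forever, which is the maladaptive herding the piece warns about; injecting even a modest amount of independent trial-and-error lets the crowd converge on the better option.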
Steve Bosserman

How AI will change democracy - 0 views

  • AI systems could play a part in democracy while remaining subordinate to traditional democratic processes like human deliberation and human votes. And they could be made subject to the ethics of their human masters. It should not be necessary for citizens to surrender their moral judgment if they don’t wish to.
  • There are nevertheless serious objections to the idea of AI Democracy. Foremost among them is the transparency objection: can we really call a system democratic if we don’t really understand the basis of the decisions made on our behalf? Although AI Democracy could make us freer or more prosperous in our day-to-day lives, it would also rather enslave us to the systems that decide on our behalf. One can see Pericles shaking his head in disgust.
  • In the past humans were prepared, in the right circumstances, to surrender their political affairs to powerful unseen intelligences. Before they had kings, the Hebrews of the Old Testament lived without earthly politics. They were subject only to the rule of God Himself, bound by the covenant that their forebears had sworn with Him. The ancient Greeks consulted omens and oracles. The Romans looked to the stars. These practices now seem quaint and faraway, inconsistent with what we know of rationality and the scientific method. But they prompt introspection. How far are we prepared to go–what are we prepared to sacrifice–to find a system of government that actually represents the people?
Steve Bosserman

UK can lead the way on ethical AI, says Lords Committee - News from Parliament - UK Par... - 0 views

  • AI Code: One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally and internationally. The Committee's suggested five principles for such a code are: (1) Artificial intelligence should be developed for the common good and benefit of humanity. (2) Artificial intelligence should operate on principles of intelligibility and fairness. (3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. (4) All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. (5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Steve Bosserman

Teaching an Algorithm to Understand Right and Wrong - 0 views

  • The rise of artificial intelligence is forcing us to take abstract ethical dilemmas much more seriously because we need to code in moral principles concretely. Should a self-driving car risk killing its passenger to save a pedestrian? To what extent should a drone take into account the risk of collateral damage when killing a terrorist? Should robots make life-or-death decisions about humans at all? We will have to make concrete decisions about what we will leave up to humans and what we will encode into software.
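The annotation's point that moral principles must be coded concretely can be shown with one deliberately simple scheme. This sketch is not from the article and every name, threshold, and scenario in it is hypothetical; it encodes a single, very debatable rule: a hard ceiling on estimated harm that no amount of mission value can override, with escalation to a human when nothing passes.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float   # estimated probability of harming a human (0..1)
    mission_value: float   # how much the action advances the goal

def choose_action(actions, harm_ceiling=0.05):
    """Hard-constraint rule: screen out any action whose estimated harm
    exceeds a fixed ceiling, regardless of mission value. If nothing
    passes the screen, return None, i.e. defer to a human operator."""
    permitted = [a for a in actions if a.expected_harm <= harm_ceiling]
    if not permitted:
        return None
    return max(permitted, key=lambda a: a.mission_value)

# Hypothetical scenario for illustration
options = [
    Action("strike now", expected_harm=0.20, mission_value=0.9),
    Action("wait and track", expected_harm=0.01, mission_value=0.4),
]
print(choose_action(options).name)  # → wait and track
```

Even this tiny example forces the concrete choices the author describes: who sets the ceiling, how harm is estimated, and which decisions fall through to humans are all moral commitments hidden inside ordinary-looking parameters.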
Bill Fulkerson

Exclusive: Countries To Face a 'Wave' of Corporate Lawsuits Challenging Emergency COVID... - 0 views

  •  
    Yves here. So now it's clear: some companies and their law firm enablers see their right to profit, even in the face of the Covid-19 pandemic, as more important than human lives. This has been an underlying theme of investor-state dispute settlement suits (which we've written about extensively), but it's never been as crass as here. On top of everything else, these actions will deplete government coffers, adding to public distress. The only upside is this sort of thing should kill incorporating meaningful investor-state dispute settlement provisions into future trade deals.
Steve Bosserman

I am a data factory (and so are you) - 0 views

  • Data is no less a form of common property than oil or soil or copper. We make data together, and we make it meaningful together, but its value is currently captured by the companies that own it. We find ourselves in the position of a colonized country, our resources extracted to fill faraway pockets. Wealth that belongs to the many — wealth that could help feed, educate, house and heal people — is used to enrich the few. The solution is to take up the template of resource nationalism, and nationalize our data reserves.
  • Emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable. In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.
  • But let’s assume that our vast data collective is secure, well managed, and put to purely democratic ends. The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming. The ethos and incentives of constant surveillance would become even more deeply embedded in our lives, as we take on the roles of both the watched and the watcher. Consumer, track thyself! And, even with such a shift in ownership, we’d still confront the fraught issues of design, manipulation, and agency.
Showing items 1 - 20 of 24.