
GAVNet Collaborative Curation: Group items tagged "fair"


The Fourth Industrial Revolution: Proceedings of a Workshop - in Brief

  • The Forum's perspective on present and future technological and societal changes is captured in its 'Principled Framework for the Fourth Industrial Revolution.' Philbeck explained the four principles that characterize the Fourth Industrial Revolution:
  • Think systems, not technologies. Individual technologies are interesting, but it is their systemic impact that matters. Emerging technologies challenge our societal values and norms, sometimes for good, but sometimes also in negative ways; the Fourth Industrial Revolution will have civilization-changing impact: on species, on the planet, on geopolitics, and on the global economy. Philbeck suggested that wealth creation and aggregation supported by this phase of technological innovation may challenge societal commitments to accessibility, inclusivity, and fairness, and create the need for relentless worker re-education. As Philbeck stated, "The costs for greater productivity are often externalized to stakeholders who are not involved in a particular technology's development."
  • Empowering, not determining. The Forum urges an approach to the Fourth Industrial Revolution that honors existing social principles. "We need to take a stance toward technology and technological systems that empowers society and acts counter to fatalistic and deterministic views, so that society and its agency is not nullified," said Philbeck. "Technologies are not forces; we have the ability to shape them and decide on how they are applied."
  • Future by design, and not by default. Seeking a future by design requires active governance. There are many types of governance: by individuals, by governments, by civic society, and by companies. Philbeck argued that failure to pay attention to critical governance questions in consideration of the Fourth Industrial Revolution means societies are likely to allow undemocratic, random, and potentially malicious forces to shape the future of technological systems and th…

Trump's New Executive Orders To Restrain the Administrative State - Reason.com

  • The first order declares that its goal is "to ensure that Americans are subject to only those binding rules imposed through duly enacted statutes or through regulations lawfully promulgated under them, and that Americans have fair notice of their obligations." The second complements the first, promising that Americans will not "be subjected to a civil administrative enforcement action or adjudication absent prior public notice of both the enforcing agency's jurisdiction over particular conduct and the legal standards applicable to that conduct."

For better AI, diversify the people building it

  • Lyons announced the Partnership on AI’s first three working groups, which are dedicated to fair, transparent, and accountable AI; safety-critical AI; and AI, labor, and the economy. Each group will have a for-profit and nonprofit chair and aim to share its results as widely as possible. Lyons says these groups will be like a “union of concerned scientists.” “A big part of this is on us to really achieve inclusivity,” she says. Tess Posner, the executive director of AI4ALL, a nonprofit that runs summer programs teaching AI to students from underrepresented groups, showed why training a diverse group for the next generation of AI workers is essential. Currently, only 13 percent of AI companies have female CEOs, and less than 3 percent of tenure-track engineering faculty in the US are black. Yet an inclusive workforce may have more ideas and can spot problems with systems before they happen, and diversity can improve the bottom line. Posner pointed out a recent Intel report saying diversity could add $500 billion to the US economy.
  • “It’s good for business,” she says. These weren’t the first presentations at EmTech Digital by women with ideas on fixing AI. On Monday, Microsoft researcher Timnit Gebru presented examples of bias in current AI systems, and earlier on Tuesday Fast.ai cofounder Rachel Thomas talked about her company’s free deep-learning course and its effort to diversify the overall AI workforce. Even with the current problems achieving diversity, there are more women and people of color who could be brought into the workforce.

Gamification has a dark side

  • Gamification is the application of game elements into nongame spaces. It is the permeation of ideas and values from the sphere of play and leisure to other social spaces. It’s premised on a seductive idea: if you layer elements of games, such as rules, feedback systems, rewards and videogame-like user interfaces over reality, it will make any activity motivating, fair and (potentially) fun. ‘We are starving and games are feeding us,’ writes Jane McGonigal in Reality Is Broken (2011). ‘What if we decided to use everything we know about game design to fix what’s wrong with reality?’
  • But gamification’s trappings of total fun mask that we have very little control over the games we are made to play – and hide the fact that these games are not games at all. Gamified systems are tools, not toys. They can teach complex topics and engage us with otherwise difficult problems. Or they can function as subtle systems of social control.
  • The problem of the gamified workplace goes beyond micromanagement. The business ethicist Tae Wan Kim at Carnegie Mellon University in Pittsburgh warns that gamified systems have the potential to complicate and subvert ethical reasoning. He cites the example of a drowning child. If you save the child, motivated by empathy, sympathy or goodwill – that’s a morally good act. But say you gamify the situation. Say you earn points for saving drowning children. ‘Your gamified act is ethically unworthy,’ he explained to me in an email. Providing extrinsic gamified motivators, even if they work as intended, deprives us of the option to live worthy lives, Kim argues. ‘The workplace is a sacred space where we develop ourselves and help others,’ he notes. ‘Gamified workers have difficulty seeing what contributions they really make.’
  • The 20th-century French philosopher Michel Foucault would have said that these are technologies of power. Today, the interface designer and game scholar Sebastian Deterding says that this kind of gamification expresses a modernist view of a world with top-down managerial control. But the concept is flawed. Gamification promises easy, centralised overviews and control. ‘It’s a comforting illusion because de facto reality is not as predictable as a simulation,’ Deterding says. You can make a model of a city in SimCity that bears little resemblance to a real city. Mistaking games for reality is ultimately mistaking map for territory. No matter how well-designed, a simulation cannot account for the unforeseen.

Millennials are struggling. Is it the fault of the baby boomers?

  • Anthropologist Helen Fisher inelegantly described the maturing of this huge postwar bulge in the population as “like a pig moving through a python”, changing society as we grew older on a scale never known before. We challenged the Victorian puritanism, censorship, class snobbery and inhibitions of the establishment. Full employment put money in the pockets of managers and factory workers alike. In spanking new houses with inside lavatories and proper bathrooms, hire purchase allowed him (and less so, her) to spend, spend, spend as if, overnight, everyone had become a toff. It could only get better. Yet today, “baby boomer” is a toxic phrase, shorthand for greed and selfishness, for denying the benefits we took for granted to subsequent generations, notably beleaguered millennials, who reached adulthood in the early years of this century. So, where did it all go so very wrong?

UK can lead the way on ethical AI, says Lords Committee - News from Parliament - UK Par...

  • AI Code: One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally and internationally. The Committee’s suggested five principles for such a code are:
  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

How We Made AI As Racist and Sexist As Humans

  • Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.
  • The majority of AI systems used in commercial applications—the ones that mediate our access to services like jobs, credit, and loans— are proprietary, their algorithms and training data kept hidden from public view. That makes it exceptionally difficult for an individual to interrogate the decisions of a machine or to know when an algorithm, trained on historical examples checkered by human bias, is stacked against them. And forget about trying to prove that AI systems may be violating human rights legislation.
  • Data is essential to the operation of an AI system. And the more complicated the system—the more layers in the neural nets, to translate speech or identify faces or calculate the likelihood someone defaults on a loan—the more data must be collected.
  • The power of the system is its “ability to recognize that correlations occur between gender and professions,” says Kathryn Hume. “The downside is that there’s no intentionality behind the system—it’s just math picking up on correlations. It doesn’t know this is a sensitive issue.” There’s a tension between the futuristic and the archaic at play in this technology. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them.
  • And sometimes, even when ample data exists, those who build the training sets don’t take deliberate measures to ensure its diversity
  • But not everyone will be equally represented in that data.
  • Accordingly, groups that have been the target of systemic discrimination by institutions that include police forces and courts don’t fare any better when judgment is handed over to a machine.
  • A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias.
  • Still, algorithmic interventions only do so much; addressing bias also demands diversity in the programmers who are training machines in the first place.
  • A growing awareness of algorithmic bias isn’t only a chance to intervene in our approaches to building AI systems. It’s an opportunity to interrogate why the data we’ve created looks like this and what prejudices continue to shape a society that allows these patterns in the data to emerge.
  • Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data.
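  • The mechanism these annotations describe, a system that is "just math picking up on correlations" inheriting whatever skew its training data contains, and the remedy of "better data", can be seen in a minimal sketch. The corpus and the cooccurrence_rate helper below are invented for illustration; real systems learn dense embeddings rather than raw co-occurrence counts:

```python
# Minimal sketch (hypothetical data): a model that only picks up on
# correlations reproduces whatever skew its training corpus contains.

def cooccurrence_rate(corpus, word_a, word_b):
    """Fraction of sentences containing word_a that also contain word_b."""
    with_a = [s for s in corpus if word_a in s]
    if not with_a:
        return 0.0
    return sum(word_b in s for s in with_a) / len(with_a)

# Skewed corpus: "doctor" appears mostly alongside "he".
skewed = [
    {"he", "is", "a", "doctor"}, {"he", "is", "a", "doctor"},
    {"he", "is", "a", "doctor"}, {"she", "is", "a", "doctor"},
]
# "Better data": the same sentences, balanced across genders.
balanced = [
    {"he", "is", "a", "doctor"}, {"he", "is", "a", "doctor"},
    {"she", "is", "a", "doctor"}, {"she", "is", "a", "doctor"},
]

print(cooccurrence_rate(skewed, "doctor", "he"))    # 0.75: the skew is learned
print(cooccurrence_rate(balanced, "doctor", "he"))  # 0.5: the skew disappears
```

  • The math is indifferent to which association it encodes; only the composition of the data changes the answer, which is the sense in which "get better data" is a fix.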