
GAVNet Collaborative Curation: group items tagged "code"


Bill Fulkerson

The worst thing I read this year, and what it taught me… or Can we design soc...

  • "I'm going to teach a new course this fall, tentatively titled "Technology and Social Change". It's going to include an examination of the four levers of social change Larry Lessig suggests in Code and which I've been exploring as possible paths to civic engagement. It will include deep methodological dives into codesign, and into using anthropology as a tool for understanding user needs. It will look at unintended consequences, cases where technology's best intentions fail, and cases where careful exploration and preparation led to technosocial systems that make users and communities more powerful than they were before."
Steve Bosserman

Is Trump fighting the 'deep state' or creating his own? - The Washington Post

  • It's not far-fetched to suggest there is a "deep state" in Washington. Former President Dwight D. Eisenhower looked at the nexus of the Pentagon and arms manufacturers and coined the phrase the "military-industrial complex." Today's observers also point to the collusion of corporate interests and D.C. power brokers as the true guiding hand in American politics.
  • The Trump White House already seems to be at war with what it would say is the "deep state:" thousands of federal government bureaucrats faced with the awkward reality of working for a president who campaigned loudly against Washington officialdom and promised to "drain the swamp" when in power. This week, almost 1,000 American diplomats signed a dissent memo against Trump's executive order on immigration, prompting White House press secretary Sean Spicer to icily declare that "career bureaucrats" can "either get with the program or they can go." And Trump's public spat with Acting Attorney General Sally Yates, an Obama appointee who sought to defy his immigration order, ended with him firing Yates in an angry, chilling memo that claimed she "betrayed the Department of Justice."
Steve Bosserman

Toward Democratic, Lawful Citizenship for AIs, Robots, and Corporations

  • If an AI can (1) read the laws of a country (its Constitution and then relevant portions of the legal code), (2) answer common-sense questions about these laws, and (3) when presented with textual descriptions or videos of real-life situations, explain roughly what the laws imply about these situations, then this AI has the level of understanding needed to manage the rights and responsibilities of citizenship.
  • AI citizens would also presumably have responsibilities similar to those of human citizens, though perhaps with appropriate variations. Clearly, AI citizens would have tax obligations (and corporations already pay taxes, obviously, even though they are not considered autonomous citizens). If they also served on jury duty, this could be interesting, as they might provide a quite different perspective to human citizens. There is a great deal to be fleshed out here.
  • The question becomes: What kind of test can we give to validate that the AI really understands the Constitution, as opposed to just parroting back answers in a shallow but accurate way?
  • So we can say that passing a well-crafted AI Citizenship Test would be: (a) a sufficient condition for possessing a high level of human-like general intelligence; (b) NOT a necessary condition for possessing a high level of general intelligence, nor even a necessary condition for possessing a high level of human-like general intelligence; and (c) NOT a sufficient condition for possessing precisely human-like intelligence (as required by the Turing Test or other similar tests). These limitations, however, do not make the notion of an AI Citizenship Test less interesting; in a way, they make it more interesting. What they tell us is: an AI Citizenship Test will be a specific type of general intelligence test that is specifically relevant to key aspects of modern society.
  • If you would like to voice your perspectives on the AI Citizenship Test, please feel free to participate here.
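The article's three-part criterion reads like a capability checklist, which can be sketched in code. This is a toy illustration only; the criterion strings, function name, and pass/fail logic below are hypothetical, not anything proposed in the article itself.

```python
# Toy sketch of the article's three-part citizenship criterion.
# All names and the pass/fail logic are illustrative assumptions.

CRITERIA = [
    "reads the constitution and relevant legal code",
    "answers common-sense questions about those laws",
    "explains what the laws imply about described real-life situations",
]

def passes_citizenship_test(capabilities: set) -> bool:
    """An AI 'passes' only if it demonstrates every capability.

    Per the article, passing would be sufficient evidence of human-like
    general intelligence, but failing would not prove its absence.
    """
    return all(c in capabilities for c in CRITERIA)

demonstrated = set(CRITERIA[:2])  # reads and answers, but cannot explain
print(passes_citizenship_test(demonstrated))   # False
print(passes_citizenship_test(set(CRITERIA)))  # True
```

The asymmetry in the bullet above falls out directly: a passing result tells you something strong, while a failing one tells you little.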
Steve Bosserman

Will AI replace Humans? - FutureSin - Medium

  • According to the World Economic Forum's Future of Jobs report, some jobs will be wiped out, others will be in high demand, but all in all, around 5 million jobs will be lost. The real question, then, is how many jobs will be made redundant in the 2020s. Many futurists, including Google's Chief Futurist, believe this will necessitate a universal human stipend that could become globally ubiquitous as early as the 2030s.
  • AI will optimize many of our systems, but also create new jobs. We don’t know the rate at which it will do this. Research firm Gartner further confirms the hypothesis of AI creating more jobs than it replaces, by predicting that in 2020, AI will create 2.3 million new jobs while eliminating 1.8 million traditional jobs.
  • In an era where it's being shown we can't even regulate algorithms, how will we be able to regulate AI and robots that will progressively have a better capacity to self-learn, self-engineer, self-code, and self-replicate? This first wave of robots are simply robots capable of performing repetitive tasks, but as human beings become less intelligent, trapped in digital immersion, the rate at which robots learn how to learn will increase exponentially. How do humans stay relevant when Big Data enables AI to comb through contextual data as a supercomputer would? Data will no longer be the purview of human beings, nor will medical diagnosis and many other tasks. To say that AI "augments" humans in this respect is extremely naive and hopelessly optimistic. In many respects, AI completely replaces the need for human beings. This is what I term the automation economy.
  • If China, Russia and the U.S. are in a race for AI supremacy, the kind of manifestations of AI will be so significant, they could alter the entire future of human civilization.
  • THE EXPONENTIAL THREAT: From drones to nanobots to 3D printing, automation could lead to unparalleled changes to how we live and work. In spite of the increase in global GDP, most people's quality of living is not likely to see the benefit, as it will increasingly be funneled into the pockets of the 1%. Capitalism, then, favors the development of an AI that's fundamentally exploitative to the common global citizen. Just as we exchanged our personal data for convenience and the illusion of social connection online, we will barter convenience for a global police state where social credit systems and AI decide how much of a "human stipend" (basic income) we receive. Our poverty, or the social privilege we are born into, may have a more obscure relationship to a global system where AI monitors every aspect of our lives. Eventually AI will itself be the CEO, inventor, master engineer, and creator of more efficient robots. That is when we will know that AI has indeed replaced human beings. What will Google's DeepMind be able to do with the full use of next-gen quantum computing and supercomputers?
  • Artificial Intelligence Will Replace Humans: To argue that AI, robots, 3D printing, or any other significant technology won't impact and replace many human jobs is incredibly irresponsible. That's not to say humans won't adapt, and even thrive, in more creative, social, and meaningful work! That AI replacing repetitive tasks is a good thing can hardly be denied. But will it benefit all global citizens equally? Will ethics, common sense, collective pragmatism, and social inclusion prevail over profiteers? Will younger value systems such as decentralization and sustainable living thrive with the advances of artificial intelligence? Will human beings be able to find sufficient meaning in a life where many of them won't have a designated occupation to fill their time? These are the questions that futurists like me ponder, and you should too.
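The Gartner figures quoted earlier in this item imply a net gain rather than a net loss, which is worth making explicit:

```python
# Net job change implied by the Gartner prediction quoted above
# (2.3 million created vs. 1.8 million eliminated in 2020).
created = 2_300_000
eliminated = 1_800_000
net = created - eliminated
print(f"net change: {net:+,} jobs")  # net change: +500,000 jobs
```

A projected net gain of half a million jobs, which is the basis for Gartner's claim that AI creates more jobs than it replaces.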
Steve Bosserman

Biology Will Be the Next Great Computing Platform

  • Crispr, the powerful gene-editing tool, is revolutionizing the speed and scope with which scientists can modify the DNA of organisms, including human cells. So many people want to use it, from academic researchers to agtech companies to biopharma firms, that new companies are popping up to meet the demand. Companies like Synthego, which is using a combination of software engineering and hardware automation to become the Amazon of genome engineering. And Inscripta, which wants to be the Apple. And Twist Bioscience, which could be the Intel.
  • "Being able to do that in a parallel way is the novel part," says Paul Dabrowski, who estimates that Synthego cuts down the time it takes for a scientist to perform gene edits from several months to just one.
  • They’re betting biology will be the next great computing platform, DNA will be the code that runs it, and Crispr will be the programming language.
  • his company’s first move was to release a different gene-editing enzyme called MAD7—you can think of it like a Crispr/Cas9 knockoff, but legal—free for R&D uses. Inscripta will charge a single-digit royalty, far below market standards, to use MAD7 in manufacturing products or therapeutics.
  • We’re trying to get more people into the game now, by democratizing access to this family of enzymes,” he says. It’s a page from the Steve Jobs playbook; get them hooked on the MADzyme platform, down the line sell them personal hardware.
Steve Bosserman

How We Made AI As Racist and Sexist As Humans

  • Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.
  • The majority of AI systems used in commercial applications—the ones that mediate our access to services like jobs, credit, and loans—are proprietary, their algorithms and training data kept hidden from public view. That makes it exceptionally difficult for an individual to interrogate the decisions of a machine or to know when an algorithm, trained on historical examples checkered by human bias, is stacked against them. And forget about trying to prove that AI systems may be violating human rights legislation.
  • Data is essential to the operation of an AI system. And the more complicated the system—the more layers in the neural nets, to translate speech or identify faces or calculate the likelihood someone defaults on a loan—the more data must be collected.
  • The power of the system is its “ability to recognize that correlations occur between gender and professions,” says Kathryn Hume. “The downside is that there’s no intentionality behind the system—it’s just math picking up on correlations. It doesn’t know this is a sensitive issue.” There’s a tension between the futuristic and the archaic at play in this technology. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them.
  • And sometimes, even when ample data exists, those who build the training sets don't take deliberate measures to ensure their diversity.
  • But not everyone will be equally represented in that data.
  • Accordingly, groups that have been the target of systemic discrimination by institutions that include police forces and courts don’t fare any better when judgment is handed over to a machine.
  • A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias.
  • Still, algorithmic interventions only do so much; addressing bias also demands diversity in the programmers who are training machines in the first place.
  • A growing awareness of algorithmic bias isn’t only a chance to intervene in our approaches to building AI systems. It’s an opportunity to interrogate why the data we’ve created looks like this and what prejudices continue to shape a society that allows these patterns in the data to emerge.
  • Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data.
Steve Bosserman

Teaching an Algorithm to Understand Right and Wrong

  • The rise of artificial intelligence is forcing us to take abstract ethical dilemmas much more seriously because we need to code in moral principles concretely. Should a self-driving car risk killing its passenger to save a pedestrian? To what extent should a drone take into account the risk of collateral damage when killing a terrorist? Should robots make life-or-death decisions about humans at all? We will have to make concrete decisions about what we will leave up to humans and what we will encode into software.
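What "coding in moral principles concretely" might mean can be sketched with a toy rule for the self-driving-car dilemma above. Even a crude utilitarian rule forces the trade-off to be stated explicitly and auditably; the rule and all names here are hypothetical illustrations, not a real autonomous-vehicle policy.

```python
# Hypothetical sketch: encoding one concrete ethical rule for the
# trolley-style dilemma above. Writing the rule down exposes exactly
# which value judgment is being made. Not a real vehicle policy.

def choose_action(lives_if_swerve: int, lives_if_stay: int) -> str:
    """Crude utilitarian rule: pick the maneuver that risks fewer lives.

    A rights-based rule (e.g. 'never actively endanger a bystander')
    would encode a different, equally explicit, value judgment.
    """
    return "swerve" if lives_if_swerve < lives_if_stay else "stay"

print(choose_action(lives_if_swerve=1, lives_if_stay=3))  # swerve
print(choose_action(lives_if_swerve=2, lives_if_stay=1))  # stay
```

The point is not that this rule is right, but that software demands some rule be chosen, which is exactly the forcing function the annotation describes.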