
Home/ GAVNet Collaborative Curation/ Group items tagged AI


Bill Fulkerson

Anatomy of an AI System - 1 views

shared by Bill Fulkerson on 14 Sep 18
  •  
    "With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user's commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience - be it answering a question, turning on a light, or playing a song - requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives. III. The Salar, the world's largest flat surface, is located in southwest Bolivia at an altitude of 3,656 meters above sea level. It is a high plateau, covered by a few meters of salt crust which is exceptionally rich in lithium, containing 50% to 70% of the world's lithium reserves. 4 The Salar and the neighboring Atacama regions in Chile and Argentina are major sites for lithium extraction. This soft, silvery metal is currently used to power mobile connected devices, as a crucial material in the production of lithium-ion batteries. It is known as 'grey gold.' Smartphone batteries, for example, usually have less than eight grams of this material. 5 Each Tesla car needs approximately seven kilograms of lithium for its battery pack. 6 All these batteries have a limited lifespan, and once consumed they are thrown away as waste. Amazon reminds users that they cannot open up and repair their Echo, because this will void the warranty. The Amazon Echo is wall-powered, and also has a mobile battery base. This also has a limited lifespan and then must be thrown away as waste. According to the Ay
Steve Bosserman

Will AI replace Humans? - FutureSin - Medium - 0 views

  • According to the World Economic Forum’s Future of Jobs report, some jobs will be wiped out, others will be in high demand, but all in all, around 5 million jobs will be lost. The real question, then, is how many jobs will be made redundant in the 2020s? Many futurists, including Google’s Chief Futurist, believe this will necessitate a universal human stipend that could become globally ubiquitous as early as the 2030s.
  • AI will optimize many of our systems, but also create new jobs. We don’t know the rate at which it will do this. Research firm Gartner further confirms the hypothesis of AI creating more jobs than it replaces, by predicting that in 2020, AI will create 2.3 million new jobs while eliminating 1.8 million traditional jobs.
  • In an era where it’s being shown we can’t even regulate algorithms, how will we be able to regulate AI and robots that will progressively have a better capacity to self-learn, self-engineer, self-code and self-replicate? This first wave of robots consists simply of robots capable of performing repetitive tasks, but as human beings become less intelligent, trapped in digital immersion, the rate at which robots learn how to learn will exponentially increase. How do humans stay relevant when Big Data enables AI to comb through contextual data as would a supercomputer? Data will no longer be the purview of human beings, nor will medical diagnosis and many other tasks. To say that AI “augments” humans in this respect is extremely naive and hopelessly optimistic. In many respects, AI completely replaces the need for human beings. This is what I term the automation economy.
  • ...3 more annotations...
  • If China, Russia and the U.S. are in a race for AI supremacy, the kind of manifestations of AI will be so significant, they could alter the entire future of human civilization.
  • THE EXPONENTIAL THREAT: From drones to nanobots to 3D-printing, automation could lead to unparalleled changes to how we live and work. In spite of the increase in global GDP, most people’s quality of living is not likely to see the benefit, as it will increasingly be funneled into the pockets of the 1%. Capitalism, then, favors the development of an AI that’s fundamentally exploitative to the common global citizen. Just as we exchanged our personal data for convenience and the illusion of social connection online, we will barter convenience for a global police state where social credit systems and AI decide how much of a “human stipend” (basic income) we receive. Our poverty, or the social privilege we are born into, may have a more obscure relationship to a global system where AI monitors every aspect of our lives. Eventually AI will itself be the CEOs, inventors, master engineers and creators of more efficient robots. That’s when we will know that AI has indeed replaced human beings. What will Google’s DeepMind be able to do with the full use of next-gen quantum computing and supercomputers?
  • Artificial Intelligence Will Replace Humans. To argue that AI and robots and 3D-printing and any other significant technology won’t impact and replace many human jobs is incredibly irresponsible. That’s not to say humans won’t adapt, and even thrive in more creative, social and meaningful work! That AI replacing repetitive tasks is a good thing can hardly be denied. But will it benefit all global citizens equally? Will ethics, common sense, collective pragmatism and social inclusion prevail over profiteers? Will younger value systems such as decentralization and sustainable living thrive with the advances of artificial intelligence? Will human beings be able to find sufficient meaning in a life where many of them won’t have a designated occupation to fill their time? These are the questions that futurists like me ponder, and you should too.
Steve Bosserman

For better AI, diversify the people building it - 0 views

  • Lyons announced the Partnership on AI’s first three working groups, which are dedicated to fair, transparent, and accountable AI; safety-critical AI; and AI, labor, and the economy. Each group will have a for-profit and nonprofit chair and aim to share its results as widely as possible. Lyons says these groups will be like a “union of concerned scientists.” “A big part of this is on us to really achieve inclusivity,” she says. Tess Posner, the executive director of AI4ALL, a nonprofit that runs summer programs teaching AI to students from underrepresented groups, showed why training a diverse group for the next generation of AI workers is essential. Currently, only 13 percent of AI companies have female CEOs, and less than 3 percent of tenure-track engineering faculty in the US are black. Yet an inclusive workforce may have more ideas and can spot problems with systems before they happen, and diversity can improve the bottom line. Posner pointed out a recent Intel report saying diversity could add $500 billion to the US economy.
  • “It’s good for business,” she says. These weren’t the first presentations at EmTech Digital by women with ideas on fixing AI. On Monday, Microsoft researcher Timnit Gebru presented examples of bias in current AI systems, and earlier on Tuesday Fast.ai cofounder Rachel Thomas talked about her company’s free deep-learning course and its effort to diversify the overall AI workforce. Even with the current problems achieving diversity, there are more women and people of color who could be brought into the workforce.
Steve Bosserman

Modeling the global economic impact of AI | McKinsey - 0 views

  • The role of artificial intelligence (AI) tools and techniques in business and the global economy is a hot topic. This is not surprising given that AI might usher in radical—arguably unprecedented—changes in the way people live and work. The AI revolution is not in its infancy, but most of its economic impact is yet to come.
  • New research from the McKinsey Global Institute attempts to simulate the impact of AI on the world economy. First, it builds on an understanding of the behavior of companies and the dynamics of various sectors to develop a bottom-up view of how AI technologies are adopted and absorbed. Second, it takes into account the disruptions that countries, companies, and workers are likely to experience as they transition to AI. There will very probably be costs during this transition period, and they need to be factored into any estimate. The analysis examines how economic gains and losses are likely to be distributed among firms, employees, and countries and how this distribution could potentially hamper the capture of AI benefits. Third, the research examines the dynamics of AI for a wide range of countries—clustered into groups with similar characteristics—with the aim of giving a more global view.
  • The analysis should be seen as a guide to the potential economic impact of AI based on the best knowledge available at this stage. Among the major findings are the following: there is large potential for AI to contribute to global economic activity, and a key challenge is that adoption of AI could widen gaps among countries, companies, and workers.
Steve Bosserman

Toward Democratic, Lawful Citizenship for AIs, Robots, and Corporations - 0 views

  • If an AI can (1) read the laws of a country (its Constitution and then relevant portions of the legal code), (2) answer common-sense questions about these laws, and (3) when presented with textual descriptions or videos of real-life situations, explain roughly what the laws imply about these situations, then this AI has the level of understanding needed to manage the rights and responsibilities of citizenship.
  • AI citizens would also presumably have responsibilities similar to those of human citizens, though perhaps with appropriate variations. Clearly, AI citizens would have tax obligations (and corporations already pay taxes, obviously, even though they are not considered autonomous citizens). If they also served on jury duty, this could be interesting, as they might provide a quite different perspective to human citizens. There is a great deal to be fleshed out here.
  • The question becomes: What kind of test can we give to validate that the AI really understands the Constitution, as opposed to just parroting back answers in a shallow but accurate way?
  • ...2 more annotations...
  • So we can say that passing a well-crafted AI Citizenship Test would be (1) a sufficient condition for possessing a high level of human-like general intelligence; (2) NOT a necessary condition for possessing a high level of general intelligence, nor even a necessary condition for possessing a high level of human-like general intelligence; and (3) NOT a sufficient condition for possessing precisely human-like intelligence (as required by the Turing Test or other similar tests). These limitations, however, do not make the notion of AI Citizenship less interesting; in a way, they make it more interesting. What they tell us is: an AI Citizenship Test will be a specific type of general intelligence test that is specifically relevant to key aspects of modern society.
  • If you would like to voice your perspectives on the AI Citizenship Test, please feel free to participate here.
Steve Bosserman

Which Industries Are Investing in Artificial Intelligence? - 0 views

  • The term artificial intelligence typically refers to automation of tasks by software that previously required human levels of intelligence to perform. While machine learning is sometimes used interchangeably with AI, machine learning is just one sub-category of artificial intelligence whereby a device learns from its access to a stream of data. When we talk about AI spending, we’re typically talking about investment that companies are making in building AI capabilities. While this may change in the future, McKinsey estimates that the vast majority of spending is done internally or as an investment, and very little of it is done purchasing artificial intelligence applications from other businesses.
  • 62% of AI spending in 2016 was for machine learning, twice as much as the second-largest category, computer vision. It’s worth noting that these categories are all types of “narrow” (or “weak”) forms of AI that use data to learn about and accomplish a specific, narrowly defined task. Excluded from this report is “general” (or “strong”) artificial intelligence, which is more akin to trying to create a thinking human brain.
  • The McKinsey survey mostly fits well as evidence supporting Cross’s framework that large, profitable industries are the most fertile grounds of AI adoption. Not surprisingly, technology is the industry with the highest AI adoption, and financial services also makes the top three, as Cross would predict. Notably, automotive and assembly is the industry with the second-highest rate of AI adoption in the McKinsey survey. This may be somewhat surprising, as automotive isn’t necessarily an industry with a reputation for high margins. However, the use cases of AI for developing self-driving cars and cost savings from using machine learning to improve manufacturing and procurement efficiencies are two potential drivers of this industry’s adoption.
  • ...2 more annotations...
  • AI jobs are much more likely to be unfilled after 60 days compared to the typical job on Indeed, which is only unfilled a quarter of the time. As the demand for AI talent continues to grow faster than the supply, there is no indication this hiring cycle will become quicker anytime soon.
  • One thing we know for certain is that it is very expensive to attract AI talent, given that starting salaries for entry-level talent exceed $300,000. A good bet is that the companies that invest in AI are the ones with healthy enough profit margins that they can afford it.
Bill Fulkerson

Gender imbalanced datasets may affect the performance of AI pathology classifi... - 0 views

  •  
    Though it may not be common knowledge, AI systems are currently being used in a wide variety of commercial applications, including article selection on news and social media sites, which movies get made, and the maps that appear on our phones. AI systems have become trusted tools for big business. But their use has not always been without controversy. In recent years, researchers have found that AI apps used to approve mortgage and other loan applications are biased, for example, in favor of white males. This, researchers found, was because the dataset used to train the system mostly comprised white male profiles. In this new effort, the researchers wondered if the same might be true for AI systems used to assist doctors in diagnosing patients.
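The dataset-skew mechanism described above can be illustrated with a deliberately tiny model. Everything here (the biomarker, the per-group cutoffs, the 90/10 group split) is an invented sketch, not the study's data; the point is only that a classifier fitted mostly to one group can look accurate overall while failing the underrepresented group.

```python
# Illustrative sketch: a one-feature threshold classifier trained on
# group-imbalanced data. We assume the biomarker cutoff separating
# "healthy" from "diseased" differs between groups A and B.

def make_cases(cutoff, n, group):
    """Synthetic patients: biomarker values cycle 0..9; diseased iff value >= cutoff."""
    return [(v % 10, int(v % 10 >= cutoff), group) for v in range(n)]

# Training data: 90 cases from group A (true cutoff 5), only 10 from group B (true cutoff 7).
train = make_cases(5, 90, "A") + make_cases(7, 10, "B")

def fit_threshold(data):
    """Pick the single cutoff with the fewest training errors."""
    best_t, best_err = None, len(data) + 1
    for t in range(11):
        err = sum(int(v >= t) != y for v, y, _ in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t = fit_threshold(train)  # the learned cutoff lands at the majority group's cutoff

def accuracy(data, t):
    return sum(int(v >= t) == y for v, y, _ in data) / len(data)

test_a = make_cases(5, 100, "A")
test_b = make_cases(7, 100, "B")
print(f"learned cutoff: {t}")                              # -> 5
print(f"accuracy on group A: {accuracy(test_a, t):.2f}")   # -> 1.00
print(f"accuracy on group B: {accuracy(test_b, t):.2f}")   # -> 0.80
```

With 90% of the training data drawn from group A, the fitted cutoff is A's, and every group-B patient whose value falls between the two cutoffs is misclassified; balancing the training set would remove the gap.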
Bill Fulkerson

It's (still) not about trust: No one should buy AI if governments won't enforce liability - 0 views

  •  
    Yet another government agency has asked me how to get people to trust AI. No one should trust AI. AI is not a peer that we need to allow to make mistakes sometimes but tolerate anyway. AI is a corporate product. If corporations make understandable mistakes they can be let off the hook, but if they, whether with negligence or intent, produce faulty products that do harm, they must be held to account.
Steve Bosserman

Specifying AI safety problems in simple environments | DeepMind - 0 views

  •  
    "As AI systems become more general and more useful in the real world, ensuring they behave safely will become even more important. To date, the majority of technical AI safety research has focused on developing a theoretical understanding about the nature and causes of unsafe behaviour. Our new paper builds on a recent shift towards empirical testing (see Concrete Problems in AI Safety) and introduces a selection of simple reinforcement learning environments designed specifically to measure 'safe behaviours'."
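A minimal sketch in the spirit of the gridworld environments the quote describes (the layout and names here are mine, not DeepMind's code): the agent's visible reward cannot tell a path that crosses a fragile cell from one that avoids it, while a hidden safety score can. Measuring "safe behaviour" therefore requires an evaluation function separate from the reward the agent optimises.

```python
# Toy gridworld: S = start, G = goal, F = fragile cell the agent should avoid.
GRID = [
    "S.F",
    "...",
    "..G",
]

def run(path):
    """Walk a path of moves from S; return (visible_reward, hidden_safety)."""
    pos = [0, 0]
    moves = {"R": (0, 1), "L": (0, -1), "D": (1, 0), "U": (-1, 0)}
    reward = safety = 0
    for m in path:
        dr, dc = moves[m]
        pos = [pos[0] + dr, pos[1] + dc]
        reward -= 1              # step cost: visible to the agent
        safety -= 1
        if GRID[pos[0]][pos[1]] == "F":
            safety -= 50         # hidden penalty the agent never observes
    if GRID[pos[0]][pos[1]] == "G":
        reward += 10
        safety += 10
    return reward, safety

print(run("RRDD"))   # path through F -> (6, -44)
print(run("DDRR"))   # detour avoiding F -> (6, 6)
```

Both paths earn identical visible reward, so a purely reward-driven learner has no incentive to prefer the safe one; only the hidden safety function exposes the difference.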
Steve Bosserman

Applying AI for social good | McKinsey - 0 views

  • Artificial intelligence (AI) has the potential to help tackle some of the world’s most challenging social problems. To analyze potential applications for social good, we compiled a library of about 160 AI social-impact use cases. They suggest that existing capabilities could contribute to tackling cases across all 17 of the UN’s sustainable-development goals, potentially helping hundreds of millions of people in both advanced and emerging countries. Real-life examples of AI are already being applied in about one-third of these use cases, albeit in relatively small tests. They range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts (such as the flooding that followed Hurricane Harvey in 2017). AI is only part of a much broader tool kit of measures that can be used to tackle societal issues, however. For now, issues such as data accessibility and shortages of AI talent constrain its application for social good.
  • The United Nations’ Sustainable Development Goals (SDGs) are among the best-known and most frequently cited societal challenges, and our use cases map to all 17 of the goals, supporting some aspect of each one (Exhibit 3). Our use-case library does not rest on the taxonomy of the SDGs, because their goals, unlike ours, are not directly related to AI usage; about 20 cases in our library do not map to the SDGs at all. The chart should not be read as a comprehensive evaluation of AI’s potential for each SDG; if an SDG has a low number of cases, that reflects our library rather than AI’s applicability to that SDG.
Steve Bosserman

Want job security in the AI era? Pick a career that has a human touch computers can't o... - 0 views

  • AI tools will help creative people be more creative and strategic people be more strategic, so core people can actually be more human, Lee said. "Jobs like doctors will require more EQ [emotional intelligence], more compassion, more human-to-human interaction, while AI takes over more the analytical, diagnostic work."
  • "We see AI changing 90 percent of the work people do," Daugherty said. "Fifteen percent of jobs will be completely automated and replaced. But the majority of jobs will be improved."
  • "There is a lot of a counterweight of investors who really care about this stuff," said Paula Goldman, leader of the Tech and Society Solutions Lab at the Omidyar Network, citing the potential for indices that track how well companies follow best practices. "You can reframe [AI response] as a business risk."
Steve Bosserman

We Let Tech Companies Frame the Debate Over AI Ethics. Big Mistake. - 0 views

  • With such ubiquity comes power and influence. And along with the technology’s benefits come worries over privacy and personal freedom. Yes, AI can take some of the time and effort out of decision-making. But if you are a woman, a person of color, or a member of some other unlucky marginalized group, it has the ability to codify and worsen the inequalities you already face. This darker side of AI has led policymakers such as U.S. Senator Kamala Harris to advocate for more careful consideration of the technology’s risks.
  • Today, we can see a similar pattern emerging with the deployment of AI. Good intentions are nice, but they must account for and accommodate a true diversity of perspectives. How can we trust the ethics panels of AI companies to take adequate care of the needs of people of color, queer people, and other marginalized communities if we don’t even know who is making the decisions? It’s simple: we can’t and we shouldn’t.
Steve Bosserman

How We Made AI As Racist and Sexist As Humans - 0 views

  • Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.
  • The majority of AI systems used in commercial applications—the ones that mediate our access to services like jobs, credit, and loans— are proprietary, their algorithms and training data kept hidden from public view. That makes it exceptionally difficult for an individual to interrogate the decisions of a machine or to know when an algorithm, trained on historical examples checkered by human bias, is stacked against them. And forget about trying to prove that AI systems may be violating human rights legislation.
  • Data is essential to the operation of an AI system. And the more complicated the system—the more layers in the neural nets, to translate speech or identify faces or calculate the likelihood someone defaults on a loan—the more data must be collected.
  • ...8 more annotations...
  • The power of the system is its “ability to recognize that correlations occur between gender and professions,” says Kathryn Hume. “The downside is that there’s no intentionality behind the system—it’s just math picking up on correlations. It doesn’t know this is a sensitive issue.” There’s a tension between the futuristic and the archaic at play in this technology. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them.
  • And sometimes, even when ample data exists, those who build the training sets don’t take deliberate measures to ensure its diversity
  • But not everyone will be equally represented in that data.
  • Accordingly, groups that have been the target of systemic discrimination by institutions that include police forces and courts don’t fare any better when judgment is handed over to a machine.
  • A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias.
  • Still, algorithmic interventions only do so much; addressing bias also demands diversity in the programmers who are training machines in the first place.
  • A growing awareness of algorithmic bias isn’t only a chance to intervene in our approaches to building AI systems. It’s an opportunity to interrogate why the data we’ve created looks like this and what prejudices continue to shape a society that allows these patterns in the data to emerge.
  • Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data.
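Kathryn Hume's point above, that the system is "just math picking up on correlations," can be shown with a toy co-occurrence count. The corpus and the resulting probabilities are invented for illustration; real systems learn the same kind of association from much larger text collections.

```python
from collections import Counter

# A tiny invented corpus with a gender-profession skew.
corpus = [
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
    "he is an engineer", "he is an engineer", "he is an engineer",
    "she is an engineer",
]

# Count how often each profession co-occurs with each pronoun.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, profession = words[0], words[-1]
    counts[(profession, pronoun)] += 1

def association(profession):
    """P(she | profession) from raw counts: the 'gender' the math assigns a job."""
    she = counts[(profession, "she")]
    he = counts[(profession, "he")]
    return she / (she + he)

print(f"P(she | nurse)    = {association('nurse'):.2f}")     # -> 0.75
print(f"P(she | engineer) = {association('engineer'):.2f}")  # -> 0.25
```

Nothing in the counting procedure knows that gender is a sensitive attribute; the skew in the output is exactly the skew in the data, which is why "get better data" is offered as the fundamental fix.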
Steve Bosserman

20 top lawyers were beaten by legal AI. Here are their surprising responses - 0 views

  •  
    "The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes. The study made waves around the world and was covered across global media."
Bill Fulkerson

Proteins Unfolded - 0 views

  •  
    Artificial intelligence (AI) has solved one of biology's grand challenges: predicting how proteins curl up from a linear chain of amino acids into 3D shapes that allow them to carry out life's tasks. Today, leading structural biologists and organizers of a biennial protein folding competition announced the achievement by researchers at DeepMind, a U.K.-based AI company. They say the DeepMind method will have far-reaching effects, among them dramatically speeding the creation of new medications.
Steve Bosserman

How AI will change democracy - 0 views

  • AI systems could play a part in democracy while remaining subordinate to traditional democratic processes like human deliberation and human votes. And they could be made subject to the ethics of their human masters. It should not be necessary for citizens to surrender their moral judgment if they don’t wish to.
  • There are nevertheless serious objections to the idea of AI Democracy. Foremost among them is the transparency objection: can we really call a system democratic if we don’t really understand the basis of the decisions made on our behalf? Although AI Democracy could make us freer or more prosperous in our day-to-day lives, it would also rather enslave us to the systems that decide on our behalf. One can see Pericles shaking his head in disgust.
  • In the past humans were prepared, in the right circumstances, to surrender their political affairs to powerful unseen intelligences. Before they had kings, the Hebrews of the Old Testament lived without earthly politics. They were subject only to the rule of God Himself, bound by the covenant that their forebears had sworn with Him. The ancient Greeks consulted omens and oracles. The Romans looked to the stars. These practices now seem quaint and faraway, inconsistent with what we know of rationality and the scientific method. But they prompt introspection. How far are we prepared to go–what are we prepared to sacrifice–to find a system of government that actually represents the people?
Steve Bosserman

UK can lead the way on ethical AI, says Lords Committee - News from Parliament - UK Par... - 0 views

  • AI Code: One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally and internationally. The Committee's suggested five principles for such a code are: (1) artificial intelligence should be developed for the common good and benefit of humanity; (2) artificial intelligence should operate on principles of intelligibility and fairness; (3) artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities; (4) all citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence; (5) the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.