
Dystopias / Group items tagged: monopoly

Ed Webb

Do Not Pass Go | On the Media | WNYC Studios - 0 views

  • How the multies win
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • ...18 more annotations...
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements, joining forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and the common good. Adopt a ‘moonshot mentality’ to build inclusive, decentralized intelligent digital networks ‘imbued with empathy’ that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

Could fully automated luxury communism ever work? - 0 views

  • Having achieved a seamless, pervasive commodification of online sociality, Big Tech companies have turned their attention to infrastructure. Attempts by Google, Amazon and Facebook to achieve market leadership, in everything from AI to space exploration, risk a future defined by the battle for corporate monopoly.
  • The technologies are coming. They’re already here in certain instances. It’s the politics that surrounds them. We have alternatives: we can have public ownership of data in the citizen’s interest or it could be used as it is in China where you have a synthesis of corporate and state power
  • the two alternatives that big data allows is an all-consuming surveillance state where you have a deep synthesis of capitalism with authoritarian control, or a reinvigorated welfare state where more and more things are available to everyone for free or very low cost
  • ...4 more annotations...
  • we can’t begin those discussions until we say, as a society, we want to at least try subordinating these potentials to the democratic project, rather than allow capitalism to do what it wants
  • I say in FALC that this isn’t a blueprint for utopia. All I’m saying is that there is a possibility for the end of scarcity, the end of work, a coming together of leisure and labour, physical and mental work. What do we want to do with it? It’s perfectly possible something different could emerge where you have this aggressive form of social value.
  • I think the thing that's been beaten out of everyone since 2010 is one of the prevailing tenets of neoliberalism: work hard and you can be whatever you want to be, you'll get a job, be well paid and enjoy yourself. In 2010, that disappeared overnight; the rules of the game changed. For the status quo to continue to administer itself, it had to change common sense. You see this with Jordan Peterson; he's saying you have to know your place and that's what will make you happy. To me that's the only future for conservative thought: how else do you mediate the inequality and unhappiness?
  • I don’t think we can rapidly decarbonise our economies without working people understanding that it’s in their self-interest. A green economy means better quality of life. It means more work. Luxury populism feeds not only into the green transition, but the rollout of Universal Basic Services and even further.
Ed Webb

Interoperability And Privacy: Squaring The Circle | Techdirt - 0 views

  • if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook.
  • Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden
  • rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work with some small changes in configurations. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem... without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.
  • ...13 more annotations...
  • the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge
  • Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory that allowed it to shut down Power Ventures, a competitor that allowed Facebook's users to use multiple social networks from a single interface. Facebook argued that by allowing users to log in and display Facebook with a different interface, even after receipt of a cease and desist letter telling Power Ventures to stop, the company had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, upsetting Facebook's investors made their conduct illegal.
  • Today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and even supply you with the tools you need to target those people.
  • Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook vegans get out of the Facebook pool while still leaving a toe in its waters.
  • In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers
  • the audience for a legitimate adversarial interoperability product is the customers of the existing service that it connects to.
  • anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.
  • If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they can avoid much (but not all) of this surveillance.
  • Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet). Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they're not on Facebook, by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service.
  • As Facebook's market power dwindled, so would the pressure that web publishers feel to embed Facebook trackers on their sites, so that non-Facebook users would not be as likely to be tracked as they use the Web.
  • Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.
  • For users who have privacy needs -- and other needs -- beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individuals) who are willing to fill those needs.
  • helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet