Home / Dystopias / Group items tagged tools

Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign p... - 0 views

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “The entire strategic game structure will be completely out of balance.”
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had “many technological tools” to help it make decisions. There was, however, no specific information on AI that could be shared with the public,
  • The system, also known as geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of the AI ‘policymaker’ is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.”
Ed Webb

The migrant caravan "invasion" and America's epistemic crisis - Vox - 0 views

  • The intensity of belief on the right has begun to vary inversely with plausibility. Precisely because the “threat” posed by the caravan is facially absurd, believing in it — performing belief in it — is a powerful act of shared identity reinforcement, of tribal solidarity.
    • Ed Webb
       
      See (obviously) Orwell. Also Lisa Wedeen's great book on totalitarianism in Syria, Ambiguities of Domination.
  • Once that support system is in place, Trump is unbound, free to impose his fantasies on reality. He can campaign on Republicans protecting people with preexisting conditions even as the GOP sues to block such protections. He can brush off Mueller’s revelations and fire anyone who might threaten him. He can use imaginary Democratic voter fraud to cover up red-state voter suppression. He can use antifa as a pretext for deploying troops domestically.
  • Trump does not view himself as president of the whole country. He views himself as president of his white nationalist party — their leader in a war on liberals. He has all the tools of a head of state with which to prosecute that war. Currently, he is restrained only by the lingering professionalism of public servants and a few thin threads of institutional inertia.
  • The epistemic crisis Trump has accelerated is now morphing into a full-fledged crisis of democracy.
  • As Voltaire famously put it: “Anyone who has the power to make you believe absurdities has the power to make you commit injustices.”
  • The right, in all its organs, from social media to television to the president, is telling a well-worn, consistent story: Opposition from the left and Democrats is fraudulent, illegitimate, a foreign-funded conspiracy against the traditional white American way of life.
  • Having two versions of reality constantly clashing in public is cognitively and emotionally exhausting. To an average person following the news, the haze of charge and countercharge is overwhelming. And that is precisely what every autocrat wants.
  • every aspiring tyrant in modern history has made the independent media his first target
  • Then they go after the courts, the security services, and the military. Once they have a large base of support that will believe whatever they proclaim, follow them anywhere, support them in anything — it doesn’t have to be a majority, just an intense, activated minority — they can, practically speaking, get away with anything.
Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Gua... - 0 views

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including, in ESPN’s case, the decision not to show you advertising at all if your team is losing.
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the number of times anyone actually clicks on an ad) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds (see the worked example after this list).
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways, according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So, instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
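
A quick worked example of the arithmetic behind the two click-through figures above (a sketch: the ~0.4% baseline and the 40% relative lift come from the annotations, while the impression count is an assumed round number):

```python
# Illustrative only: relative vs. absolute improvement in click-through rate (CTR).
# Baseline CTR (~0.4%) and the 40% relative lift are taken from the annotations above;
# the impression count is a hypothetical round number.
baseline_ctr = 0.004                  # ~0.4% average click-through rate
relative_lift = 0.40                  # "improved the click-through rate by 40%"
mood_targeted_ctr = baseline_ctr * (1 + relative_lift)

impressions = 100_000                 # assumed number of ad impressions
extra_clicks = impressions * (mood_targeted_ctr - baseline_ctr)

print(f"Mood-targeted CTR: {mood_targeted_ctr:.2%}")                        # -> 0.56%
print(f"Extra clicks per {impressions:,} impressions: {extra_clicks:.0f}")  # -> 160
```

In absolute terms the lift is roughly 16 extra clicks per 10,000 impressions, which is why the gross number is less impressive than the 40% headline suggests.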
Ed Webb

Scientific blinders: Learning from the moral failings of Nazi physicists - Bulletin of ... - 0 views

  • As the evening progressed, more and more questions concerning justice and ethics occurred to the physicists: Are atomic weapons inherently inhumane, and should they never be used? If the Germans had come to possess such weapons, what would be the world’s fate? What constitutes real patriotism in Nazi Germany—working for the regime’s success, or its defeat? The scientists expressed surprise and bafflement at their colleagues’ opinions, and their own views sometimes evolved from one moment to the next. The scattered, changing opinions captured in the Farm Hall transcripts highlight that, in their five years on the Nazi nuclear program, the German physicists had likely failed to wrestle meaningfully with these critical questions.
  • looking back at the Uranium Club serves to remind us scientists of how easy it is to focus on technical matters and avoid considering moral ones. This is especially true when the moral issues are perplexing, when any negative impacts seem distant, and when the science is exciting.
  • engineers who develop tracking or facial-recognition systems may be creating tools that can be purchased by repressive regimes intent on spying on and suppressing dissent. Accordingly, those researchers have a certain obligation to consider their role and the impact of their work.
  • reflecting seriously on the societal context of a research position may prompt a scientist to accept the job—and to take it upon herself or himself to help restrain unthinking innovation at work, by raising questions about whether every feature that can be added should in fact be implemented. (The same goes for whether certain lines of research should be pursued and particular findings published.)
  • The challenge for each of us, moving forward, is to ask ourselves and one another, hopefully far earlier in the research process than did Germany’s Walther Gerlach: “What are we working for?”
  • If you get the opportunity, see, or at least read, the plays The Physicists (Die Physiker) by Friedrich Dürrenmatt and Copenhagen by Michael Frayn.
Ed Webb

Smartphones are making us stupid - and may be a 'gateway drug' | The Lighthouse - 0 views

  • rather than making us smarter, mobile devices reduce our cognitive ability in measurable ways
  • “There’s lots of evidence showing that the information you learn on a digital device, doesn’t get retained very well and isn’t transferred across to the real world,”
  • “You’re also quickly conditioned to attend to lots of attention-grabbing signals, beeps and buzzes, so you jump from one task to the other and you don’t concentrate.”
  • Not only do smartphones affect our memory and our concentration, research shows they are addictive – to the point where they could be a ‘gateway drug’ making users more vulnerable to other addictions.
  • Smartphones are also linked to reduced social interaction, inadequate sleep, poor real-world navigation, and depression.
  • “The more time that kids spend on digital devices, the less empathetic they are, and the less they are able to process and recognise facial expressions, so their ability to actually communicate with each other is decreased.”
  • “Casino-funded research is designed to keep people gambling, and app software developers use exactly the same techniques. They have lots of buzzes and icons so you attend to them, they have things that move and flash so you notice them and keep your attention on the device.”
  • Around 90 per cent of US university students are thought to experience ‘phantom vibrations', so the researcher took a group to a desert location with no cell reception – and found that even after four days, around half of the students still thought their pocket was buzzing with Facebook or text notifications.
  • “Collaboration is a buzzword with software companies who are targeting schools to get kids to use these collaboration tools on their iPads – but collaboration decreases when you're using these devices,”
  • “All addiction is based on the same craving for a dopamine response, whether it's drug, gambling, alcohol or phone addiction,” he says. “As the dopamine response drops off, you need to increase the amount you need to get the same result, you want a little bit more next time. Neurologically, they all look the same.” “We know – there are lots of studies on this – that once we form an addiction to something, we become more vulnerable to other addictions. That’s why there’s concerns around heavy users of more benign, easily-accessed drugs like alcohol and marijuana as there’s some correlation with usage of more physically addictive drugs like heroin, and neurological responses are the same.”
  • parents can also fall victim to screens which distract from their child’s activities or conversations, and most adults will experience this with friends and family members too.
  • “We also know that if you learn something on an iPad you are less likely to be able to transfer that to another device or to the real world,”
  • a series of studies have tested this with children who learn to construct a project with ‘digital’ blocks and then try the project with real blocks. “They can’t do it - they start from zero again,”
  • “Our brains can’t actually multitask, we have to switch our attention from one thing to another, and each time you switch, there's a cost to your attentional resources. After a few hours of this, we become very stressed.” That also causes us to forget things
  • A study from Norway recently tested how well kids remembered what they learned on screens. One group of students received information on a screen and were asked to memorise it; the second group received the same information on paper. Both groups were tested on their recall. Unsurprisingly, the children who received the paper version remembered more of the material. But the children with the electronic version were also found to be more stressed,
  • The famous ‘London taxi driver experiments’ found that memorising large maps caused the hippocampus to expand in size. Williams says that the reverse is going to happen if we don’t use our brain and memory to navigate. “Our brains are just like our muscles. We ‘use it or lose it’ – in other words, if we use navigation devices for directions rather than our brains, we will lose that ability.”
  • numerous studies also link smartphone use with sleeplessness and anxiety. “Some other interesting research has shown that the more friends you have on social media, the less friends you are likely to have in real life, the less actual contacts you have and the greater likelihood you have of depression,”
  • 12-month-old children whose carers regularly use smartphones have poorer facial expression perception
  • turning off software alarms and notifications, putting strict time limits around screen use, keeping screens out of bedrooms, minimising social media and replacing screens with paper books, paper maps and other non-screen activities can all help minimise harm from digital devices including smartphones
Ed Webb

What we still haven't learned from Gamergate - Vox - 0 views

  • Harassment and misogyny had been problems in the community for years before this; the deep resentment and anger toward women that powered Gamergate percolated for years on internet forums. Robert Evans, a journalist who specializes in extremist communities and the host of the Behind the Bastards podcast, described Gamergate to me as partly organic and partly born out of decades-long campaigns by white supremacists and extremists to recruit heavily from online forums. “Part of why Gamergate happened in the first place was because you had these people online preaching to these groups of disaffected young men,” he said. But what Gamergate had that those previous movements didn’t was an organized strategy, made public, cloaking itself as a political movement with a flimsy philosophical stance, its goals and targets amplified by the power of Twitter and a hashtag.
  • The hate campaign, we would later learn, was the moment when our ability to repress toxic communities and write them off as just “trolls” began to crumble. Gamergate ultimately gave way to something deeper, more violent, and more uncontrollable.
  • Police have to learn how to keep the rest of us safe from internet mobs
  • the justice system continues to be slow to understand the link between online harassment and real-life violence
  • In order to increase public safety this decade, it is imperative that police — and everyone else — become more familiar with the kinds of communities that engender toxic, militant systems of harassment, and the online and offline spaces where these communities exist. Increasingly, that means understanding social media’s dark corners, and the types of extremism they can foster.
  • Businesses have to learn when online outrage is manufactured
  • There’s a difference between organic outrage that arises because an employee actually does something outrageous, and invented outrage that’s an excuse to harass someone whom a group has already decided to target for unrelated reasons — for instance, because an employee is a feminist. A responsible business would ideally figure out which type of outrage is occurring before it punished a client or employee who was just doing their job.
  • Social media platforms didn’t learn how to shut down disingenuous conversations over ethics and free speech before they started to tear their cultures apart
  • Dedication to free speech over the appearance of bias is especially important within tech culture, where a commitment to protecting free speech is both a banner and an excuse for large corporations to justify their approach to content moderation — or lack thereof.
  • Reddit’s free-speech-friendly moderation stance resulted in the platform tacitly supporting pro-Gamergate subforums like r/KotakuInAction, which became a major contributor to Reddit’s growing alt-right community. Twitter rolled out a litany of moderation tools in the wake of Gamergate, intended to allow harassment targets to perpetually block, mute, and police their own harassers — without actually effectively making the site unwelcome for the harassers themselves. And YouTube and Facebook, with their algorithmic amplification of hateful and extreme content, made no effort to recognize the violence and misogyny behind pro-Gamergate content, or police them accordingly.
  • All of these platforms are wrestling with problems that seem to have grown beyond their control; it’s arguable that if they had reacted more swiftly to slow the growth of the internet’s most toxic and misogynistic communities back when those communities, particularly Gamergate, were still nascent, they could have prevented headaches in the long run — and set an early standard for how to deal with ever-broadening issues of extremist content online.
  • Violence against women is a predictor of other kinds of violence. We need to acknowledge it.
  • Somehow, the idea that all of that sexism and anti-feminist anger could be recruited, harnessed, and channeled into a broader white supremacist movement failed to generate any real alarm, even well into 2016
  • many of the perpetrators of real-world violence are radicalized online first
  • It remains difficult for many to accept the throughline from online abuse to real-world violence against women, much less the fact that violence against women, online and off, is a predictor of other kinds of real-world violence
  • Politicians and the media must take online “ironic” racism and misogyny seriously
  • Gamergate masked its misogyny in a coating of shrill yelling that had most journalists in 2014 writing off the whole incident as “satirical” and immature “trolling,” and very few correctly predicting that Gamergate’s trolling was the future of politics
  • Gamergate was all about disguising a sincere wish for violence and upheaval by dressing it up in hyperbole and irony in order to confuse outsiders and make it all seem less serious.
  • Gamergate simultaneously masqueraded as legitimate concern about ethics that demanded audiences take it seriously, and as total trolling that demanded audiences dismiss it entirely. Both these claims served to obfuscate its real aim — misogyny, and, increasingly, racist white supremacy
  • The public’s failure to understand and accept that the alt-right’s misogyny, racism, and violent rhetoric is serious goes hand in hand with its failure to understand and accept that such rhetoric is identical to that of President Trump
  • deploying offensive behavior behind a guise of mock outrage, irony, trolling, and outright misrepresentation, in order to mask the sincere extremism behind the message.
  • many members of the media, politicians, and members of the public still struggle to accept that Trump’s rhetoric is having violent consequences, despite all evidence to the contrary.
  • The movement’s insistence that it was about one thing (ethics in journalism) when it was about something else (harassing women) provided a case study for how extremists would proceed to drive ideological fissures through the foundations of democracy: by building a toxic campaign of hate beneath a veneer of denial.
Ed Webb

The Coronavirus and Our Future | The New Yorker - 0 views

  • I’ve spent my life writing science-fiction novels that try to convey some of the strangeness of the future. But I was still shocked by how much had changed, and how quickly.
  • the change that struck me seemed more abstract and internal. It was a change in the way we were looking at things, and it is still ongoing. The virus is rewriting our imaginations. What felt impossible has become thinkable. We’re getting a different sense of our place in history. We know we’re entering a new world, a new era. We seem to be learning our way into a new structure of feeling.
  • The Anthropocene, the Great Acceleration, the age of climate change—whatever you want to call it, we’ve been out of synch with the biosphere, wasting our children’s hopes for a normal life, burning our ecological capital as if it were disposable income, wrecking our one and only home in ways that soon will be beyond our descendants’ ability to repair. And yet we’ve been acting as though it were 2000, or 1990—as though the neoliberal arrangements built back then still made sense. We’ve been paralyzed, living in the world without feeling it.
  • We realize that what we do now, well or badly, will be remembered later on. This sense of enacting history matters. For some of us, it partly compensates for the disruption of our lives.
  • Actually, we’ve already been living in a historic moment. For the past few decades, we’ve been called upon to act, and have been acting in a way that will be scrutinized by our descendants. Now we feel it. The shift has to do with the concentration and intensity of what’s happening. September 11th was a single day, and everyone felt the shock of it, but our daily habits didn’t shift, except at airports; the President even urged us to keep shopping. This crisis is different. It’s a biological threat, and it’s global. Everyone has to change together to deal with it. That’s really history.
  • There are 7.8 billion people alive on this planet—a stupendous social and technological achievement that’s unnatural and unstable. It’s made possible by science, which has already been saving us. Now, though, when disaster strikes, we grasp the complexity of our civilization—we feel the reality, which is that the whole system is a technical improvisation that science keeps from crashing down
  • Today, in theory, everyone knows everything. We know that our accidental alteration of the atmosphere is leading us into a mass-extinction event, and that we need to move fast to dodge it. But we don’t act on what we know. We don’t want to change our habits. This knowing-but-not-acting is part of the old structure of feeling.
  • remember that you must die. Older people are sometimes better at keeping this in mind than younger people. Still, we’re all prone to forgetting death. It never seems quite real until the end, and even then it’s hard to believe. The reality of death is another thing we know about but don’t feel.
  • it is the first of many calamities that will likely unfold throughout this century. Now, when they come, we’ll be familiar with how they feel.
  • water shortages. And food shortages, electricity outages, devastating storms, droughts, floods. These are easy calls. They’re baked into the situation we’ve already created, in part by ignoring warnings that scientists have been issuing since the nineteen-sixties
  • Imagine what a food scare would do. Imagine a heat wave hot enough to kill anyone not in an air-conditioned space, then imagine power failures happening during such a heat wave.
  • science fiction is the realism of our time
  • Science-fiction writers don’t know anything more about the future than anyone else. Human history is too unpredictable; from this moment, we could descend into a mass-extinction event or rise into an age of general prosperity. Still, if you read science fiction, you may be a little less surprised by whatever does happen. Often, science fiction traces the ramifications of a single postulated change; readers co-create, judging the writers’ plausibility and ingenuity, interrogating their theories of history. Doing this repeatedly is a kind of training. It can help you feel more oriented in the history we’re making now. This radical spread of possibilities, good to bad, which creates such a profound disorientation; this tentative awareness of the emerging next stage—these are also new feelings in our time.
  • Do we believe in science? Go outside and you’ll see the proof that we do everywhere you look. We’re learning to trust our science as a society. That’s another part of the new structure of feeling.
  • This mixture of dread and apprehension and normality is the sensation of plague on the loose. It could be part of our new structure of feeling, too.
  • there are charismatic mega-ideas. “Flatten the curve” could be one of them. Immediately, we get it. There’s an infectious, deadly plague that spreads easily, and, although we can’t avoid it entirely, we can try to avoid a big spike in infections, so that hospitals won’t be overwhelmed and fewer people will die. It makes sense, and it’s something all of us can help to do. When we do it—if we do it—it will be a civilizational achievement: a new thing that our scientific, educated, high-tech species is capable of doing. Knowing that we can act in concert when necessary is another thing that will change us.
  • People who study climate change talk about “the tragedy of the horizon.” The tragedy is that we don’t care enough about those future people, our descendants, who will have to fix, or just survive on, the planet we’re now wrecking. We like to think that they’ll be richer and smarter than we are and so able to handle their own problems in their own time. But we’re creating problems that they’ll be unable to solve. You can’t fix extinctions, or ocean acidification, or melted permafrost, no matter how rich or smart you are. The fact that these problems will occur in the future lets us take a magical view of them. We go on exacerbating them, thinking—not that we think this, but the notion seems to underlie our thinking—that we will be dead before it gets too serious. The tragedy of the horizon is often something we encounter, without knowing it, when we buy and sell. The market is wrong; the prices are too low. Our way of life has environmental costs that aren’t included in what we pay, and those costs will be borne by our descendants. We are operating a multigenerational Ponzi scheme.
  • We’ve decided to sacrifice over these months so that, in the future, people won’t suffer as much as they would otherwise. In this case, the time horizon is so short that we are the future people.
  • Amid the tragedy and death, this is one source of pleasure. Even though our economic system ignores reality, we can act when we have to. At the very least, we are all freaking out together. To my mind, this new sense of solidarity is one of the few reassuring things to have happened in this century. If we can find it in this crisis, to save ourselves, then maybe we can find it in the big crisis, to save our children and theirs.
  • Thatcher said that “there is no such thing as society,” and Ronald Reagan said that “government is not the solution to our problem; government is the problem.” These stupid slogans marked the turn away from the postwar period of reconstruction and underpin much of the bullshit of the past forty years
  • We are individuals first, yes, just as bees are, but we exist in a larger social body. Society is not only real; it’s fundamental. We can’t live without it. And now we’re beginning to understand that this “we” includes many other creatures and societies in our biosphere and even in ourselves. Even as an individual, you are a biome, an ecosystem, much like a forest or a swamp or a coral reef. Your skin holds inside it all kinds of unlikely coöperations, and to survive you depend on any number of interspecies operations going on within you all at once. We are societies made of societies; there are nothing but societies. This is shocking news—it demands a whole new world view.
  • It’s as if the reality of citizenship has smacked us in the face.
  • The neoliberal structure of feeling totters. What might a post-capitalist response to this crisis include? Maybe rent and debt relief; unemployment aid for all those laid off; government hiring for contact tracing and the manufacture of necessary health equipment; the world’s militaries used to support health care; the rapid construction of hospitals.
  • If the project of civilization—including science, economics, politics, and all the rest of it—were to bring all eight billion of us into a long-term balance with Earth’s biosphere, we could do it. By contrast, when the project of civilization is to create profit—which, by definition, goes to only a few—much of what we do is actively harmful to the long-term prospects of our species.
  • Economics is a system for optimizing resources, and, if it were trying to calculate ways to optimize a sustainable civilization in balance with the biosphere, it could be a helpful tool. When it’s used to optimize profit, however, it encourages us to live within a system of destructive falsehoods. We need a new political economy by which to make our calculations. Now, acutely, we feel that need.
  • We’ll remember this even if we pretend not to. History is happening now, and it will have happened. So what will we do with that?
  • How we feel is shaped by what we value, and vice versa. Food, water, shelter, clothing, education, health care: maybe now we value these things more, along with the people whose work creates them. To survive the next century, we need to start valuing the planet more, too, since it’s our only home.
Ed Webb

The Biggest Social Media Operation You've Never Heard Of Is Run Out of Cyprus by Russia... - 0 views

  • The vast majority of the company’s content is apolitical—and that is certainly the way the company portrays itself.
  • But here’s the thing: TheSoul Publishing also posts history videos with a strong political tinge. Many of these videos are overtly pro-Russian. One video posted on Feb. 17, 2019, on the channel Smart Banana, which typically posts listicles and history videos, claims that Ukraine is part of Russia
  • the video gives a heavily sanitized version of Josef Stalin’s time in power and, bizarrely, suggests that Alaska was given to the United States by Soviet leader Nikita Khrushchev
  • The video ends by displaying a future vision of Russian expansion that includes most of Europe (notably not Turkey), the Middle East and Asia
  • According to Nox Influencer, Bright Side alone is earning between $314,010 and $971,950 monthly, and 5-Minute Crafts is earning between $576,640 and $1,780,000 monthly through YouTube partner earning estimates. As a privately held company, TheSoul Publishing doesn’t have to disclose its earnings. But all the Cypriot-managed company has to do to earn money from YouTube is meet viewing thresholds and have an AdSense account. AdSense, a Google product, just requires that a company have a bank account, an email address and a phone number. To monetize to this magnitude of revenue, YouTube may have also collected tax information, if the TheSoul Publishing organization is conducting what it defines as “U.S. activities.” It’s also possible that YouTube verified a physical address by sending a pin mailer.
  • According to publicly available information from the YouTube channels themselves—information provided to YouTube by the people who set up and operate the channels at TheSoul Publishing—as of August 2019, 21 of the 35 channels connected to TheSoul Publishing claim to be based in the U.S. Ten of the channels had no country listed. Zodiac Maniac was registered in the U.K, though TheSoul Publishing emphasizes that all of its operations are run out of Cyprus.
  •  Now I’ve Seen Everything was the only channel registered in the Russian Federation. That channel has more than 400 million views, which, according to the analytics tool Nox Influencer, come from a range of countries, including Russia and Eastern European and Central Asian countries—despite being an English-language channel
  • In another video on Smart Banana, which has more than 1 million views, the titular banana speculates on “12 Countries That May Not Survive the Next 20 Years”—including the United States, which the video argues may collapse because of political infighting and diverse political viewpoints
  • Facebook pages are not a direct way to increase profit unless a company is actively marketing merchandise or sales, which TheSoul Publishing does not appear to do. The pages coordinate posting, so one post will often appear on a number of different pages. To a digital advertiser, this makes perfect sense as a way to increase relevance and visibility, but it’s far from obvious what TheSoul Publishing might be advertising. Likewise, there’s no obvious financial benefit to posting original videos within Facebook. The company did not meaningfully clarify its Facebook strategy in response to questions on the subject.
  • Facebook forbids what it describes as “coordinated inauthentic behavior,” as its head of cybersecurity describes in this video. While TheSoul Publishing’s behavior is clearly coordinated, it is unclear that any of its behavior is inauthentic based on information I have reviewed.
  • One thing that TheSoul is definitely doing on Facebook, however, is buying ads—and, at least sometimes, it’s doing so in rubles on issues of national importance, targeting audiences in the United States. The page Bright Side has 44 million followers and currently lists no account administrators located in the United States, but as of Aug. 8, 2019, it had them in Cyprus, Russia, the United Kingdom, El Salvador, India, Ukraine and in locations “Not Available.” It used Facebook to post six political advertisements paid for in the Russian currency.
  • the point here is not that the ad buy is significant in and of itself. The point, rather, is that the company has developed a massive social media following and has a history of at least experimenting with distributing both pro-Russian and paid political content to that following
  • TheSoul’s political ads included the one below. The advertisement pushes viewers to an article about how “wonderful [it is] that Donald Trump earns less in a year than you do in a month.” The advertisement reached men, women, and people of unknown genders over the age of 18, and began running on May 15, 2018. TheSoul Publishing spent less than a dollar on this advertisement, raising the question: why bother advertising at all?
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Lack of Transparency over Police Forces' Covert Use of Predictive Policing Software Rai... - 0 views

  • Currently, through the use of blanket exemption clauses – and without any clear legislative oversight – public access to information on systems that may be being used to surveil them remains opaque. Companies including Palantir, NSO Group, QuaDream, Dark Matter and Gamma Group are all exempt from disclosure under the precedent set by the police, along with another entity, Dataminr.
  • [Dataminr] has helped police in the US monitor and break up Black Lives Matter and Muslim rights activism through social media monitoring. Dataminr software has also been used by the Ministry of Defence, the Foreign, Commonwealth and Development Office, and the Cabinet Office,
  • New research shows that, far from being a ‘neutral’ observational tool, Dataminr produces results that reflect its clients’ politics, business goals and ways of operating.
  • teaching the software to associate certain kinds of images, text and hashtags with a ‘dangerous’ protest results in politically and racially-biased definitions of what dangerous protests look like. This is because, to make these predictions, the system has to decide whether the event resembles other previous events that were labelled ‘dangerous’ – for example, past BLM protests (a toy sketch of this mechanism follows after this list).
  • When in 2016 the ACLU proved that Dataminr’s interventions were contributing to racist policing, the company was subsequently banned from granting fusion centres in the US direct access to Twitter’s API. Fusion centres are state-owned and operated facilities and serve as focal points to gather, analyse and redistribute intelligence among state, local, tribal and territorial (SLTT), federal and private sector partners to detect criminal and terrorist activity.  However, US law enforcement found  a way around these limitations by continuing to receive Dataminr alerts outside of fusion centres.
  • Use of these technologies has, in the past, not been subject to public consultation and, without basic scrutiny at either a public or legislative level, there remains no solid mechanism for independent oversight of their use by law enforcement.
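
A minimal, hypothetical sketch of the mechanism described in the annotations above: if past events carrying certain hashtags were labelled ‘dangerous’ by analysts, a model trained on those labels will flag new events with the same hashtags, whatever actually happens at them. All data, labels and names below are invented for illustration and are not drawn from Dataminr.

```python
# Toy illustration (invented data): label bias in the training set propagates
# directly into the model's predictions about new events.
from collections import Counter

# Hypothetical historical training data: (hashtags seen, analyst label)
training = [
    ({"#BLM", "#protest"}, "dangerous"),
    ({"#BLM", "#justice"}, "dangerous"),
    ({"#parade", "#downtown"}, "safe"),
    ({"#festival", "#music"}, "safe"),
]

# Count how often each hashtag appeared under each label
counts = {"dangerous": Counter(), "safe": Counter()}
for tags, label in training:
    counts[label].update(tags)

def classify(tags):
    """Crude 'resembles past dangerous events' decision, as described in the text."""
    dangerous_score = sum(counts["dangerous"][t] for t in tags)
    safe_score = sum(counts["safe"][t] for t in tags)
    return "dangerous" if dangerous_score > safe_score else "safe"

# A new, peaceful vigil inherits the 'dangerous' label purely from its hashtags.
print(classify({"#BLM", "#vigil"}))  # -> dangerous
```

The point of the sketch is only that ‘dangerous’ means whatever the historical labels say it means; no measure of actual risk enters the calculation.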