Ed Webb

Why Doesn't Anyone Pay Attention Anymore? | HASTAC

  • We also need to distinguish what scientists know about human neurophysiology from our all-too-human discomfort with cultural and social change.  I've been an English professor for over twenty years and have heard how students don't pay attention, can't read a long novel anymore, and are in decline against some unspecified norm of an idealized past quite literally every year that I have been in this profession. In fact, how we educators should address this dire problem was the focus of the very first faculty meeting I ever attended.
  • Whenever I hear about attentional issues in debased contemporary society, whether blamed on television, VCR's, rock music, or the desktop, I assume that the critic was probably, like me, the one student who actually read Moby Dick and who had little awareness that no one else did.
  • This is not really a discussion about the biology of attention; it is about the sociology of change.
  • The brain is always changed by what it does.  That's how we learn, from infancy on, and that's how a baby born in New York has different cultural patterns of behavior, language, gesture, interaction, socialization, and attention than a baby born the same day in Beijing. That's as true for the historical moment into which we are born as it is for the geographical location.  Our attention is shaped by all we do, and reshaped by all we do.  That is what learning is.  The best we can do as educators is find ways to improve our institutions of learning to help our kids be prepared for their future--not for our past.
  • I didn't find the article nearly as stigmatizing and retrograde as I do the knee-jerk Don't Tread on Me reactions of everyone I've seen respond--most of which amount to foolish technolibertarian celebrations of the anonymous savior Technology (Cathy, you don't do that there, even if you also have nothing good to say about the NYT piece).If anything, the article showed that these kids (like all of us!) are profoundly distressed by today's media ecology. They seem to have a far more subtle perspective on things than most others. Frankly I'm a bit gobsmacked that everyone hates this article so much. As for the old chestnut that "we need new education for the information age," it's worth pointing out that there was no formal, standardized education system before the industrial age. Compulsory education is a century-old experiment. And yes, it ought to be discarded. But that's a frightening prospect for almost everyone, including those who advocate for it. I wonder how many of the intelligentsia who raise their fists and cry, "We need a different education system!" still partake of the old system for their own kids. We don't in my house, for what it's worth, and it's a huge pain in the ass.
  • Cathy -- I really appreciate the distinctions you make between the "the biology of attention" and "the sociology of change." And I agree that more complex and nuanced conversations about technology's relationship to attention, diversion, focus, and immersion will be more productive (than either nostalgia or utopic futurism). For example, it seems like a strange oversight (in the NYT piece) to bemoan the ability of "kids these days" to focus, read immersively, or Pay Attention, yet report without comment that these same kids can edit video for hours on end -- creative, immersive work which, I would imagine, requires more than a little focus. It seems that perhaps the question is not whether we can still pay attention or focus, but what those diverse forms of immersion within different media (will) look like.
  • I recommend both this commentary and the original NYT piece to which it links and on which it comments.
Ed Webb

Sad by design | Eurozine

  • ‘technological sadness’ – the default mental state of the online billions
  • If only my phone could gently weep. McLuhan’s ‘extensions of man’ has imploded right into the exhausted self.
  • Social reality is a corporate hybrid between handheld media and the psychic structure of the user. It’s a distributed form of social ranking that can no longer be reduced to the interests of state and corporate platforms. As online subjects, we too are implicit, far too deeply involved
  • Google and Facebook know how to utilize negative emotions, leading to the new system-wide goal: find personalized ways to make you feel bad
  • in Adam Greenfield’s Radical Technologies, where he notices that ‘it seems strange to assert that anything as broad as a class of technologies might have an emotional tenor, but the internet of things does. That tenor is sadness… a melancholy that rolls off it in waves and sheets. The entire pretext on which it depends is a milieu of continuously shattered attention, of overloaded awareness, and of gaps between people just barely annealed with sensors, APIs and scripts.’ It is a life ‘savaged by bullshit jobs, over-cranked schedules and long commutes, of intimacy stifled by exhaustion and the incapacity by exhaustion and the incapacity or unwillingness to be emotionally present.’
  • Omnipresent social media places a claim on our elapsed time, our fractured lives. We’re all sad in our very own way.4 As there are no lulls or quiet moments anymore, the result is fatigue, depletion and loss of energy. We’re becoming obsessed with waiting. How long have you been forgotten by your loved ones? Time, meticulously measured on every app, tells us right to our face. Chronos hurts. Should I post something to attract attention and show I’m still here? Nobody likes me anymore. As the random messages keep relentlessly piling in, there’s no way to halt them, to take a moment and think it all through.
  • Unlike the blog entries of the Web 2.0 era, social media have surpassed the summary stage of the diary in a desperate attempt to keep up with the real-time regime. Instagram Stories, for example, bring back the nostalgia of an unfolding chain of events – and then disappear at the end of the day, like a revenge act, a satire of ancient sentiments gone by. Storage will make the pain permanent. Better forget about it and move on.
  • By browsing through updates, we’re catching up with machine time – at least until we collapse under the weight of participation fatigue. Organic life cycles are short-circuited and accelerated up to a point where the personal life of billions has finally caught up with cybernetics
  • The price of self-control in an age of instant gratification is high. We long to revolt against the restless zombie inside us, but we don’t know how.
  • Sadness arises at the point we’re exhausted by the online world.6 After yet another app session in which we failed to make a date, purchased a ticket and did a quick round of videos, the post-dopamine mood hits us hard. The sheer busyness and self-importance of the world makes you feel joyless. After a dive into the network we’re drained and feel socially awkward. The swiping finger is tired and we have to stop.
  • Much like boredom, sadness is not a medical condition (though never say never because everything can be turned into one). No matter how brief and mild, sadness is the default mental state of the online billions. Its original intensity gets dissipated, it seeps out, becoming a general atmosphere, a chronic background condition. Occasionally – for a brief moment – we feel the loss. A seething rage emerges. After checking for the tenth time what someone said on Instagram, the pain of the social makes us feel miserable, and we put the phone away. Am I suffering from the phantom vibration syndrome? Wouldn’t it be nice if we were offline? Why is life so tragic? He blocked me. At night, you read through the thread again. Do we need to quit again, to go cold turkey again? Others are supposed to move us, to arouse us, and yet we don’t feel anything anymore. The heart is frozen
  • If experience is the ‘habit of creating isolated moments within raw occurrence in order to save and recount them,’11 the desire to anaesthetize experience is a kind of immune response against ‘the stimulations of another modern novelty, the total aesthetic environment’.
  • unlike burn-out, sadness is a continuous state of mind. Sadness pops up the second events start to fade away – and now you’re down the rabbit hole once more. The perpetual now can no longer be captured and leaves us isolated, a scattered set of online subjects. What happens when the soul is caught in the permanent present? Is this what Franco Berardi calls the ‘slow cancellation of the future’? By scrolling, swiping and flipping, we hungry ghosts try to fill the existential emptiness, frantically searching for a determining sign – and failing
  • Millennials, as one recently explained to me, have grown up talking more openly about their state of mind. As work/life distinctions disappear, subjectivity becomes their core content. Confessions and opinions are externalized instantly. Individuation is no longer confined to the diary or small group of friends, but is shared out there, exposed for all to see.
  • Snapstreaks, the ‘best friends’ fire emoji next to a friend’s name indicating that ‘you and that special person in your life have snapped one another within 24 hours for at least two days in a row.’19 Streaks are considered a proof of friendship or commitment to someone. So it’s heartbreaking when you lose a streak you’ve put months of work into. The feature all but destroys the accumulated social capital when users are offline for a few days. The Snap regime forces teenagers, the largest Snapchat user group, to use the app every single day, making an offline break virtually impossible.20 While relationships amongst teens are pretty much always in flux, with friendships being on the edge and always questioned, Snap-induced feelings sync with the rapidly changing teenage body, making puberty even more intense
  • The bare-all nature of social media causes rifts between lovers who would rather not have this information. But in the information age, this does not sit well with the social pressure to participate in social networks.
  • dating apps like Tinder. These are described as time-killing machines – the reality game that overcomes boredom, or alternatively as social e-commerce – shopping my soul around. After many hours of swiping, suddenly there’s a rush of dopamine when someone likes you back. ‘The goal of the game is to have your egos boosted. If you swipe right and you match with a little celebration on the screen, sometimes that’s all that is needed.’ ‘We want to scoop up all our options immediately and then decide what we actually really want later.’25 On the other hand, ‘crippling social anxiety’ is when you match with somebody you are interested in, but you can’t bring yourself to send a message or respond to theirs ‘because oh god all I could think of was stupid responses or openers and she’ll think I’m an idiot and I am an idiot and…’
  • The metric to measure today’s symptoms would be time – or ‘attention’, as it is called in the industry. While for the archaic melancholic the past never passes, techno-sadness is caught in the perpetual now. Forward focused, we bet on acceleration and never mourn a lost object. The primary identification is there, in our hand. Everything is evident, on the screen, right in your face. Contrasted with the rich historical sources on melancholia, our present condition becomes immediately apparent. Whereas melancholy in the past was defined by separation from others, reduced contacts and reflection on oneself, today’s tristesse plays itself out amidst busy social (media) interactions. In Sherry Turkle’s phrase, we are alone together, as part of the crowd – a form of loneliness that is particularly cruel, frantic and tiring.
  • What we see today are systems that constantly disrupt the timeless aspect of melancholy.31 There’s no time for contemplation, or Weltschmerz. Social reality does not allow us to retreat.32 Even in our deepest state of solitude we’re surrounded by (online) others that babble on and on, demanding our attention
  • distraction does not pull us away, but instead draws us back into the social
  • The purpose of sadness by design is, as Paul B. Preciado calls it, ‘the production of frustrating satisfaction’.39 Should we have an opinion about internet-induced sadness? How can we address this topic without looking down on the online billions, without resorting to fast-food comparisons or patronizingly viewing people as fragile beings that need to be liberated and taken care of?
  • We overcome sadness not through happiness, but rather, as media theorist Andrew Culp has insisted, through a hatred of this world. Sadness occurs in situations where stagnant ‘becoming’ has turned into a blatant lie. We suffer, and there’s no form of absurdism that can offer an escape. Public access to a 21st-century version of dadaism has been blocked. The absence of surrealism hurts. What could our social fantasies look like? Are legal constructs such as creative commons and cooperatives all we can come up with? It seems we’re trapped in smoothness, skimming a surface littered with impressions and notifications. The collective imaginary is on hold. What’s worse, this banality itself is seamless, offering no indicators of its dangers and distortions. As a result, we’ve become subdued. Has the possibility of myth become technologically impossible?
  • We can neither return to mysticism nor to positivism. The naive act of communication is lost – and this is why we cry
Ed Webb

Smartphones are making us stupid - and may be a 'gateway drug' | The Lighthouse

  • rather than making us smarter, mobile devices reduce our cognitive ability in measurable ways
  • “There’s lots of evidence showing that the information you learn on a digital device, doesn’t get retained very well and isn’t transferred across to the real world,”
  • “You’re also quickly conditioned to attend to lots of attention-grabbing signals, beeps and buzzes, so you jump from one task to the other and you don’t concentrate.”
  • Not only do smartphones affect our memory and our concentration, research shows they are addictive – to the point where they could be a ‘gateway drug’ making users more vulnerable to other addictions.
  • Smartphones are also linked to reduced social interaction, inadequate sleep, poor real-world navigation, and depression.
  • “The more time that kids spend on digital devices, the less empathetic they are, and the less they are able to process and recognise facial expressions, so their ability to actually communicate with each other is decreased.”
  • “Casino-funded research is designed to keep people gambling, and app software developers use exactly the same techniques. They have lots of buzzes and icons so you attend to them, they have things that move and flash so you notice them and keep your attention on the device.”
  • Around 90 per cent of US university students are thought to experience ‘phantom vibrations’, so the researcher took a group to a desert location with no cell reception – and found that even after four days, around half of the students still thought their pocket was buzzing with Facebook or text notifications.
  • “Collaboration is a buzzword with software companies who are targeting schools to get kids to use these collaboration tools on their iPads – but collaboration decreases when you're using these devices,”
  • “All addiction is based on the same craving for a dopamine response, whether it's drug, gambling, alcohol or phone addiction,” he says. “As the dopamine response drops off, you need to increase the amount you need to get the same result, you want a little bit more next time. Neurologically, they all look the same. “We know – there are lots of studies on this – that once we form an addiction to something, we become more vulnerable to other addictions. That’s why there’s concerns around heavy users of more benign, easily-accessed drugs like alcohol and marijuana as there’s some correlation with usage of more physically addictive drugs like heroin, and neurological responses are the same.”
  • parents can also fall victim to screens which distract from their child’s activities or conversations, and most adults will experience this with friends and family members too.
  • “We also know that if you learn something on an iPad you are less likely to be able to transfer that to another device or to the real world,”
  • a series of studies have tested this with children who learn to construct a project with ‘digital’ blocks and then try the project with real blocks. “They can’t do it - they start from zero again,”
  • “Our brains can’t actually multitask, we have to switch our attention from one thing to another, and each time you switch, there's a cost to your attentional resources. After a few hours of this, we become very stressed.” That also causes us to forget things
  • A study from Norway recently tested how well kids remembered what they learned on screens. One group of students received information on a screen and were asked to memorise it; the second group received the same information on paper. Both groups were tested on their recall. Unsurprisingly, the children who received the paper version remembered more of the material. But the children with the electronic version were also found to be more stressed.
  • The famous ‘London taxi driver experiments’ found that memorising large maps caused the hippocampus to expand in size. Williams says that the reverse is going to happen if we don’t use our brain and memory to navigate. “Our brains are just like our muscles. We ‘use it or lose it’ – in other words, if we use navigation devices for directions rather than our brains, we will lose that ability.”
  • numerous studies also link smartphone use with sleeplessness and anxiety. “Some other interesting research has shown that the more friends you have on social media, the less friends you are likely to have in real life, the less actual contacts you have and the greater likelihood you have of depression,”
  • 12-month-old children whose carers regularly use smartphones have poorer facial expression perception
  • turning off software alarms and notifications, putting strict time limits around screen use, keeping screens out of bedrooms, minimising social media and replacing screens with paper books, paper maps and other non-screen activities can all help minimise harm from digital devices including smartphones
Ed Webb

Anti-piracy tool will harvest and market your emotions - Computerworld Blogs

  • After being awarded a grant, Aralia Systems teamed up with Machine Vision Lab in what seems like a massive invasion of your privacy beyond "in the name of security." Building on existing cinema anti-piracy technology, these companies plan to add the ability to harvest your emotions. This is the part where it seems that filmgoers should be eligible to charge movie theater owners. At the very least, shouldn't it result in a significantly discounted movie ticket?  Machine Vision Lab's Dr Abdul Farooq told PhysOrg, "We plan to build on the capabilities of current technology used in cinemas to detect criminals making pirate copies of films with video cameras. We want to devise instruments that will be capable of collecting data that can be used by cinemas to monitor audience reactions to films and adverts and also to gather data about attention and audience movement. ... It is envisaged that once the technology has been fine tuned it could be used by market researchers in all kinds of settings, including monitoring reactions to shop window displays."  
  • The 3D camera data will "capture the audience as a whole as a texture."
  • the technology will enable companies to cash in on your emotions and sell that personal information as marketing data
  • "Within the cinema industry this tool will feed powerful marketing data that will inform film directors, cinema advertisers and cinemas with useful data about what audiences enjoy and what adverts capture the most attention. By measuring emotion and movement film companies and cinema advertising agencies can learn so much from their audiences that will help to inform creativity and strategy.” 
  • They plan to fine-tune it to monitor our reactions to window displays and probably anywhere else the data can be used for surveillance and marketing.
  • Muslim women have got the right idea. Soon we'll all be wearing privacy tents.
  • In George Orwell's novel 1984, each home has a mandatory "telescreen," a large flat panel, something like a TV, but with the ability for the authorities to observe viewers in order to ensure they are watching all the required propaganda broadcasts and reacting with appropriate emotions. Problem viewers would be brought to the attention of the Thought Police. The telescreen, of course, could not be turned off. It is reassuring to know that our technology has finally caught up with Oceania's.
Ed Webb

P.J. O'Rourke: 'Very Little That Gets Blogged Is Of Very Much Worth' - Radio ...

  • I'm no expert,
  • One thing a professional reporter knows that I'm not always sure that a person who isn't a reporter knows is that it is very easy to see part of an event and miss the more important part of an event, to see one thing when something else has happened. You have to be very aware of how complex most events are, and how narrow one's vision of most events are. And the police will tell you -- my father-in-law is a retired FBI agent -- and he will certainly tell you that there's nothing as unreliable as an eyewitness. You don't even have to go to the movies to see Rashomon. Just take any couple that you know that's been divorced and ask for his story of what happened, and then ask for her story of what happened.
  • It really isn't any one person. It's the experienced news organization that filters this out.
  • If I have a gripe with the Internet, it isn't short attention spans, it isn't blogging, or it isn't ease of idiot communication. It would be that the initial effect of the Internet, probably because it has academic origins rather than economic origins, was to devalue content. And I mean that in a gross monetary sense. The idea was that information was suddenly free. Information is not free. You [always] pay a price for information. Sometimes the price is just paying attention or being careful. But usually there's a monetary price involved because it costs money to get people out there who know what they're doing and have reasonable judgment and knowledge and perspective and background, and you don't want it to be free. It's not going to be terribly expensive, it's probably going to be cheaper than what newspapers have come to cost.
  • when it comes to the more important analytical side of news, where it is important say for instance for a news organization to have a team of experienced reporters in place with the background knowledge of the place they're in and contacts they've developed over time, you can't replace that with a random backpacker tweeting from Tajikistan
  • The job wasn't to speak truth to power. Anybody can speak truth to power if they're far enough away from the power.
Ed Webb

Charlie Brooker | Google Instant is trying to kill me | Comment is free | The Guardian

  • I'm starting to feel like an unwitting test subject in a global experiment conducted by Google, in which it attempts to discover how much raw information it can inject directly into my hippocampus before I crumple to the floor and start fitting uncontrollably.
  • It's the internet on fast-forward, and it's aggressive – like trying to order from a waiter who keeps finishing your sentences while ramming spoonfuls of what he thinks you want directly into your mouth, so you can't even enjoy your blancmange without chewing a gobful of black pudding first.
  • Google may have released him from the physical misery of pressing enter, but it's destroyed his sense of perspective in the process.
  • My attention span was never great, but modern technology has halved it, and halved it again, and again and again, down to an atomic level, and now there's nothing discernible left. Back in that room, bombarded by alerts and emails, repeatedly tapping search terms into Google Instant for no good reason, playing mindless pinball with words and images, tumbling down countless little attention-vortexes, plunging into one split-second coma after another, I began to feel I was neither in control nor 100% physically present. I wasn't using the computer. The computer was using me – to keep its keys warm.
  • I'm rationing my internet usage and training my mind muscles for the future. Because I can see where it's heading: a service called Google Assault that doesn't even bother to guess what you want, and simply hurls random words and sounds and images at you until you dribble all the fluid out of your body. And I know it'll kill me, unless I train my brain to withstand and ignore it. For me, the war against the machines has started in earnest.
Ed Webb

Programmed for Love: The Unsettling Future of Robotics - The Chronicle Review - The Chr...

  • Her prediction: Companies will soon sell robots designed to baby-sit children, replace workers in nursing homes, and serve as companions for people with disabilities. All of which to Turkle is demeaning, "transgressive," and damaging to our collective sense of humanity. It's not that she's against robots as helpers—building cars, vacuuming floors, and helping to bathe the sick are one thing. She's concerned about robots that want to be buddies, implicitly promising an emotional connection they can never deliver.
  • We are already cyborgs, reliant on digital devices in ways that many of us could not have imagined just a few years ago
  • "We are hard-wired that if something meets extremely primitive standards, either eye contact or recognition or very primitive mutual signaling, to accept it as an Other because as animals that's how we're hard-wired—to recognize other creatures out there."
  • "Can a broken robot break a child?" they asked. "We would not consider the ethics of having children play with a damaged copy of Microsoft Word or a torn Raggedy Ann doll. But sociable robots provoke enough emotion to make this ethical question feel very real."
  • "The concept of robots as baby sitters is, intellectually, one that ought to appeal to parents more than the idea of having a teenager or similarly inexperienced baby sitter responsible for the safety of their infants," he writes. "Their smoke-detection capabilities will be better than ours, and they will never be distracted for the brief moment it can take an infant to do itself some terrible damage or be snatched by a deranged stranger."
  • "What if we get used to relationships that are made to measure?" Turkle asks. "Is that teaching us that relationships can be just the way we want them?" After all, if a robotic partner were to become annoying, we could just switch it off.
  • We've reached a moment, she says, when we should make "corrections"—to develop social norms to help offset the feeling that we must check for messages even when that means ignoring the people around us. "Today's young people have a special vulnerability: Although always connected, they feel deprived of attention," she writes. "Some, as children, were pushed on swings while their parents spoke on cellphones. Now these same parents do their e-mail at the dinner table." One 17-year-old boy even told her that at least a robot would remember everything he said, contrary to his father, who often tapped at a BlackBerry during conversations.
Ed Webb

The Web Means the End of Forgetting - NYTimes.com

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read.
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
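The Vanish system quoted above (encrypt the data, then let the scattered key "erode") can be illustrated with a toy sketch. This is not the real Vanish implementation, which uses Shamir secret sharing over a peer-to-peer DHT; here an n-of-n XOR key split stands in for the sharing scheme and a one-time-pad XOR stands in for the cipher, so that losing even one share makes the document unreadable:

```python
# Toy sketch of "self-destructing data": encrypt, split the key into
# shares, discard the key, and let erosion of shares destroy the data.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    # n-of-n split: the key is the XOR of all n shares, so losing
    # any single share makes the key unrecoverable.
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    key = bytes(len(shares[0]))
    for s in shares:
        key = xor_bytes(key, s)
    return key

message = b"meet me at noon"
key = secrets.token_bytes(len(message))
ciphertext = xor_bytes(message, key)   # encrypt, then discard the key
shares = split_key(key, 5)             # scatter the shares "into the cloud"

# While every share survives, the document is readable...
assert xor_bytes(ciphertext, recover_key(shares)) == message

# ...but once any share "erodes", the ciphertext can no longer be read.
assert xor_bytes(ciphertext, recover_key(shares[1:])) != message
```

The real system's design choice is subtler: Shamir's k-of-n sharing tolerates some churn before the threshold is crossed, so the data degrades on a timetable rather than at the first lost share.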
Ed Webb

Can Economists and Humanists Ever Be Friends? | The New Yorker - 0 views

  • There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.
  • education, which they believe is a form of domestication
  • there is no moral dimension to this economic analysis: utility is a fundamentally amoral concept
  • ...11 more annotations...
  • intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). Morson and Schapiro—one a literary scholar and the other an economist—draw on the distinction between hedgehogs and foxes made by Isaiah Berlin in a famous essay from the nineteen-fifties, invoking an ancient Greek fragment: “The fox knows many things, but the hedgehog one big thing.” Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve, which links unemployment and inflation; or mb=mc, which links a marginal benefit to a marginal cost—meaning that the fourth slice of pizza is worth less to you than the first. These are powerful tools, which can be taken too far. Morson and Schapiro cite the example of Gary Becker, the Nobel laureate in economics in 1992. Becker is a hero to many in the field, but, for all the originality of his thinking, to outsiders he can stand for intellectual overconfidence. He thought that “the economic approach is a comprehensive one that is applicable to all human behavior.” Not some, not most—all
  • Becker analyzed, in his own words, “fertility, education, the uses of time, crime, marriage, social interactions, and other ‘sociological,’ ‘legal,’ and ‘political problems,’ ” before concluding that economics explained everything
  • The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. After all, it’s not as if the idea that we send signals about ourselves were news; you could argue that there is an entire social science, sociology, dedicated to the subject. Classic practitioners of that discipline study the signals we send and show how they are interpreted by those around us, as in Erving Goffman’s “The Presentation of Self in Everyday Life,” or how we construct an entire identity, both internally and externally, from the things we choose to be seen liking—the argument of Pierre Bourdieu’s masterpiece “Distinction.” These are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.
  • “A traditional cost-benefit analysis could easily have led to the discontinuation of a project widely viewed as being among the most successful health interventions in African history.”
  • Another part of me, though, is done with it, with the imperialist ambitions of economics and its tendency to explain away differences, to ignore culture, to exalt reductionism. I want to believe Morson and Schapiro and Desai when they posit that the gap between economics and the humanities can be bridged, but my experience in both writing fiction and studying economics leads me to think that they’re wrong. The hedgehog doesn’t want to learn from the fox. The realist novel is a solemn enemy of equations. The project of reducing behavior to laws and the project of attending to human beings in all their complexity and specifics are diametrically opposed. Perhaps I’m only talking about myself, and this is merely an autobiographical reflection, rather than a general truth, but I think that if I committed any further to economics I would have to give up writing fiction. I told an economist I know about this, and he laughed. He said, “Sounds like you’re maximizing your utility.” 
  • finance is full of “attribution errors,” in which people view their successes as deserved and their failures as bad luck. Desai notes that in business, law, or pedagogy we can gauge success only after months or years; in finance, you can be graded hour by hour, day by day, and by plainly quantifiable measures. What’s more, he says, “the ‘discipline of the market’ shrouds all of finance in a meritocratic haze.” And so people who succeed in finance “are susceptible to developing massively outsized egos and appetites.”
  • one of the things I liked about economics, finance, and the language of money was their lack of hypocrisy. Modern life is full of cant, of people saying things they don’t quite believe. The money guys, in private, don’t go in for cant. They’re more like Mafia bosses. I have to admit that part of me resonates to that coldness.
  • Economics, Morson and Schapiro say, has three systematic biases: it ignores the role of culture, it ignores the fact that “to understand people one must tell stories about them,” and it constantly touches on ethical questions beyond its ken. Culture, stories, and ethics are things that can’t be reduced to equations, and economics accordingly has difficulty with them
  • According to Hanson and Simler, these unschooled workers “won’t show up for work reliably on time, or they have problematic superstitions, or they prefer to get job instructions via indirect hints instead of direct orders, or they won’t accept tasks and roles that conflict with their culturally assigned relative status with co-workers, or they won’t accept being told to do tasks differently than they had done them before.”
  • The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.
Ed Webb

Where is the boundary between your phone and your mind? | US news | The Guardian - 1 views

  • Here’s a thought experiment: where do you end? Not your body, but you, the nebulous identity you think of as your “self”. Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; and your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges? This is a question with no clear answer, and, as the smartphone grows ever more essential to our daily lives, that border’s only getting blurrier.
  • our minds have become even more radically extended than ever before
  • one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: we not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn’t suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is
  • ...10 more annotations...
  • American consumers spent five hours per day on their mobile devices, and showed a dizzying 69% year-over-year increase in time spent in apps like Facebook, Twitter, and YouTube. The prevalence of apps represents a concrete example of the movement away from the old notion of accessing the Internet through a browser and the new reality of the connected world and its myriad elements – news, social media, entertainment – being with us all the time
  • “In the 90s and even through the early 2000s, for many people, there was this way of thinking about cyberspace as a space that was somewhere else: it was in your computer. You went to your desktop to get there,” Weigel says. “One of the biggest shifts that’s happened and that will continue to happen is the undoing of a border that we used to perceive between the virtual and the physical world.”
  • While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: it’s a means for others to access us
  • Weigel sees the unfettered access to our data, through our smartphone and browser use, of what she calls the big five tech companies – Apple, Alphabet (the parent company of Google), Microsoft, Facebook, and Amazon – as a legitimate problem for notions of democracy
  • an unfathomable amount of wealth, power, and direct influence on the consumer in the hands of just a few individuals – individuals who can affect billions of lives with a tweak in the code of their products
  • “This is where the fundamental democracy deficit comes from: you have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with,”
  • the rhetoric around the Internet was that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case, and even the relatively successful experiments in this, such as Wikipedia, have a great deal of human governance that allows them to function properly
  • We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere – a difference that, despite the erosion of boundaries, still stands
  • “Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us – an identity that is then sold to us afterward,” Lynch says. “Google tells you what questions to ask, and then it gives you the answers to those questions.”
  • It isn’t enough that the apps in our phone flatten all of the different categories of relationships we have into one broad group: friends, followers, connections. They go one step further than that. “You’re being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook,” Lynch says. “That’s affecting our identity, because it affects who you think your friends are, because they’re the ones who are popping up higher on your feed.”
Ed Webb

Could fully automated luxury communism ever work? - 0 views

  • Having achieved a seamless, pervasive commodification of online sociality, Big Tech companies have turned their attention to infrastructure. Attempts by Google, Amazon and Facebook to achieve market leadership, in everything from AI to space exploration, risk a future defined by the battle for corporate monopoly.
  • The technologies are coming. They’re already here in certain instances. It’s the politics that surrounds them. We have alternatives: we can have public ownership of data in the citizen’s interest or it could be used as it is in China where you have a synthesis of corporate and state power
  • the two alternatives that big data allows is an all-consuming surveillance state where you have a deep synthesis of capitalism with authoritarian control, or a reinvigorated welfare state where more and more things are available to everyone for free or very low cost
  • ...4 more annotations...
  • we can’t begin those discussions until we say, as a society, we want to at least try subordinating these potentials to the democratic project, rather than allow capitalism to do what it wants
  • I say in FALC that this isn’t a blueprint for utopia. All I’m saying is that there is a possibility for the end of scarcity, the end of work, a coming together of leisure and labour, physical and mental work. What do we want to do with it? It’s perfectly possible something different could emerge where you have this aggressive form of social value.
  • I think the thing that’s been beaten out of everyone since 2010 is one of the prevailing tenets of neoliberalism: work hard, you can be whatever you want to be, that you’ll get a job, be well paid and enjoy yourself.  In 2010, that disappeared overnight, the rules of the game changed. For the status quo to continue to administer itself,  it had to change common sense. You see this with Jordan Peterson; he’s saying you have to know your place and that’s what will make you happy. To me that’s the only future for conservative thought, how else do you mediate the inequality and unhappiness?
  • I don’t think we can rapidly decarbonise our economies without working people understanding that it’s in their self-interest. A green economy means better quality of life. It means more work. Luxury populism feeds not only into the green transition, but the rollout of Universal Basic Services and even further.
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • ...17 more annotations...
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loathe to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Nine million logs of Brits' road journeys spill onto the internet from password-less nu... - 0 views

  • In a blunder described as "astonishing and worrying," Sheffield City Council's automatic number-plate recognition (ANPR) system exposed to the internet 8.6 million records of road journeys made by thousands of people
  • The Register learned of the unprotected dashboard from infosec expert and author Chris Kubecka, working with freelance writer Gerard Janssen, who stumbled across it using search engine Censys.io. She said: "Was the public ever told the system would be in place and that the risks were reasonable? Was there an opportunity for public discourse – or, like in Hitchhiker's Guide to the Galaxy, were the plans in a planning office at an impossible or undisclosed location?"
  • The dashboard was taken offline within a few hours of The Register alerting officials. Sheffield City Council and South Yorkshire Police added: "As soon as this was brought to our attention we took action to deal with the immediate risk and ensure the information was no longer viewable externally. Both Sheffield City Council and South Yorkshire Police have also notified the Information Commissioner's Office. We will continue to investigate how this happened and do everything we can to ensure it will not happen again."
Ed Webb

John Lanchester reviews 'The Attention Merchants' by Tim Wu, 'Chaos Monkeys' ... - 1 views

  • Excellent. Really excellent.
Ed Webb

Saudi Crown Prince Asks: What if a City, But It's a 105-Mile Line - 0 views

  • Vicious Saudi autocrat Mohamed bin Salman has a new vision for Neom, his plan for a massive, $500 billion, AI-powered, nominally legally independent city-state of the future on the border with Egypt and Jordan. When we last left the crown prince, he had reportedly commissioned 2,300-pages’ worth of proposals from Boston Consulting Group, McKinsey & Co. and Oliver Wyman boasting of possible amenities like holographic schoolteachers, cloud seeding to create rain, flying taxis, glow-in-the-dark beaches, a giant NASA-built artificial moon, and lots of robots: maids, cage fighters, and dinosaurs.
  • Now Salman has a bold new idea: One of the cities in Neom is a line. A line roughly 105 miles (170 kilometers) long and a five-minute walk wide, to be exact. No, really, it’s a line. The proposed city is a line that stretches across Saudi Arabia. That’s the plan.
  • “With zero cars, zero streets, and zero carbon emissions, you can fulfill all your daily requirements within a five minute walk,” the crown prince continued. “And you can travel from end to end within 20 minutes.” The end-to-end-in-20-minutes boast likely refers to some form of mass transit that doesn’t yet exist. It works out to a transit system running at about 317 mph (510 kph) — much faster than Japan’s famous Shinkansen train network, which is capped at 200 mph (321 kph). Some Japanese rail companies have tested maglev trains at up to 373 mph (600 kph), though the technology is nowhere near ready for primetime.
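A quick back-of-the-envelope sketch of the implied speed, using only the figures quoted in the excerpt (170 km length, 20-minute end-to-end time, Shinkansen cap of 321 kph):

```python
# Sanity-check the Line's implied end-to-end transit speed
# using the figures quoted in the article.
length_km = 170          # quoted length of the Line
transit_minutes = 20     # claimed end-to-end travel time
shinkansen_cap_kph = 321 # quoted Shinkansen operating cap

speed_kph = length_km / (transit_minutes / 60)  # distance / hours
speed_mph = speed_kph / 1.609344                # km/h to mph

print(f"Implied speed: {speed_kph:.0f} kph ({speed_mph:.0f} mph)")
print(f"Margin over Shinkansen cap: {speed_kph - shinkansen_cap_kph:.0f} kph")
```

The arithmetic matches the article's 510 kph / ~317 mph figure, which is why only an untested maglev-class system could plausibly deliver the claim.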
  • ...3 more annotations...
  • According to Bloomberg, Saudi officials project the Line will cost around $100-$200 billion of the $500 billion planned to be spent on Neom and will have a population of 1 million with 380,000 jobs by the year 2030. It will have one of the biggest airports in the world for some reason, which seems like a strange addition to a supposedly climate-friendly city.
  • The site also makes numerous hand-wavy and vaguely menacing claims, including that “all businesses and communities” will have “over 90%” of their data processed by AI and robots:
  • Don’t pay attention to Saudi war crimes in Yemen, the prince’s brutal crackdowns on dissent, the hit squad that tortured journalist Jamal Khashoggi to death, and the other habitual human rights abuses that allow the Saudi monarchy to remain in power. Also, ignore the obstacles facing Neom: budgetary constraints, the forced eviction of tens of thousands of existing residents such as the Huwaitat tribe, coronavirus and oil shock, investor flight over human rights concerns, and the lingering questions of whether the whole project is a distraction from pressing domestic issues and/or a mirage conjured up by consulting firms pandering to the crown prince’s ego and hungry for lucrative fees. Never mind that there are numerous ways we could ensure the cities people already live in are prepared for climate change rather than blowing billions of dollars on a vanity project.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but that noninformation also reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.