
Home/ Dystopias/ Group items tagged technology


Ed Webb

The Vulture Transcript: Sci-Fi Author William Gibson on Why He Loves Twitter, Thinks Fa... - 0 views

  • If you’re born now, your native culture is global, to an increasing extent. There are things that are unknowable for futurists of any stripe, be they science-fiction writing charlatans like myself or anthropologists in the employ of large automobile companies who are paid to figure out what people might want in ten years. One of the things that’s unknowable is how humanity will use any new technology. No one imagines that we’d wind up with a world that looks like this on the basis of the technology that’s emerged in the last hundred years. Emergent technology is the most powerful single driver of change in the world, and it has been forever. Technology trumps politics. Technology trumps religion. It just does. And that’s why we are where we are now. It seems so self-evident to me that I can never go to that Technology: threat or menace? position. Okay, well, if we don’t do this, what are we going to do? This is not only what we do, it’s literally who we are as a species. We’ve become something other than what our ancestors were. I’m sitting here at age 52 with almost all of my own teeth. That didn’t used to happen. I’m a cyborg. I’m immune to any number of lethal diseases by virtue of technology. I’m sitting on top of this enormous pyramid of technology that starts with flint hand-axes and finds me in a hotel in Austin, Texas, talking to someone thousands of miles away on a telephone and that’s just what we do. At this point, we don’t have the option of not being technological creatures.
  • You’ve taken to Twitter (GreatDismal). I have indeed. I’ve taken to Twitter like a duck to water. Its simplicity allows the user to customize the experience with relatively little input from the Twitter entity itself. I hope they keep it simple. It works because it’s simple. I was never interested in Facebook or MySpace because the environment seemed too top-down mediated. They feel like malls to me. But Twitter actually feels like the street. You can bump into anybody on Twitter.
  • Twitter’s huge. There’s a whole culture of people on Twitter who do nothing but handicap racehorses. I’ll never go there. One commonality about people I follow is that they’re all doing what I’m doing: They’re all using it as novelty aggregation and out of that grows some sense of being part of a community. It’s a strange thing. There are countless millions of communities on Twitter. They occupy the same virtual space but they never see each other. They never interact. Really, the Twitter I’m always raving about is my Twitter.
  • ...1 more annotation...
  • The Civil War was scarcely more than 150 years ago. It’s yesterday. Race in America hasn’t been sorted out. This used to be a country that was run exclusively by white guys in suits. It’s not going to be a country that’s run exclusively by white guys in suits, and that doesn’t have anything to do with politics, it’s just demographics. That makes some people very uncomfortable. The tea party is like the GOP’s Southern strategy coming back to exact the real cost of that strategy.
Ed Webb

Americans on the future of science and technology: meh | Bryan Alexander - 2 views

  • our expectations are generally unexcited and restrained.  The bold imagination of 20th-century American visions seems to have gone for a nap.  As Smithsonian’s article notes, “Most Americans view the technology-driven future with a sense of hope. They just don’t want to live there.”  We’re actually less excited than that.
  • When it comes to specific emerging technologies we often greet them with broad, deep skepticism and fear, including human genetic engineering, robotics, drones, and wearable computing:
    • 66% think it would be a change for the worse if prospective parents could alter the DNA of their children to produce smarter, healthier, or more athletic offspring.
    • 65% think it would be a change for the worse if lifelike robots become the primary caregivers for the elderly and people in poor health.
    • 63% think it would be a change for the worse if personal and commercial drones are given permission to fly through most U.S. airspace.
    • 53% of Americans think it would be a change for the worse if most people wear implants or other devices that constantly show them information about the world around them.
    (emphases in original)
  • The heroic days of NASA in the popular imagination are long flown: “One in three (33%) expect that humans will have colonized planets other than Earth.”  Indeed, a few more Americans actually see teleportation happening.
  • ...1 more annotation...
  • Overall, Americans look at the future of science and technology with some hefty amounts of skepticism and dismay.  Health care improvements do appeal to us, unsurprisingly, given our ageing demographics.  Classic futures themes of space and travel have withered in our collective mind.  I’m reminded of Bruce Sterling’s aphorism about the rest of the 21st century: “The future is about old people, in big cities, afraid of the sky.”
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Anti-piracy tool will harvest and market your emotions - Computerworld Blogs - 0 views

  • After being awarded a grant, Aralia Systems teamed up with Machine Vision Lab in what seems like a massive invasion of your privacy beyond "in the name of security." Building on existing cinema anti-piracy technology, these companies plan to add the ability to harvest your emotions. This is the part where it seems that filmgoers should be eligible to charge movie theater owners. At the very least, shouldn't it result in a significantly discounted movie ticket?  Machine Vision Lab's Dr Abdul Farooq told PhysOrg, "We plan to build on the capabilities of current technology used in cinemas to detect criminals making pirate copies of films with video cameras. We want to devise instruments that will be capable of collecting data that can be used by cinemas to monitor audience reactions to films and adverts and also to gather data about attention and audience movement. ... It is envisaged that once the technology has been fine tuned it could be used by market researchers in all kinds of settings, including monitoring reactions to shop window displays."  
  • The 3D camera data will "capture the audience as a whole as a texture."
  • the technology will enable companies to cash in on your emotions and sell that personal information as marketing data
  • ...4 more annotations...
  • "Within the cinema industry this tool will feed powerful marketing data that will inform film directors, cinema advertisers and cinemas with useful data about what audiences enjoy and what adverts capture the most attention. By measuring emotion and movement film companies and cinema advertising agencies can learn so much from their audiences that will help to inform creativity and strategy.” 
  • They plan to fine-tune it to monitor our reactions to window displays and probably anywhere else the data can be used for surveillance and marketing.
  • Muslim women have got the right idea. Soon we'll all be wearing privacy tents.
  • In George Orwell's novel 1984, each home has a mandatory "telescreen," a large flat panel, something like a TV, but with the ability for the authorities to observe viewers in order to ensure they are watching all the required propaganda broadcasts and reacting with appropriate emotions. Problem viewers would be brought to the attention of the Thought Police. The telescreen, of course, could not be turned off. It is reassuring to know that our technology has finally caught up with Oceania's.
Ed Webb

The stories of Ray Bradbury. - By Nathaniel Rich - Slate Magazine - 0 views

  • Thanks to Fahrenheit 451, now required reading for every American middle-schooler, Bradbury is generally thought of as a writer of novels, but his talents—particularly his mastery of the diabolical premise and the brain-exploding revelation—are best suited to the short form.
  • The best stories have a strange familiarity about them. They're like long-forgotten acquaintances—you know you've met them somewhere before. There is, for instance, the tale of the time traveler who goes back into time and accidentally steps on a butterfly, thereby changing irrevocably the course of history ("A Sound of Thunder"). There's the one about the man who buys a robotic husband to live with his wife so that he can be free to travel and pursue adventure—that's "Marionettes, Inc." (Not to be confused with "I Sing the Body Electric!" about the man who buys a robotic grandmother to comfort his children after his wife dies.) Or "The Playground," about the father who changes places with his son so that he can spare his boy the cruelty of childhood—forgetting exactly how cruel childhood can be. The stories are familiar because they've been adapted, and plundered from, by countless other writers—in books, television shows, and films. To the extent that there is a mythology of our age, Bradbury is one of its creators.
  • "But Bradbury's skill is in evoking exactly how soul-annihilating that world is."    Of course, this also displays one of the key facts of Bradbury's work -- and a trend in science fiction that is often ignored. He's a reactionary of the first order, deeply distrustful of technology and even the notion of progress. Many science fiction writers had begun to rewrite the rules of women in space by the time Bradbury had women in long skirts hauling pots and pans over the Martian landscape. And even he wouldn't disagree. In his famous Playboy interview he responded to a question about predicting the future with, "It's 'prevent the future', that's the way I put it. Not predict it, prevent it."
  • ...5 more annotations...
  • And for the record, I've never understood why a writer who recognizes technology is labeled a "sci-fi writer", as if being a "sci-fi writer" were equal to being some sort of substandard, second-rate hack. The great Kurt Vonnegut managed to get stuck in that drawer after he recognized technology in his 1st novel "Player Piano". No matter that he turned out to be (imo) one of the greatest authors of the 20th century, period.
  • it's chilling how prescient he was about modern media culture in Fahrenheit 451. It's not a Luddite screed against TV. It's a speculative piece on what happens when we become divorced from the past and more attuned to images on the screen than we are to each other.
  • A favorite author of mine since I was in elementary school way back when mammoths roamed the earth. To me, he was an ardent enthusiast of technology, but also recognized its potential for separating us from one another while at the same time seemingly making us more "connected" in a superficial and transitory way.
  • Bradbury is undeniably skeptical of technology and the risks it brings, particularly the risk that what we'd now call "virtualization" will replace actual emotional, intellectual or physical experience. On the other hand, however, I don't think there's anybody who rhapsodizes about the imaginative possibilities of rocketships and robots the way Bradbury does, and he's built entire setpieces around the idea of technological wonders creating new experiences.    I'm not saying he doesn't have a Luddite streak, more that he has feet in both camps and is harder to pin down than a single label allows. And I'll also add that in his public pronouncements of late, the Luddite streak has come out more strongly--but I tend to put much of that down to the curmudgeonliness of a ninety-year-old man.
  • I don't think he is a luddite so much as he is the little voice that whispers "be careful what you wish for." We have been sold the beautiful myth that technology will buy us free time, but we are busier than ever. TV was supposed to enlighten the masses, instead we have "reality TV" and a news network that does not let facts get in the way of its ideological agenda. We romanticize childhood, ignoring children's aggressive impulses, then feed them on a steady diet of violent video games.  
Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • ...20 more annotations...
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read.
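  The key-shattering idea behind Vanish can be illustrated with a much-simplified sketch. The real system splits the key with threshold secret sharing and scatters the shares across a peer-to-peer network, where they naturally "erode" as nodes churn; here, purely for illustration, the shares are all-or-nothing XOR fragments, and every function name is hypothetical rather than taken from the actual Vanish software:

  ```python
  import os

  def xor_bytes(a, b):
      """Bytewise XOR of two equal-length byte strings."""
      return bytes(x ^ y for x, y in zip(a, b))

  def shatter_key(key, n):
      """Split `key` into n shares; ALL n are needed to rebuild it.
      (Vanish proper uses threshold secret sharing, so only a quorum
      of shares is required -- this XOR split is a simplified stand-in.)"""
      shares = [os.urandom(len(key)) for _ in range(n - 1)]
      last = key
      for s in shares:
          last = xor_bytes(last, s)
      return shares + [last]

  def reassemble_key(shares):
      """XOR every surviving share back together to recover the key."""
      key = shares[0]
      for s in shares[1:]:
          key = xor_bytes(key, s)
      return key

  # Encrypt a message with a one-time-pad key, then shatter the key.
  message = b"meet me at noon"
  key = os.urandom(len(message))
  ciphertext = xor_bytes(message, key)
  shares = shatter_key(key, 5)

  # While all shares survive, the plaintext is recoverable...
  assert xor_bytes(ciphertext, reassemble_key(shares)) == message

  # ...but once any share "erodes" (here: is deleted), the key --
  # and with it the plaintext -- is gone for good.
  del shares[2]
  ```

  Once enough shares have disappeared, nothing stored alongside the ciphertext can bring the plaintext back, which is the sense in which the data "self-destructs" without any server ever having to honor a delete request.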
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
Ed Webb

Hayabusa2 and the unfolding future of space exploration | Bryan Alexander - 0 views

  • What might this tell us about the future?  Let’s consider Ryugu as a datapoint or story for where space exploration might head next.
  • There isn’t a lot of press coverage beyond Japan (ah, once again I wish I read Japanese), if I go by Google News headlines.  There’s nothing on the CNN.com homepage now, other than typical spatters of dread and celebrity; the closest I can find is a link to a story about Musk’s space tourism project, which a Japanese billionaire will ride.  Nothing on Fox News or MSNBC’s main pages.  BBC News at least has a link halfway down its main page.
  • Hayabusa is a Japanese project, not an American one, and national interest counts for a lot.  No humans were involved, so human interest and story are absent.  Perhaps the whole project looks too science-y for a culture that spins into post-truthiness, contains some serious anti-science and anti-technology strands, or just finds science stories too dry.  Or maybe the American media outlets think Americans just aren’t that into space in particular in 2018.
  • ...13 more annotations...
  • Hayabusa2 reminds us that space exploration is more multinational and more disaggregated than ever.  Besides JAXA there are space programs being built up by China and India, including robot craft, astronauts (taikonauts, for China, vyomanauts, for India), and space stations.  The Indian Mars Orbiter still circles the fourth planet. The European Space Agency continues to develop satellites and launch rockets, like the JUICE (JUpiter ICy moons Explorer).  Russia is doing some mixture of commercial spaceflight, ISS maintenance, exploration, and geopoliticking.  For these nations space exploration holds out a mixture of prestige, scientific and engineering development, and possible commercial return.
  • Bezos, Musk, and others live out a Robert Heinlein story by building up their own personal space efforts.  This is, among other things, a sign of how far American wealth has grown, and how much of the elite are connected to technical skills (as opposed to inherited wealth).  It’s an effect of plutocracy, as I’ve said before.  Yuri Milner might lead the first interstellar mission with his Breakthrough Starshot plan.
  • Privatization of space seems likely to continue.
  • Uneven development is also likely, as different programs struggle to master different stations in the space path.  China may assemble a space station while Japan bypasses orbital platforms for the moon, private cubesats head into the deep solar system and private companies keep honing their Earth orbital launch skills.
  • Surely the challenges of getting humans and robots further into space will elicit interesting projects that can be used Earthside.  Think about health breakthroughs needed to keep humans alive in environments scoured by radiation, or AI to guide robots through complex situations.
  • robots continue to be cheap, far easier to operate, capable of enduring awful stresses, and happy to send gorgeous data back our way
  • Japan seems committed to creating a lunar colony.  Musk and Bezos burn with the old science fiction and NASA hunger for shipping humans into the great dark.  The lure of Mars seems to be a powerful one, and a multinational, private versus public race could seize the popular imagination.  Older people may experience a rush of nostalgia for the glorious space race of their youth.
  • This competition could turn malign, of course.  Recall that the 20th century’s space races grew out of warfare, and included many plans for combat and destruction. Nayef Al-Rodhan hints at possible strains in international cooperation: “The possible fragmentation of outer space research activities in the post-ISS period would constitute a break-up of an international alliance that has fostered unprecedented cooperation between engineers and scientists from rival geopolitical powers – aside from China. The ISS represents perhaps the pinnacle of post-Cold War cooperation and has allowed for the sharing and streamlining of work methods and differing norms. In a current period of tense relations, it is worrying that the US and Russia may be ending an important phase of cooperation.”
  • Space could easily become the ground for geopolitical struggles once more, and possibly a flashpoint as well.  Nationalism, neonationalism, nativism could power such stresses
  • Enough of an off-Earth settlement could lead to further forays, once we bypass the terrible problem of getting off the planet’s surface, and if we can develop new ways to fuel and sustain craft in space.  The desire to connect with that domain might help spur the kind of space elevator which will ease Earth-to-orbit challenges.
  • The 1960s space race saw the emergence of a kind of astronaut cult.  The Soviet space program’s Russian roots included a mystical tradition.  We could see a combination of nostalgia from older folks and can-do optimism from younger people, along with further growth in STEM careers and interest.  Dialectically we should expect the opposite.  A look back at the US-USSR space race shows criticism and opposition ranging from the arts (Gil Scott-Heron’s “Whitey on the Moon”, Jello Biafra’s “Why I’m Glad the Space Shuttle Blew Up”) to opinion polls (in the US NASA only won real support for the year around Apollo 11, apparently).  We can imagine all kinds of political opposition to a 21st century space race, from people repeating the old Earth versus space spending canard to nationalistic statements (“Let Japan land on Deimos.  We have enough to worry about here in Chicago”) to environmental concerns to religious ones.  Concerns about vast wealth and inequality could well target space.
  • How will we respond when, say, twenty space tourists crash into a lunar crater and die, in agony, on YouTube?
  • That’s a lot to hang on one Japanese probe landing two tiny ‘bots on an asteroid in 2018, I know.  But Hayabusa2 is such a signal event that it becomes a fine story to think through.
Ed Webb

DK Matai: The Rise of The Bio-Info-Nano Singularity - 0 views

  • The human capability for information processing is limited, yet there is an accelerating change in the development and deployment of new technology. This relentless wave upon wave of new information and technology causes an overload on the human mind by eventually flooding it. The resulting acopia -- inability to cope -- has to be solved by the use of ever more sophisticated information intelligence. Extrapolating these capabilities suggests the near-term emergence and visibility of self-improving neural networks, "artificial" intelligence, quantum algorithms, quantum computing and super-intelligence. This metamorphosis is so much beyond present human capabilities that it becomes impossible to understand it with the pre-conceptions and conditioning of the present mindset, societal make-up and existing technology
  • The Bio-Info-Nano Singularity is a transcendence to a wholly new regime of mind, society and technology, in which we have to learn to think in a new way in order to survive as a species.
  • What is globalized human society going to do with the mass of unemployed human beings that are rendered obsolete by the approaching super-intelligence of the Bio-Info-Nano Singularity?
  • Nothing futurists predict ever comes true, but, by the time the time comes, everybody has forgotten they said it--and then they are free to say something else that never will come true but that everybody will have forgotten they said by the time the time comes
  • Most of us will become poisoned troglodytes in a techno dystopia
  • Any engineer can make 'stuff' go faster, kill deader, sort quicker, fly higher, record sharper, destroy more completely, etc. We have a surfeit of that kind of creativity. What we need is some kind of genius to create a society that treats each other with equality, justice, caring and cooperativeness. The concept of 'singularity' doesn't excite me nearly as much as the idea that sometime we might be able to move beyond the civilization level of a troop of chimpanzees. I'm hoping that genius comes before we manage to destroy what little civilization we have with all our neat "stuff"
  • There's a lot of abstraction in this article, which is a trend of what I have read of a number of various movements taking up the Singularity cause. This nebulous but optimistic prediction of an incomprehensibly advanced future, wherein through technology and science we achieve quasi-immortality, or absolute control of thought, omniscience, or transcendence from the human entirely
  • Welcome to the Frankenstein plot. This is a very common Hollywood plot, the idea of a manmade creation running amok. The concept that the author describes can also be described as an asymptotic curve on a graph where scientific achievement parallels time at first then gradually begins to go vertical until infinite scientific knowledge and invention occurs in an incredibly short time.
Ed Webb

Drones Get Ready to Fly, Unseen, Into Everyday Life - WSJ.com - 0 views

  • An unmanned aircraft that can fly a predetermined route costs a few hundred bucks to build and can be operated by iPhone.
  • the Federal Aviation Administration limits domestic use of drones to the government, and even those are under tight restrictions. FAA spokeswoman Laura Brown said the agency is working with private industry on standards that might allow broader use once drone technology evolves. When it comes to paparazzi use of drones, she said, "our primary concern with that would be safety issues."
  • The ability to share software and hardware designs on the Internet has sped drone development, said Christopher Anderson, founder of the website DIY Drones, a clearinghouse for the nearly 12,000 drone hobbyists around the world.
  • The goal is to make a drone that can stabilize itself and track its target. Given the rapid evolution of technology, Mr. Anderson said, "that's now a technically trivial task."
  • As a parent of a 3-year-old, she said, she could use the same technology to track her daughter on her way to school (she would need to plant an electronic bug in her lunch box or backpack). That would "bring a whole new meaning to a hover parent," she said. Schools could even use drones for perimeter control.
  • human nature being what it is, it won't take long for the technology to be embraced for less noble ends. Could nosey neighbors use a drone to monitor who isn't picking up after their dogs? "That's possible," said Henry Crumpton, a former top CIA counterterrorism official who is now chairman of a company that develops drones—including one that can take off vertically, fly through a window and hover silently over your breakfast table. "The only thing you're bounded by is your imagination—and the FAA in the United States," he said.
  • it's just a matter of time before drone technology and safety improvements make the gadgets a common part of the urban landscape.
  • we could all be wandering around with little networks of vehicles flying over our heads spying on us,
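The self-stabilization Anderson calls "a technically trivial task" is, at its core, a feedback control loop. A minimal single-axis PID sketch, where the gains, the toy roll dynamics, and the 15-degree starting tilt are all illustrative rather than taken from any real flight controller:

```python
# A single-axis PID loop of the kind hobbyist autopilots use to hold a
# drone level. All numbers here are illustrative, not from a real autopilot.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(steps=1000, dt=0.01):
    """Drive a toy roll-angle model from a 15-degree tilt back to level."""
    # ki is left at zero here; integral action matters mainly for steady
    # disturbances such as a constant crosswind.
    pid = PID(kp=4.0, ki=0.0, kd=2.0, dt=dt)
    angle, rate = 15.0, 0.0
    for _ in range(steps):
        torque = pid.update(setpoint=0.0, measured=angle)
        rate += torque * dt   # toy dynamics: torque changes angular rate
        angle += rate * dt    # angular rate changes the roll angle
    return angle


print(round(simulate(), 3))  # settles close to level (0 degrees)
```

A real autopilot runs loops like this per axis, hundreds of times a second, fed by gyro and accelerometer readings; the shared designs on sites like DIY Drones are largely refinements of this pattern.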
Ed Webb

WIRED - 0 views

  • Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems
  • how to balance privacy and security in a world that is starting to feel like a scene out of Minority Report
  • facial recognition technology often misidentifies black people and women at higher rates than white men
  • "The use of facial recognition in schools creates an unprecedented level of surveillance and scrutiny," says John Cusick, a fellow at the Legal Defense Fund. "It can exacerbate racial disparities in terms of how schools are enforcing disciplinary codes and monitoring their students."
  • The school would ask adults, not kids, to register their faces with the SAFR system. After they registered, they’d be able to enter the school by smiling at a camera at the front gate. (Smiling tells the software that it’s looking at a live person and not, for instance, a photograph). If the system recognizes the person, the gates automatically unlock
  • The software can predict a person's age and gender, enabling schools to turn off access for people below a certain age. But Glaser notes that if other schools want to register students going forward, they can
  • There are no guidelines about how long the facial data gets stored, how it’s used, or whether people need to opt in to be tracked.
  • Schools could, for instance, use facial recognition technology to monitor who's associating with whom and discipline students differently as a result. "It could criminalize friendships," says Cusick of the Legal Defense Fund.
  • SAFR boasts a 99.8 percent overall accuracy rating, based on a test, created by the University of Massachusetts, that vets facial recognition systems. But Glaser says the company hasn’t tested whether the tool is as good at recognizing black and brown faces as it is at recognizing white ones. RealNetworks deliberately opted not to have the software proactively predict ethnicity, the way it predicts age and gender, for fear of it being used for racial profiling. Still, testing the tool's accuracy among different demographics is key. Research has shown that many top facial recognition tools are particularly bad at recognizing black women
  • "It's tempting to say there's a technological solution, that we're going to find the dangerous people, and we're going to stop them," she says. "But I do think a large part of that is grasping at straws."
Ed Webb

An Algorithm Summarizes Lengthy Text Surprisingly Well - MIT Technology Review - 0 views

  • As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.
  • Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.
  • The system experiments in order to generate summaries of its own using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are increasingly now looking at reinforcement learning as a way to improve their systems.
  • “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,”
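The "positive feedback for actions that lead toward a particular objective" described above needs a scalar reward for each generated summary; published summarization work typically uses a ROUGE-style overlap score against a human-written reference. A crude unigram-F1 stand-in (illustrative only, not the actual system's metric):

```python
# A rough stand-in for the ROUGE-style reward used in reinforcement-learning
# summarization: unigram F1 between a generated summary and a reference.
# The reward is fed back as positive reinforcement during training.

def overlap_reward(candidate, reference):
    """Unigram F1 overlap between candidate and reference summaries."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    ref_counts = {}
    for word in ref:
        ref_counts[word] = ref_counts.get(word, 0) + 1
    common = 0
    for word in cand:
        if ref_counts.get(word, 0) > 0:
            common += 1
            ref_counts[word] -= 1
    if common == 0:
        return 0.0
    precision = common / len(cand)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

reference = "the agency approved the new drone rules"
good = "agency approved new drone rules"
bad = "the weather was pleasant today"
print(overlap_reward(good, reference) > overlap_reward(bad, reference))  # True
```

The closing quote above is the caveat: rewards based purely on word overlap say nothing about fluency or meaning, which is why the researchers argue some semantic and syntactic knowledge still has to be built in.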
Ed Webb

Sad by design | Eurozine - 0 views

  • ‘technological sadness’ – the default mental state of the online billions
  • If only my phone could gently weep. McLuhan’s ‘extensions of man’ has imploded right into the exhausted self.
  • Social reality is a corporate hybrid between handheld media and the psychic structure of the user. It’s a distributed form of social ranking that can no longer be reduced to the interests of state and corporate platforms. As online subjects, we too are implicit, far too deeply involved
  • Google and Facebook know how to utilize negative emotions, leading to the new system-wide goal: find personalized ways to make you feel bad
  • in Adam Greenfield’s Radical Technologies, where he notices that ‘it seems strange to assert that anything as broad as a class of technologies might have an emotional tenor, but the internet of things does. That tenor is sadness… a melancholy that rolls off it in waves and sheets. The entire pretext on which it depends is a milieu of continuously shattered attention, of overloaded awareness, and of gaps between people just barely annealed with sensors, APIs and scripts.’ It is a life ‘savaged by bullshit jobs, over-cranked schedules and long commutes, of intimacy stifled by exhaustion and the incapacity or unwillingness to be emotionally present.’
  • Omnipresent social media places a claim on our elapsed time, our fractured lives. We’re all sad in our very own way.4 As there are no lulls or quiet moments anymore, the result is fatigue, depletion and loss of energy. We’re becoming obsessed with waiting. How long have you been forgotten by your loved ones? Time, meticulously measured on every app, tells us right to our face. Chronos hurts. Should I post something to attract attention and show I’m still here? Nobody likes me anymore. As the random messages keep relentlessly piling in, there’s no way to halt them, to take a moment and think it all through.
  • Unlike the blog entries of the Web 2.0 era, social media have surpassed the summary stage of the diary in a desperate attempt to keep up with real-time regime. Instagram Stories, for example, bring back the nostalgia of an unfolding chain of events – and then disappear at the end of the day, like a revenge act, a satire of ancient sentiments gone by. Storage will make the pain permanent. Better forget about it and move on
  • By browsing through updates, we’re catching up with machine time – at least until we collapse under the weight of participation fatigue. Organic life cycles are short-circuited and accelerated up to a point where the personal life of billions has finally caught up with cybernetics
  • The price of self-control in an age of instant gratification is high. We long to revolt against the restless zombie inside us, but we don’t know how.
  • Sadness arises at the point we’re exhausted by the online world.6 After yet another app session in which we failed to make a date, purchased a ticket and did a quick round of videos, the post-dopamine mood hits us hard. The sheer busyness and self-importance of the world makes you feel joyless. After a dive into the network we’re drained and feel socially awkward. The swiping finger is tired and we have to stop.
  • Much like boredom, sadness is not a medical condition (though never say never because everything can be turned into one). No matter how brief and mild, sadness is the default mental state of the online billions. Its original intensity gets dissipated, it seeps out, becoming a general atmosphere, a chronic background condition. Occasionally – for a brief moment – we feel the loss. A seething rage emerges. After checking for the tenth time what someone said on Instagram, the pain of the social makes us feel miserable, and we put the phone away. Am I suffering from the phantom vibration syndrome? Wouldn’t it be nice if we were offline? Why is life so tragic? He blocked me. At night, you read through the thread again. Do we need to quit again, to go cold turkey again? Others are supposed to move us, to arouse us, and yet we don’t feel anything anymore. The heart is frozen
  • If experience is the ‘habit of creating isolated moments within raw occurrence in order to save and recount them,’11 the desire to anaesthetize experience is a kind of immune response against ‘the stimulations of another modern novelty, the total aesthetic environment’.
  • unlike burn-out, sadness is a continuous state of mind. Sadness pops up the second events start to fade away – and now you’re down the rabbit hole once more. The perpetual now can no longer be captured and leaves us isolated, a scattered set of online subjects. What happens when the soul is caught in the permanent present? Is this what Franco Berardi calls the ‘slow cancellation of the future’? By scrolling, swiping and flipping, we hungry ghosts try to fill the existential emptiness, frantically searching for a determining sign – and failing
  • Millennials, as one recently explained to me, have grown up talking more openly about their state of mind. As work/life distinctions disappear, subjectivity becomes their core content. Confessions and opinions are externalized instantly. Individuation is no longer confined to the diary or small group of friends, but is shared out there, exposed for all to see.
  • Snapstreaks, the ‘best friends’ fire emoji next to a friend’s name indicating that ‘you and that special person in your life have snapped one another within 24 hours for at least two days in a row.’19 Streaks are considered a proof of friendship or commitment to someone. So it’s heartbreaking when you lose a streak you’ve put months of work into. The feature all but destroys the accumulated social capital when users are offline for a few days. The Snap regime forces teenagers, the largest Snapchat user group, to use the app every single day, making an offline break virtually impossible.20 While relationships amongst teens are pretty much always in flux, with friendships being on the edge and always questioned, Snap-induced feelings sync with the rapidly changing teenage body, making puberty even more intense
  • The bare-all nature of social media causes rifts between lovers who would rather not have this information. But in the information age, this sits uneasily with the social pressure to participate in social networks.
  • dating apps like Tinder. These are described as time-killing machines – the reality game that overcomes boredom, or alternatively as social e-commerce – shopping my soul around. After many hours of swiping, suddenly there’s a rush of dopamine when someone likes you back. ‘The goal of the game is to have your egos boosted. If you swipe right and you match with a little celebration on the screen, sometimes that’s all that is needed.’ ‘We want to scoop up all our options immediately and then decide what we actually really want later.’25 On the other hand, ‘crippling social anxiety’ is when you match with somebody you are interested in, but you can’t bring yourself to send a message or respond to theirs ‘because oh god all I could think of was stupid responses or openers and she’ll think I’m an idiot and I am an idiot and…’
  • The metric to measure today’s symptoms would be time – or ‘attention’, as it is called in the industry. While for the archaic melancholic the past never passes, techno-sadness is caught in the perpetual now. Forward focused, we bet on acceleration and never mourn a lost object. The primary identification is there, in our hand. Everything is evident, on the screen, right in your face. Contrasted with the rich historical sources on melancholia, our present condition becomes immediately apparent. Whereas melancholy in the past was defined by separation from others, reduced contacts and reflection on oneself, today’s tristesse plays itself out amidst busy social (media) interactions. In Sherry Turkle’s phrase, we are alone together, as part of the crowd – a form of loneliness that is particularly cruel, frantic and tiring.
  • What we see today are systems that constantly disrupt the timeless aspect of melancholy.31 There’s no time for contemplation, or Weltschmerz. Social reality does not allow us to retreat.32 Even in our deepest state of solitude we’re surrounded by (online) others that babble on and on, demanding our attention
  • distraction does not pull us away, but instead draws us back into the social
  • The purpose of sadness by design is, as Paul B. Preciado calls it, ‘the production of frustrating satisfaction’.39 Should we have an opinion about internet-induced sadness? How can we address this topic without looking down on the online billions, without resorting to fast-food comparisons or patronizingly viewing people as fragile beings that need to be liberated and taken care of.
  • We overcome sadness not through happiness, but rather, as media theorist Andrew Culp has insisted, through a hatred of this world. Sadness occurs in situations where stagnant ‘becoming’ has turned into a blatant lie. We suffer, and there’s no form of absurdism that can offer an escape. Public access to a 21st-century version of dadaism has been blocked. The absence of surrealism hurts. What could our social fantasies look like? Are legal constructs such as creative commons and cooperatives all we can come up with? It seems we’re trapped in smoothness, skimming a surface littered with impressions and notifications. The collective imaginary is on hold. What’s worse, this banality itself is seamless, offering no indicators of its dangers and distortions. As a result, we’ve become subdued. Has the possibility of myth become technologically impossible?
  • We can neither return to mysticism nor to positivism. The naive act of communication is lost – and this is why we cry
Ed Webb

Border Patrol, Israel's Elbit Put Reservation Under Surveillance - 0 views

  • The vehicle is parked where U.S. Customs and Border Protection will soon construct a 160-foot surveillance tower capable of continuously monitoring every person and vehicle within a radius of up to 7.5 miles. The tower will be outfitted with high-definition cameras with night vision, thermal sensors, and ground-sweeping radar, all of which will feed real-time data to Border Patrol agents at a central operating station in Ajo, Arizona. The system will store an archive with the ability to rewind and track individuals’ movements across time — an ability known as “wide-area persistent surveillance.” CBP plans 10 of these towers across the Tohono O’odham reservation, which spans an area roughly the size of Connecticut. Two will be located near residential areas, including Rivas’s neighborhood, which is home to about 50 people. To build them, CBP has entered a $26 million contract with the U.S. division of Elbit Systems, Israel’s largest military company.
  • U.S. borderlands have become laboratories for new systems of enforcement and control
  • these same systems often end up targeting other marginalized populations as well as political dissidents
  • the spread of persistent surveillance technologies is particularly worrisome because they remove any limit on how much information police can gather on a person’s movements. “The border is the natural place for the government to start using them, since there is much more public support to deploy these sorts of intrusive technologies there,”
  • the company’s ultimate goal is to build a “layer” of electronic surveillance equipment across the entire perimeter of the U.S. “Over time, we’ll expand not only to the northern border, but to the ports and harbors across the country,”
  • In addition to fixed and mobile surveillance towers, other technology that CBP has acquired and deployed includes blimps outfitted with high-powered ground and air radar, sensors buried underground, and facial recognition software at ports of entry. CBP’s drone fleet has been described as the largest of any U.S. agency outside the Department of Defense
  • Nellie Jo David, a Tohono O’odham tribal member who is writing her dissertation on border security issues at the University of Arizona, says many younger people who have been forced by economic circumstances to work in nearby cities are returning home less and less, because they want to avoid the constant surveillance and harassment. “It’s especially taken a toll on our younger generations.”
  • Border militarism has been spreading worldwide owing to neoliberal economic policies, wars, and the onset of the climate crisis, all of which have contributed to the uprooting of increasingly large numbers of people, notes Reece Jones
  • In the U.S., leading companies with border security contracts include long-established contractors such as Lockheed Martin in addition to recent upstarts such as Anduril Industries, founded by tech mogul Palmer Luckey to feed the growing market for artificial intelligence and surveillance sensors — primarily in the borderlands. Elbit Systems has frequently touted a major advantage over these competitors: the fact that its products are “field-proven” on Palestinians
  • Verlon Jose, then-tribal vice chair, said that many nation members calculated that the towers would help dissuade the federal government from building a border wall across their lands. The Tohono O’odham are “only as sovereign as the federal government allows us to be,”
  • Leading Democrats have argued for the development of an ever-more sophisticated border surveillance state as an alternative to Trump’s border wall. “The positive, shall we say, almost technological wall that can be built is what we should be doing,” House Speaker Nancy Pelosi said in January. But for those crossing the border, the development of this surveillance apparatus has already taken a heavy toll. In January, a study published by researchers from the University of Arizona and Earlham College found that border surveillance towers have prompted migrants to cross along more rugged and circuitous pathways, leading to greater numbers of deaths from dehydration, exhaustion, and exposure.
  • “Walls are not only a question of blocking people from moving, but they are also serving as borders or frontiers between where you enter the surveillance state,” she said. “The idea is that at the very moment you step near the border, Elbit will catch you. Something similar happens in Palestine.”
  • CBP is by far the largest law enforcement entity in the U.S., with 61,400 employees and a 2018 budget of $16.3 billion — more than the militaries of Iran, Mexico, Israel, and Pakistan. The Border Patrol has jurisdiction 100 miles inland from U.S. borders, making roughly two-thirds of the U.S. population theoretically subject to its operations, including the entirety of the Tohono O’odham reservation
  • Between 2013 and 2016, for example, roughly 40 percent of Border Patrol seizures at immigration enforcement checkpoints involved 1 ounce or less of marijuana confiscated from U.S. citizens.
  • the agency uses its sprawling surveillance apparatus for purposes other than border enforcement
  • documents obtained via public records requests suggest that CBP drone flights included surveillance of Dakota Access pipeline protests
  • CBP’s repurposing of the surveillance tower and drones to surveil dissidents hints at other possible abuses. “It’s a reminder that technologies that are sold for one purpose, such as protecting the border or stopping terrorists — or whatever the original justification may happen to be — so often get repurposed for other reasons, such as targeting protesters.”
  • The impacts of the U.S. border on Tohono O’odham people date to the mid-19th century. The tribal nation’s traditional land extended 175 miles into Mexico before being severed by the 1853 Gadsden Purchase, a U.S. acquisition of land from the Mexican government. As many as 2,500 of the tribe’s more than 30,000 members still live on the Mexican side. Tohono O’odham people used to travel between the United States and Mexico fairly easily on roads without checkpoints to visit family, perform ceremonies, or obtain health care. But that was before the Border Patrol arrived en masse in the mid-2000s, turning the reservation into something akin to a military occupation zone. Residents say agents have administered beatings, used pepper spray, pulled people out of vehicles, shot two Tohono O’odham men under suspicious circumstances, and entered people’s homes without warrants. “It is apartheid here,” Ofelia Rivas says. “We have to carry our papers everywhere. And everyone here has experienced the Border Patrol’s abuse in some way.”
  • Tohono O’odham people have developed common cause with other communities struggling against colonization and border walls. David is among numerous activists from the U.S. and Mexican borderlands who joined a delegation to the West Bank in 2017, convened by Stop the Wall, to build relationships and learn about the impacts of Elbit’s surveillance systems. “I don’t feel safe with them taking over my community, especially if you look at what’s going on in Palestine — they’re bringing the same thing right over here to this land,” she says. “The U.S. government is going to be able to surveil basically anybody on the nation.”
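The "wide-area persistent surveillance" described at the top of this item amounts, technically, to a time-indexed archive of detections that can be replayed at will, which is what makes it so much more invasive than a live camera feed. A toy sketch of that rewind capability (class, field names, and data are illustrative):

```python
# Toy model of a persistent-surveillance archive: every detection is stored
# as (timestamp, track_id, position), so any track can be replayed later.

class DetectionArchive:
    def __init__(self):
        self._events = []

    def record(self, timestamp, track_id, position):
        self._events.append((timestamp, track_id, position))

    def rewind(self, track_id, start, end):
        """Replay one track's positions within [start, end], oldest first."""
        hits = [(t, pos) for t, tid, pos in self._events
                if tid == track_id and start <= t <= end]
        return sorted(hits)

archive = DetectionArchive()
archive.record(100, "vehicle-7", (32.1, -112.0))
archive.record(160, "vehicle-7", (32.2, -112.1))
archive.record(200, "vehicle-9", (32.0, -111.9))
archive.record(220, "vehicle-7", (32.3, -112.2))

# Everywhere vehicle-7 was seen up to time 200:
print(archive.rewind("vehicle-7", start=0, end=200))
```

The point the critics above make is that once such an archive exists, nothing in the technology limits the queries to border enforcement; any person or protest within the towers' radius can be retroactively tracked.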
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E... - 0 views

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses.
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?”
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on. “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10-years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • The US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and its provision of US-origin technology to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
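The identification step these reports describe — checking a captured face against a database of enrolled biometric records to levy fines or make arrests — typically reduces to nearest-neighbour matching of embedding vectors produced by a neural network. A minimal illustrative sketch of that matching step (all names, the 4-dimensional stand-in embeddings, and the 0.6 threshold are assumptions for illustration; real systems use high-dimensional embeddings from a trained face-recognition model):

```python
# Hypothetical sketch of face-database matching: a face image is reduced to an
# embedding vector by a neural network (not shown), then compared against
# enrolled identity embeddings by cosine similarity.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the ID of the best-matching enrolled embedding,
    or None if no candidate clears the similarity threshold."""
    best_id, best_score = None, threshold
    for person_id, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id


# Toy demonstration with 4-dimensional stand-in embeddings.
db = {
    "id-001": np.array([1.0, 0.0, 0.0, 0.0]),
    "id-002": np.array([0.0, 1.0, 0.0, 0.0]),
}
probe = np.array([0.95, 0.05, 0.0, 0.0])  # close to the enrolled id-001 vector
print(identify(probe, db))  # id-001
```

The sketch also shows why such systems worry civil-liberties researchers: the threshold is a tunable policy choice, and a looser one trades more false identifications for fewer misses.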
Ed Webb

Artificial meat? Food for thought by 2050 | Environment | The Guardian - 0 views

  • even with new technologies such as genetic modification and nanotechnology, hundreds of millions of people may still go hungry owing to a combination of climate change, water shortages and increasing food consumption.
  • Many low-tech ways are considered to effectively increase yields, such as reducing the 30-40% food waste that occurs both in rich and poor countries. If developing countries had better storage facilities and supermarkets and consumers in rich countries bought only what they needed, there would be far more food available.
  • Two "wild cards" could transform global meat and milk production. "One is artificial meat, which is made in a giant vat, and the other is nanotechnology, which is expected to become more important as a vehicle for delivering medication to livestock."
  • One of the gloomiest assessments comes from a team of British and South African economists who say that a vast effort must be made in agricultural research to create a new green revolution, but that seven multinational corporations, led by Monsanto, now dominate the global technology field.
  • a threat to the global commons in agricultural technology on which the green revolution has depended
  • Up to 70% of the energy needed to grow and supply food at present is fossil-fuel based which in turn contributes to climate change
  • The 21 papers published today in a special open-access edition of the Philosophical Transactions of the Royal Society (royalsociety.org) are part of a UK government Foresight study on the future of the global food industry. The final report will be published later this year, in advance of the UN climate talks in Cancun, Mexico.
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Google and Apple Digital Mapping | Data Collection - 0 views

  • There is a sense, in fact, in which mapping is the essence of what Google does. The company likes to talk about services such as Maps and Earth as if they were providing them for fun - a neat, free extra as a reward for using their primary offering, the search box. But a search engine, in some sense, is an attempt to map the world of information - and when you can combine that conceptual world with the geographical one, the commercial opportunities suddenly explode.
  • In a world of GPS-enabled smartphones, you're not just consulting Google's or Apple's data stores when you consult a map: you're adding to them.
  • There's no technical reason why, perhaps in return for a cheaper phone bill, you mightn't consent to be shown not the quickest route between two points, but the quickest route that passes at least one Starbucks. If you're looking at the world through Google glasses, who determines which aspects of "augmented reality" data you see - and did they pay for the privilege?
  • "The map is mapping us," says Martin Dodge, a senior lecturer in human geography at Manchester University. "I'm not paranoid, but I am quite suspicious and cynical about products that appear to be innocent and neutral, but that are actually vacuuming up all kinds of behavioural and attitudinal data."
  • it's hard to interpret the occasional aerial snapshot of your garden as a big issue when the phone in your pocket is assembling a real-time picture of your movements, preferences and behaviour
  • "There's kind of a fine line that you run," said Ed Parsons, Google's chief geospatial technologist, in a session at the Aspen Ideas Festival in Colorado, "between this being really useful, and it being creepy."
  • "Google and Apple are saying that they want control over people's real and imagined space."
  • It can be easy to assume that maps are objective: that the world is out there, and that a good map is one that represents it accurately. But that's not true. Any square kilometre of the planet can be described in an infinite number of ways: in terms of its natural features, its weather, its socio-economic profile, or what you can buy in the shops there. Traditionally, the interests reflected in maps have been those of states and their armies, because they were the ones who did the map-making, and the primary use of many such maps was military. (If you had the better maps, you stood a good chance of winning the battle. The logo of Britain's Ordnance Survey still includes a visual reference to the 18th-century War Department.) Now, the power is shifting. "Every map," the cartography curator Lucy Fellowes once said, "is someone's way of getting you to look at the world his or her way."
  • The question cartographers are always being asked at cocktail parties, says Heyman, is whether there's really any map-making still left to do: we've mapped the whole planet already, haven't we? The question could hardly be more misconceived. We are just beginning to grasp what it means to live in a world in which maps are everywhere - and in which, by using maps, we are mapped ourselves.
weismans95

We Are Already Cyborgs - YouTube - 2 views

  • Technology+evolution?