Dystopias: Group items tagged "recognition"

Ed Webb

WIRED - 0 views

  • Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems
  • how to balance privacy and security in a world that is starting to feel like a scene out of Minority Report
  • facial recognition technology often misidentifies black people and women at higher rates than white men
  • "The use of facial recognition in schools creates an unprecedented level of surveillance and scrutiny," says John Cusick, a fellow at the Legal Defense Fund. "It can exacerbate racial disparities in terms of how schools are enforcing disciplinary codes and monitoring their students."
  • The school would ask adults, not kids, to register their faces with the SAFR system. After they registered, they’d be able to enter the school by smiling at a camera at the front gate. (Smiling tells the software that it’s looking at a live person and not, for instance, a photograph.) If the system recognizes the person, the gates automatically unlock. (A toy sketch of this unlock flow follows this list.)
  • The software can predict a person's age and gender, enabling schools to turn off access for people below a certain age. But Glaser notes that if other schools want to register students going forward, they can
  • There are no guidelines about how long the facial data gets stored, how it’s used, or whether people need to opt in to be tracked.
  • Schools could, for instance, use facial recognition technology to monitor who's associating with whom and discipline students differently as a result. "It could criminalize friendships," says Cusick of the Legal Defense Fund.
  • SAFR boasts a 99.8 percent overall accuracy rating, based on a test, created by the University of Massachusetts, that vets facial recognition systems. But Glaser says the company hasn’t tested whether the tool is as good at recognizing black and brown faces as it is at recognizing white ones. RealNetworks deliberately opted not to have the software proactively predict ethnicity, the way it predicts age and gender, for fear of it being used for racial profiling. Still, testing the tool's accuracy among different demographics is key. Research has shown that many top facial recognition tools are particularly bad at recognizing black women
  • "It's tempting to say there's a technological solution, that we're going to find the dangerous people, and we're going to stop them," she says. "But I do think a large part of that is grasping at straws."
Ed Webb

Wearing a mask won't stop facial recognition anymore - The coronavirus is prompting fac... - 0 views

  • expanding this system to a wider group of people would be hard. When a population reaches a certain scale, the system is likely to encounter people with similar eyes. This might be why most commercial facial recognition systems that can identify masked faces seem limited to small-scale applications
  • Many residential communities, especially in areas hit hardest by the virus, have been limiting entry to residents only. Minivision introduced the new algorithm to its facial recognition gate lock systems in communities in Nanjing to quickly recognize residents without the need to take off masks.
  • SenseTime, which announced the rollout of its face mask-busting tech last week, explained that its algorithm is designed to read 240 facial feature key points around the eyes, mouth and nose. It can make a match using just the parts of the face that are visible. (A schematic sketch of this visible-subset matching follows this list.)
  • New forms of facial recognition can now recognize not just people wearing masks over their mouths, but also people in scarves and even with fake beards. And the technology is already rolling out in China because of one unexpected event: The coronavirus outbreak.
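
Matching on only the visible parts of the face is conceptually simple: keep the key points the mask doesn't cover and compare those. The sketch below is a schematic illustration under assumed conventions (a 240-point layout and a boolean visibility mask), not SenseTime's implementation; production systems compare learned embeddings rather than raw landmark geometry.

```python
import numpy as np

def masked_similarity(probe: np.ndarray,    # (240, 2) detected key points
                      gallery: np.ndarray,  # (240, 2) enrolled key points
                      visible: np.ndarray   # (240,) bool, False under the mask
                      ) -> float:
    """Compare two faces using only the landmarks a mask leaves visible."""
    def normalize(points: np.ndarray) -> np.ndarray:
        centered = points - points.mean(axis=0)              # remove position
        return centered / (np.linalg.norm(centered) + 1e-9)  # remove scale
    p = normalize(probe[visible])
    g = normalize(gallery[visible])
    return float(1.0 / (1.0 + np.linalg.norm(p - g)))        # higher = closer
```

The trade-off flagged above falls out directly: mask off the mouth and nose and fewer points carry the match, which is why eye-region-only systems struggle as the enrolled population grows.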
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • The US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
Ed Webb

Face Recognition Moves From Sci-Fi to Social Media - NYTimes.com - 0 views

  • the democratization of surveillance — may herald the end of anonymity
    • Ed Webb
       
      Democratization means putting this at the command of citizens, not of unaccountable corporations.
  • facial recognition is proliferating so quickly that some regulators in the United States and Europe are playing catch-up. On the one hand, they say, the technology has great business potential. On the other, because facial recognition works by analyzing and storing people’s unique facial measurements, it also entails serious privacy risks
  • researchers also identified the interests and predicted partial Social Security numbers of some students.
  • marketers could someday use more invasive techniques to identify random people on the street along with, say, their credit scores
  • “You might think it’s cool, or you might think it’s creepy, depending on the context,”
  • many users do not understand that Facebook’s tag suggestion feature involves storing people’s biometric data to re-identify them in later photos
  • Mr. Caspar said last week that he was disappointed with the negotiations with Facebook and that his office was now preparing to take legal action over the company’s biometric database. Facebook told a German broadcaster that its tag suggestion feature complied with European data protection laws. “There are many risks,” Mr. Caspar says. “People should be able to choose if they want to accept these risks, or not accept them.” He offered a suggestion for Americans, “Users in the United States have good reason to raise their voices to get the same right.”
Ed Webb

Facebook Acquires Israeli Facial Recognition Company - NYTimes.com - 0 views

  • Facebook’s short-term future, particularly on Wall Street, depends in large part on how it takes advantage of cellphones and tablets – and how it spins money from one of its singular assets: pictures of babies, weddings, vacations and parties. Face.com’s technology is designed not only to identify individuals but also their gender and age.
Ed Webb

We, The Technocrats - blprnt - Medium - 2 views

  • Silicon Valley’s go-to linguistic dodge: the collective we
  • “What kind of a world do we want to live in?”
  • Big tech’s collective we is its ‘all lives matter’, a way to soft-pedal concerns about privacy while refusing to speak directly to dangerous inequalities.
  • One two-letter word cannot possibly hold all of the varied experiences of data, specifically those of the people who are at the most immediate risk: visible minorities, LGBTQ+ people, indigenous communities, the elderly, the disabled, displaced migrants, the incarcerated
  • At least twenty-six states allow the FBI to perform facial recognition searches against their databases of images from driver's licenses and state IDs, despite the fact that the FBI’s own reports have indicated that facial recognition is less accurate for black people. Black people, already at a higher risk of arrest and incarceration than other Americans, feel these data systems in a much different way than I do
  • last week, the Department of Justice passed a brief to the Supreme Court arguing that sex discrimination protections do not extend to transgender people. If this ruling were to be supported, it would immediately put trans women and men at more risk than others from the surveillant data technologies that are becoming more and more common in the workplace. Trans people will be put in distinct danger — a reality that is lost when they are folded neatly into a communal we
  • I looked at the list of speakers for the conference in Brussels to get an idea of the particular we of Cook’s audience, which included Mark Zuckerberg, Google’s CEO Sundar Pichai and the King of Spain. Of the presenters, 57% were men and 83% were white. Only 4 of the 132 people on stage were black.
  • another we that Tim Cook necessarily speaks on the behalf of: privileged men in tech. This we includes Mark and Sundar; it includes 60% of Silicon Valley and 91% of its equity. It is this we who have reaped the most benefit from Big Data and carried the least risk, all while occupying the most time on stage
  • Here’s a more urgent question for us, one that doesn’t ask what we want but instead what they need:How can this new data world be made safer for the people who are facing real risks, right now?
  • “The act of listening has greater ethical potential than speaking” — Julietta Singh
Ed Webb

Border Patrol, Israel's Elbit Put Reservation Under Surveillance - 0 views

  • The vehicle is parked where U.S. Customs and Border Protection will soon construct a 160-foot surveillance tower capable of continuously monitoring every person and vehicle within a radius of up to 7.5 miles. The tower will be outfitted with high-definition cameras with night vision, thermal sensors, and ground-sweeping radar, all of which will feed real-time data to Border Patrol agents at a central operating station in Ajo, Arizona. The system will store an archive with the ability to rewind and track individuals’ movements across time — an ability known as “wide-area persistent surveillance.” CBP plans 10 of these towers across the Tohono O’odham reservation, which spans an area roughly the size of Connecticut. Two will be located near residential areas, including Rivas’s neighborhood, which is home to about 50 people. To build them, CBP has entered a $26 million contract with the U.S. division of Elbit Systems, Israel’s largest military company.
  • U.S. borderlands have become laboratories for new systems of enforcement and control
  • these same systems often end up targeting other marginalized populations as well as political dissidents
  • the spread of persistent surveillance technologies is particularly worrisome because they remove any limit on how much information police can gather on a person’s movements. “The border is the natural place for the government to start using them, since there is much more public support to deploy these sorts of intrusive technologies there,”
  • the company’s ultimate goal is to build a “layer” of electronic surveillance equipment across the entire perimeter of the U.S. “Over time, we’ll expand not only to the northern border, but to the ports and harbors across the country,”
  • In addition to fixed and mobile surveillance towers, other technology that CBP has acquired and deployed includes blimps outfitted with high-powered ground and air radar, sensors buried underground, and facial recognition software at ports of entry. CBP’s drone fleet has been described as the largest of any U.S. agency outside the Department of Defense
  • Nellie Jo David, a Tohono O’odham tribal member who is writing her dissertation on border security issues at the University of Arizona, says many younger people who have been forced by economic circumstances to work in nearby cities are returning home less and less, because they want to avoid the constant surveillance and harassment. “It’s especially taken a toll on our younger generations.”
  • Border militarism has been spreading worldwide owing to neoliberal economic policies, wars, and the onset of the climate crisis, all of which have contributed to the uprooting of increasingly large numbers of people, notes Reece Jones
  • In the U.S., leading companies with border security contracts include long-established contractors such as Lockheed Martin in addition to recent upstarts such as Anduril Industries, founded by tech mogul Palmer Luckey to feed the growing market for artificial intelligence and surveillance sensors — primarily in the borderlands. Elbit Systems has frequently touted a major advantage over these competitors: the fact that its products are “field-proven” on Palestinians
  • Verlon Jose, then-tribal vice chair, said that many nation members calculated that the towers would help dissuade the federal government from building a border wall across their lands. The Tohono O’odham are “only as sovereign as the federal government allows us to be,”
  • Leading Democrats have argued for the development of an ever-more sophisticated border surveillance state as an alternative to Trump’s border wall. “The positive, shall we say, almost technological wall that can be built is what we should be doing,” House Speaker Nancy Pelosi said in January. But for those crossing the border, the development of this surveillance apparatus has already taken a heavy toll. In January, a study published by researchers from the University of Arizona and Earlham College found that border surveillance towers have prompted migrants to cross along more rugged and circuitous pathways, leading to greater numbers of deaths from dehydration, exhaustion, and exposure.
  • “Walls are not only a question of blocking people from moving, but they are also serving as borders or frontiers between where you enter the surveillance state,” she said. “The idea is that at the very moment you step near the border, Elbit will catch you. Something similar happens in Palestine.”
  • CBP is by far the largest law enforcement entity in the U.S., with 61,400 employees and a 2018 budget of $16.3 billion — more than the militaries of Iran, Mexico, Israel, and Pakistan. The Border Patrol has jurisdiction 100 miles inland from U.S. borders, making roughly two-thirds of the U.S. population theoretically subject to its operations, including the entirety of the Tohono O’odham reservation
  • Between 2013 and 2016, for example, roughly 40 percent of Border Patrol seizures at immigration enforcement checkpoints involved 1 ounce or less of marijuana confiscated from U.S. citizens.
  • the agency uses its sprawling surveillance apparatus for purposes other than border enforcement
  • documents obtained via public records requests suggest that CBP drone flights included surveillance of Dakota Access pipeline protests
  • CBP’s repurposing of the surveillance tower and drones to surveil dissidents hints at other possible abuses. “It’s a reminder that technologies that are sold for one purpose, such as protecting the border or stopping terrorists — or whatever the original justification may happen to be — so often get repurposed for other reasons, such as targeting protesters.”
  • The impacts of the U.S. border on Tohono O’odham people date to the mid-19th century. The tribal nation’s traditional land extended 175 miles into Mexico before being severed by the 1853 Gadsden Purchase, a U.S. acquisition of land from the Mexican government. As many as 2,500 of the tribe’s more than 30,000 members still live on the Mexican side. Tohono O’odham people used to travel between the United States and Mexico fairly easily on roads without checkpoints to visit family, perform ceremonies, or obtain health care. But that was before the Border Patrol arrived en masse in the mid-2000s, turning the reservation into something akin to a military occupation zone. Residents say agents have administered beatings, used pepper spray, pulled people out of vehicles, shot two Tohono O’odham men under suspicious circumstances, and entered people’s homes without warrants. “It is apartheid here,” Ofelia Rivas says. “We have to carry our papers everywhere. And everyone here has experienced the Border Patrol’s abuse in some way.”
  • Tohono O’odham people have developed common cause with other communities struggling against colonization and border walls. David is among numerous activists from the U.S. and Mexican borderlands who joined a delegation to the West Bank in 2017, convened by Stop the Wall, to build relationships and learn about the impacts of Elbit’s surveillance systems. “I don’t feel safe with them taking over my community, especially if you look at what’s going on in Palestine — they’re bringing the same thing right over here to this land,” she says. “The U.S. government is going to be able to surveil basically anybody on the nation.”
Ed Webb

TSA is adding face recognition at big airports. Here's how to opt out. - The Washington... - 0 views

  • Any time data gets collected somewhere, it could also be stolen — and you only get one face. The TSA says all its databases are encrypted to reduce hacking risk. But in 2019, the Department of Homeland Security disclosed that photos of travelers were taken in a data breach, accessed through the network of one of its subcontractors.
  • “What we often see with these biometric programs is they are only optional in the introductory phases — and over time we see them becoming standardized and nationalized and eventually compulsory,” said Cahn. “There is no place more coercive to ask people for their consent than an airport.”
  • Those who have the privilege of not having to worry their face will be misread can zip right through — whereas people who don’t consent to it pay a tax with their time. At that point, how voluntary is it, really?
Ed Webb

Programmed for Love: The Unsettling Future of Robotics - The Chronicle Review - The Chr... - 0 views

  • Her prediction: Companies will soon sell robots designed to baby-sit children, replace workers in nursing homes, and serve as companions for people with disabilities. All of which to Turkle is demeaning, "transgressive," and damaging to our collective sense of humanity. It's not that she's against robots as helpers—building cars, vacuuming floors, and helping to bathe the sick are one thing. She's concerned about robots that want to be buddies, implicitly promising an emotional connection they can never deliver.
  • We are already cyborgs, reliant on digital devices in ways that many of us could not have imagined just a few years ago
  • "We are hard-wired that if something meets extremely primitive standards, either eye contact or recognition or very primitive mutual signaling, to accept it as an Other because as animals that's how we're hard-wired—to recognize other creatures out there."
  • "Can a broken robot break a child?" they asked. "We would not consider the ethics of having children play with a damaged copy of Microsoft Word or a torn Raggedy Ann doll. But sociable robots provoke enough emotion to make this ethical question feel very real."
  • "The concept of robots as baby sitters is, intellectually, one that ought to appeal to parents more than the idea of having a teenager or similarly inexperienced baby sitter responsible for the safety of their infants," he writes. "Their smoke-detection capabilities will be better than ours, and they will never be distracted for the brief moment it can take an infant to do itself some terrible damage or be snatched by a deranged stranger."
  • "What if we get used to relationships that are made to measure?" Turkle asks. "Is that teaching us that relationships can be just the way we want them?" After all, if a robotic partner were to become annoying, we could just switch it off.
  • We've reached a moment, she says, when we should make "corrections"—to develop social norms to help offset the feeling that we must check for messages even when that means ignoring the people around us. "Today's young people have a special vulnerability: Although always connected, they feel deprived of attention," she writes. "Some, as children, were pushed on swings while their parents spoke on cellphones. Now these same parents do their e-mail at the dinner table." One 17-year-old boy even told her that at least a robot would remember everything he said, contrary to his father, who often tapped at a BlackBerry during conversations.
Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. (A toy sketch of this key-shattering idea follows this list.)
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
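
The Vanish mechanism quoted above (encrypt the data, then let the scattered key pieces erode) can be made concrete with a toy secret split. This is a minimal sketch assuming a simple n-of-n XOR split; the real Vanish system uses threshold secret sharing and stores the pieces in a distributed hash table, but the core idea, that losing pieces destroys the key and with it the data, is the same.

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """Split key into n shares; every single share is needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))  # final share completes the XOR
    return shares

def recombine(shares: list) -> bytes:
    """XOR all shares together to recover the key."""
    return reduce(xor, shares)

key = os.urandom(32)              # the key that encrypted the stored data
shares = split_key(key, 10)       # "shattered" and scattered across servers
assert recombine(shares) == key   # while every piece survives, data is readable
shares.pop()                      # one piece "erodes" as time passes...
# ...and the remaining nine pieces now reveal nothing about the key
```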
Ed Webb

A woman first wrote the prescient ideas Huxley and Orwell made famous - Quartzy - 1 views

  • In 1919, a British writer named Rose Macaulay published What Not, a novel about a dystopian future—a brave new world if you will—where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. You’ve probably never heard of Macaulay or What Not. However, Aldous Huxley, author of the science fiction classic Brave New World, hung out in the same London literary circles as her and his 1932 book contains many concepts that Macaulay first introduced in her work. In 2019, you’ll be able to read Macaulay’s book yourself and compare the texts as the British publisher Handheld Press is planning to re-release the forgotten novel in March. It’s been out of print since the year it was first released.
  • The resurfacing of What Not also makes this a prime time to consider another work that influenced Huxley’s Brave New World, the 1923 novel We by Yevgeny Zamyatin. What Not and We are lost classics about a future that foreshadows our present. Notably, they are also hidden influences on some of the most significant works of 20th century fiction, Brave New World and George Orwell’s 1984.
  • In Macaulay’s book—which is a hoot and well worth reading—a democratically elected British government has been replaced with a “United Council, five minds with but a single thought—if that,” as she put it. Huxley’s Brave New World is run by a similarly small group of elites known as “World Controllers.”
  • citizens of What Not are ranked based on their intelligence from A to C3 and can’t marry or procreate with someone of the same rank to ensure that intelligence is evenly distributed
  • Brave New World is more futuristic and preoccupied with technology than What Not. In Huxley’s world, procreation and education have become completely mechanized and emotions are strictly regulated pharmaceutically. Macaulay’s Britain is just the beginning of this process, and its characters are not yet completely indoctrinated into the new ways of the state—they resist it intellectually and question its endeavors, like the newly-passed Mental Progress Act. She writes:He did not like all this interfering, socialist what-not, which was both upsetting the domestic arrangements of his tenants and trying to put into their heads more learning than was suitable for them to have. For his part he thought every man had a right to be a fool if he chose, yes, and to marry another fool, and to bring up a family of fools too.
  • Where Huxley pairs dumb but pretty and “pneumatic” ladies with intelligent gentlemen, Macaulay’s work is decidedly less sexist.
  • We was published in French, Dutch, and German. An English version was printed and sold only in the US. When Orwell wrote about We in 1946, it was only because he’d managed to borrow a hard-to-find French translation.
  • While Orwell never indicated that he read Macaulay, he shares her subversive and subtle linguistic skills and satirical sense. His protagonist, Winston—like Kitty—works for the government in its Ministry of Truth, or Minitrue in Newspeak, where he rewrites historical records to support whatever Big Brother currently says is good for the regime. Macaulay would no doubt have approved of Orwell’s wit. And his state ministries bear a striking similarity to those she wrote about in What Not.
  • Orwell was familiar with Huxley’s novel and gave it much thought before writing his own blockbuster. Indeed, in 1946, before the release of 1984, he wrote a review of Zamyatin’s We (pdf), comparing the Russian novel with Huxley’s book. Orwell declared Huxley’s text derivative, writing in his review of We in The Tribune:The first thing anyone would notice about We is the fact—never pointed out, I believe—that Aldous Huxley’s Brave New World must be partly derived from it. Both books deal with the rebellion of the primitive human spirit against a rationalised, mechanized, painless world, and both stories are supposed to take place about six hundred years hence. The atmosphere of the two books is similar, and it is roughly speaking the same kind of society that is being described, though Huxley’s book shows less political awareness and is more influenced by recent biological and psychological theories.
  • In We, the story is told by D-503, a male engineer, while in Brave New World we follow Bernard Marx, a protagonist with a proper name. Both characters live in artificial worlds, separated from nature, and they recoil when they first encounter people who exist outside of the state’s constructed and controlled cities.
  • Although We is barely known compared to Orwell and Huxley’s later works, I’d argue that it’s among the best literary science fictions of all time, and it’s highly relevant, as it was when first written. Noam Chomsky calls it “more perceptive” than both 1984 and Brave New World. Zamyatin’s futuristic society was so on point, he was exiled from the Soviet Union because it was such an accurate description of life in a totalitarian regime, though he wrote it before Stalin took power.
  • Macaulay’s work is more subtle and funny than Huxley’s. Despite being a century old, What Not is remarkably relevant and readable, a satire that only highlights how little has changed in the years since its publication and how dangerous and absurd state policies can be. In this sense then, What Not reads more like George Orwell’s 1949 novel 1984 
  • Orwell was critical of Zamyatin’s technique. “[We] has a rather weak and episodic plot which is too complex to summarize,” he wrote. Still, he admired the work as a whole. “[Its] intuitive grasp of the irrational side of totalitarianism—human sacrifice, cruelty as an end in itself, the worship of a Leader who is credited with divine attributes—[…] makes Zamyatin’s book superior to Huxley’s,”
  • Like our own tech magnates and nations, the United State of We is obsessed with going to space.
  • Perhaps in 2019 Macaulay’s What Not, a clever and subversive book, will finally get its overdue recognition.
Ed Webb

Scientific blinders: Learning from the moral failings of Nazi physicists - Bulletin of ... - 0 views

  • As the evening progressed, more and more questions concerning justice and ethics occurred to the physicists: Are atomic weapons inherently inhumane, and should they never be used? If the Germans had come to possess such weapons, what would be the world’s fate? What constitutes real patriotism in Nazi Germany—working for the regime’s success, or its defeat? The scientists expressed surprise and bafflement at their colleagues’ opinions, and their own views sometimes evolved from one moment to the next. The scattered, changing opinions captured in the Farm Hall transcripts highlight that, in their five years on the Nazi nuclear program, the German physicists had likely failed to wrestle meaningfully with these critical questions.
  • looking back at the Uranium Club serves to remind us scientists of how easy it is to focus on technical matters and avoid considering moral ones. This is especially true when the moral issues are perplexing, when any negative impacts seem distant, and when the science is exciting.
  • engineers who develop tracking or facial-recognition systems may be creating tools that can be purchased by repressive regimes intent on spying on and suppressing dissent. Accordingly, those researchers have a certain obligation to consider their role and the impact of their work.
  • reflecting seriously on the societal context of a research position may prompt a scientist to accept the job—and to take it upon herself or himself to help restrain unthinking innovation at work, by raising questions about whether every feature that can be added should in fact be implemented. (The same goes for whether certain lines of research should be pursued and particular findings published.)
  • The challenge for each of us, moving forward, is to ask ourselves and one another, hopefully far earlier in the research process than did Germany’s Walther Gerlach: “What are we working for?”
  • If you get the opportunity, see, or at least read, the plays The Physicists (Die Physiker) by Friedrich Dürrenmatt and Copenhagen by Michael Frayn.
Ed Webb

Nine million logs of Brits' road journeys spill onto the internet from password-less nu... - 0 views

  • In a blunder described as "astonishing and worrying," Sheffield City Council's automatic number-plate recognition (ANPR) system exposed to the internet 8.6 million records of road journeys made by thousands of people
  • The Register learned of the unprotected dashboard from infosec expert and author Chris Kubecka, working with freelance writer Gerard Janssen, who stumbled across it using search engine Censys.io. She said: "Was the public ever told the system would be in place and that the risks were reasonable? Was there an opportunity for public discourse – or, like in Hitchhiker's Guide to the Galaxy, were the plans in a planning office at an impossible or undisclosed location?"
  • The dashboard was taken offline within a few hours of The Register alerting officials. Sheffield City Council and South Yorkshire Police added: "As soon as this was brought to our attention we took action to deal with the immediate risk and ensure the information was no longer viewable externally. Both Sheffield City Council and South Yorkshire Police have also notified the Information Commissioner's Office. We will continue to investigate how this happened and do everything we can to ensure it will not happen again."
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Zoom urged by rights groups to rule out 'creepy' AI emotion tech - 0 views

  • Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination
  • "If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,"
  • The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls
  • "This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,"