
Home/ History Readings/ Group items tagged facial


Javier E

Facebook's Push for Facial Recognition Prompts Privacy Alarms - The New York Times

  • Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people. Facebook has said that users are in charge of that process, telling them: “You control face recognition.”
  • But critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off.
  • Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data.
  • In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool.
  • People could turn it off. But privacy experts said Facebook had neither obtained users’ opt-in consent for the technology nor explicitly informed them that the company could benefit from scanning their photos.
  • Separately, privacy and consumer groups lodged a complaint with the Federal Trade Commission in April saying Facebook added facial recognition services, like the feature to help identify impersonators, without obtaining prior consent from people before turning it on. The groups argued that Facebook violated a 2011 consent decree that prohibits it from deceptive privacy practices.
  • Critics said Facebook took an early lead in consumer facial recognition services partly by turning on the technology as the default option for users. In 2010, it introduced a photo-labeling feature called Tag Suggestions that used face-matching software to suggest the names of people in users’ photos.
  • “Facebook is somehow threatening me that, if I do not buy into face recognition, I will be in danger,” said Viviane Reding, the former justice commissioner of the European Commission who is now a member of the European Parliament. “It goes completely against the European law because it tries to manipulate consent.”
  • “When Tag Suggestions asks you ‘Is this Jill?’ you don’t think you are annotating faces to improve Facebook’s face recognition algorithm,” said Brian Brackeen, the chief executive of Kairos, a facial recognition company. “Even the premise is an unfair use of people’s time and labor.”
  • The huge trove of identified faces, he added, enabled Facebook to quickly develop one of the world’s most powerful commercial facial recognition engines. In 2014, Facebook researchers said they had trained face-matching software “on the largest facial dataset to date, an identity labeled dataset of four million facial images.”
  • Facebook may only be getting started with its facial recognition services. The social network has applied for various patents, many of them still under consideration, which show how it could use the technology to track its online users in the real world.
  • One patent application, published last November, described a system that could detect consumers within stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases, the document said.
  • Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones.
  • But legal filings in the class-action suit hint at the technology’s importance to Facebook’s business.
  • If the suit were to move forward, Facebook’s lawyers argued in a recent court document, “the reputational and economic costs to Facebook will be irreparable.”
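The matching flow described in these annotations (compare a detected face against users who enabled the setting, and discard the data when nothing matches) can be sketched as a toy pipeline. All names, vectors, and the similarity threshold below are invented for illustration; real systems compare high-dimensional embeddings produced by a neural network, not three-number vectors:

```python
import math

# Hypothetical enrolled users who turned the face recognition setting ON.
# Each "template" is a tiny stand-in for a real face embedding (~128-512 dims).
ENROLLED = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_or_delete(face_vector, threshold=0.95):
    """Return the best-matching enrolled name, or None after discarding the data."""
    best_name = max(ENROLLED, key=lambda n: cosine_similarity(face_vector, ENROLLED[n]))
    if cosine_similarity(face_vector, ENROLLED[best_name]) >= threshold:
        return best_name
    face_vector.clear()  # toy stand-in for "immediately deletes the facial data"
    return None
```

The dispute the annotations describe is visible even in this sketch: the face must be scanned and compared before the system can decide there is no match to keep.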
Javier E

The Secretive Company That Might End Privacy as We Know It - The New York Times

  • Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
  • Without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year.
  • The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
  • It’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
  • “The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”
  • While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
  • The company eventually started answering my questions, saying that its earlier silence was typical of an early-stage start-up in stealth mode. Mr. Ton-That acknowledged designing a prototype for use with augmented-reality glasses but said the company had no plans to release it.
  • In addition to Mr. Ton-That, Clearview was founded by Richard Schwartz — who was an aide to Rudolph W. Giuliani when he was mayor of New York — and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.
  • “I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”
  • “In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” said Jeremiah Hall, Mr. Thiel’s spokesman. “That was Peter’s only contribution; he is not involved in the company.”
  • He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo.
  • Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.
  • Another engineer was hired to perfect a facial recognition algorithm that was derived from academic papers. The result: a system that uses what Mr. Ton-That described as a “state-of-the-art neural net” to convert all the images into mathematical formulas, or vectors, based on facial geometry — like how far apart a person’s eyes are.
  • Clearview created a vast directory that clustered all the photos with similar vectors into “neighborhoods.”
  • When a user uploads a photo of a face into Clearview’s system, it converts the face into a vector and then shows all the scraped photos stored in that vector’s neighborhood — along with the links to the sites from which those images came.
  • Mr. Schwartz paid for server costs and basic expenses, but the operation was bare bones; everyone worked from home. “I was living on credit card debt,” Mr. Ton-That said. “Plus, I was a Bitcoin believer, so I had some of those.”
  • The company soon changed its name to Clearview AI and began marketing to law enforcement. That was when the company got its first round of funding from outside investors: Mr. Thiel and Kirenaga Partners.
  • Mr. Schwartz and Mr. Ton-That met in 2016 at a book event at the Manhattan Institute, a conservative think tank. Mr. Schwartz, now 61, had amassed an impressive Rolodex working for Mr. Giuliani in the 1990s and serving as the editorial page editor of The New York Daily News in the early 2000s. The two soon decided to go into the facial recognition business together: Mr. Ton-That would build the app, and Mr. Schwartz would use his contacts to drum up commercial interest.
  • They immediately got a match: The man appeared in a video that someone had posted on social media, and his name was included in a caption on the video. “He did not have a driver’s license and hadn’t been arrested as an adult, so he wasn’t in government databases.”
  • The man was arrested and charged; Mr. Cohen said he probably wouldn’t have been identified without the ability to search social media for his face. The Indiana State Police became Clearview’s first paying customer, according to the company.
  • Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool.
  • The company’s most effective sales technique was offering 30-day free trials to officers, who then encouraged their acquisition departments to sign up and praised the tool to officers from other police departments at conferences and online, according to the company and documents provided by police departments in response to public-record requests. Mr. Ton-That finally had his viral hit.
  • Photos “could be covertly taken with telephoto lens and input into the software, without ‘burning’ the surveillance operation,” the detective wrote in the email, provided to The Times by two researchers.
  • Sergeant Ferrara found Clearview’s app superior, he said. Its nationwide database of images is much larger, and unlike FACES, Clearview’s algorithm doesn’t require photos of people looking straight at the camera.
  • “With Clearview, you can use photos that aren’t perfect,” Sergeant Ferrara said. “A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.”
  • Mr. Ton-That said the tool does not always work. Most of the photos in Clearview’s database are taken at eye level. Much of the material that the police upload is from surveillance cameras mounted on ceilings or high on walls.
  • Despite that, the company said, its tool finds matches up to 75 percent of the time. But it is unclear how often the tool delivers false matches, because it has not been tested by an independent party.
  • One reason that Clearview is catching on is that its service is unique. That’s because Facebook and other social media sites prohibit people from scraping users’ images — Clearview is violating the sites’ terms of service.
  • Some law enforcement officials said they didn’t realize the photos they uploaded were being sent to and stored on Clearview’s servers. Clearview tries to pre-empt concerns with an F.A.Q. document given to would-be clients that says its customer-support employees won’t look at the photos that the police upload.
  • Mr. Clement, now a partner at Kirkland & Ellis, wrote that the authorities don’t have to tell defendants that they were identified via Clearview, as long as it isn’t the sole basis for getting a warrant to arrest them.
  • Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see.
  • After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”
  • “It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” said Al Gidari, a privacy professor at Stanford Law School. “Absent a very strong federal privacy law, we’re all screwed.”
  • But if your profile has already been scraped, it is too late. The company keeps all the images it has scraped even if they are later deleted or taken down, though Mr. Ton-That said the company was working on a tool that would let people request that images be removed if they had been taken down from the website of origin.
  • Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, sees Clearview as the latest proof that facial recognition should be banned in the United States.
  • “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,”
  • “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
  • Mr. Ton-That said he was reluctant. “There’s always going to be a community of bad people who will misuse it,” he said.
  • Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name.
  • Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.
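The “neighborhoods” of similar vectors described in the annotations above amount to an approximate nearest-neighbor index: file every scraped photo under a coarse key, then compare a query photo only against photos that share its key. Here is a minimal sketch, using hypothetical grid quantization as the bucketing scheme (Clearview’s actual clustering method is not public):

```python
import math
from collections import defaultdict

def bucket_key(vec, grid=0.5):
    """Coarse 'neighborhood' key: quantize each coordinate onto a grid."""
    return tuple(round(x / grid) for x in vec)

class FaceIndex:
    """Toy nearest-neighbor index: photos with similar vectors share a bucket."""

    def __init__(self):
        # bucket key -> list of (vector, source_url) pairs
        self.neighborhoods = defaultdict(list)

    def add(self, vector, source_url):
        self.neighborhoods[bucket_key(vector)].append((vector, source_url))

    def search(self, query_vector):
        """Return source URLs from the query's neighborhood, nearest first."""
        candidates = self.neighborhoods.get(bucket_key(query_vector), [])
        ranked = sorted(candidates, key=lambda item: math.dist(item[0], query_vector))
        return [url for _, url in ranked]
```

A production system would use a dedicated library such as FAISS or Annoy and probe several adjacent buckets, so that faces landing near a grid boundary are not missed.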
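The “75 percent of the time” figure a few annotations above is a hit rate: it counts searches that return some candidate, not searches that return the right person. With hypothetical counts, the distinction looks like this:

```python
def match_stats(true_matches, false_matches, missed):
    """Hit rate vs. precision for a face-search tool (all counts hypothetical)."""
    searches = true_matches + false_matches + missed
    hit_rate = (true_matches + false_matches) / searches       # "found a match"
    precision = true_matches / (true_matches + false_matches)  # the match was right
    return hit_rate, precision

# Suppose 75 of 100 searches return a candidate, but 15 candidates are wrong:
hit, prec = match_stats(true_matches=60, false_matches=15, missed=25)
# The tool still "finds matches 75 percent of the time" even though one in
# five of those matches is a false identification.
```

This is why the lack of independent testing matters: the advertised number alone cannot distinguish these two quantities.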
Javier E

Tencent Uses Facial Recognition on Teenage Gamers - The New York Times

  • The room to maneuver is shrinking in China, where underage players are required to log on using their real names and identification numbers as part of countrywide regulations aimed at limiting screen time and keeping internet addiction in check. In 2019, the country imposed a cybercurfew barring those under 18 from playing games between 10 p.m. and 8 a.m.
  • Recognizing that wily teenagers might try to use their parents’ devices or identities to circumvent the restrictions, the Chinese internet conglomerate Tencent said this week that it would close the loophole by deploying facial recognition technology in its video games.
  • Privacy concerns were widely discussed when the real-name registration requirement for minors was introduced in 2019. Describing facial recognition technology as a double-edged sword, the China Security and Protection Industry Association, a government-linked trade group, said in a paper published last year that the mass collection of personal data could result in security breaches.
  • Tencent said it began testing facial recognition technology in April to verify the ages of avid nighttime players and has since used it in 60 of its games. In June, it prompted an average of 5.8 million users a day to show their faces while logging in, blocking more than 90 percent of those who rejected or failed facial verification from access to their accounts.
  • In the case of video games, the government has long blamed them for causing nearsightedness, sleep deprivation and low academic performance among young people. The 2019 regulations also limited how much time and money underage users could spend playing video games.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had spent three years gathering input on A.I. from thousands of experts, at a time when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • As they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
criscimagnael

Australia Wields a New DNA Tool to Crack Missing-Person Mysteries - The New York Times

  • The technique can predict a person’s ancestry and physical traits without the need for a match with an existing sample in a database.
  • When a man washed up on the shores of Christmas Island in 1942, lifeless and hunched over in a shrapnel-riddled raft, no one knew who he was.
  • It wasn’t until the 1990s that the Royal Australian Navy began to suspect that he may have been a sailor from the HMAS Sydney II, an Australian warship whose 645-member crew disappeared at sea when it sank off the coast of Western Australia during World War II.
  • In 2006, the man’s remains were exhumed, but DNA extracted from his teeth yielded no match with a list of people Navy officials thought might be his descendants. With few leads, the scientist who conducted the DNA test, Jeremy Austin, told the Navy about an emerging technique that could predict a person’s ancestry and physical traits from genetic material.
  • In Australia, forensic scientists are repurposing the technique to help link missing persons with unidentified remains in the hope of resolving long-running mysteries. In the case of the sailor, Dr. Austin sent the sample to researchers in Europe, who reported back that the man was of European ancestry and most likely had red hair and blue eyes.
  • That alone wasn’t enough to identify the sailor, but it narrowed the search. “In a ship full of 645 white guys, you wouldn’t expect to see more than two or three with this pigmentation,”
  • This forensic tool, which has been slowly advancing since the mid-2000s, is similar to genetic tests that estimate risks for certain diseases. About five years ago, scientists with the Australian Federal Police began developing their own version of the technology, which combines genomics, big data and machine learning. It became available for use last year.
  • The predictions from DNA phenotyping — whether a person had, say, brown hair and blue eyes — will be brought to life by a forensic artist, combining the phenotype information with renderings of bone structure to generate a three-dimensional digital facial reconstruction.
  • “It’s an investigative lead we’ve never had before,”
  • In the United States, police departments have for years been using private DNA phenotyping services, like one from the Virginia-based Parabon NanoLabs, to try to generate facial images of suspects. The images are sometimes distributed to the public to assist in investigations.
  • Many scientists, however, are skeptical of this application of the technology. “You cannot do a full facial prediction right now,” said Susan Walsh, a professor of biology at Indiana University-Purdue University Indianapolis who developed some of the earliest phenotyping methods for eye and hair color. “The foundation of the genetics is absolutely not there.”
  • Facial image prediction has been condemned by human rights organizations, including the A.C.L.U., which suggest that it risks being skewed by existing social prejudices.
  • The same DNA was then linked to dozens of serious crimes across Western Europe, prompting a theory that the perpetrator was a serial offender from a traveling Roma community. It turned out that the recurring genetic material belonged to a female Polish factory worker who had accidentally contaminated the cotton swabs used to collect the samples.
  • “The families want any and all techniques applied to these cases if it’s going to help answer the question of what happened,” she said.
  • Such was the case with the mystery sailor. After his genotype was sequenced and his phenotype predicted, a team of scientists across several Australian institutions, including Dr. Ward’s program, used this information to track down a woman they believed to be a living relative of the sailor. They checked her DNA and had a match.
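Trait prediction of this kind is, at bottom, a statistical model from genotype to phenotype probability. The sketch below uses an invented logistic model: the marker names, weights, and intercept are made up for illustration, whereas validated panels such as IrisPlex rest on carefully estimated parameters and many more markers:

```python
import math

# Invented effect sizes: each marker's allele dosage (0, 1, or 2 copies)
# shifts the log-odds of the "blue eyes" phenotype up or down.
WEIGHTS = {"rsA": 1.8, "rsB": 0.6, "rsC": -0.9}
INTERCEPT = -2.0

def blue_eye_probability(genotype):
    """Logistic model: marker dosages -> probability of the blue-eye phenotype."""
    log_odds = INTERCEPT + sum(w * genotype.get(m, 0) for m, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-log_odds))

# Two copies of the strongest-effect allele push the prediction toward blue:
p_high = blue_eye_probability({"rsA": 2, "rsB": 1, "rsC": 0})
p_low = blue_eye_probability({})  # no effect alleles observed
```

The skepticism quoted above fits this framing: eye and hair color have relatively strong, well-mapped genetic signals that such per-trait models can capture, while whole-face reconstruction does not.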
hannahcarter11

Swiss Vote To Ban Wearing Of Burqas In Public : NPR

  • Swiss voters approved a proposition Sunday banning facial coverings in public. Niqabs and burqas, worn by almost no one even among the country's Muslim population, will be banned outside of religious institutions. The new law doesn't apply to facial coverings for health reasons.
  • Switzerland will join several European countries that have implemented a ban on facial coverings, including France, Denmark, the Netherlands and Austria.
  • The new legislation was brought to the ballot through a people's initiative launched by the nation's right-wing Egerkingen Committee, the same group that led the charge to ban minarets over a decade ago
  • ...7 more annotations...
  • The Swiss government opposed the nationwide initiative as excessive and argued such bans should be decided by individual regions, two of which already have a "burqa ban" in place.
  • The ban barely passed a majority vote, with 51.2% of the Swiss voting in support of the proposal.
  • One of the largest backers of the initiative was the nationalist Swiss People's Party, which applauded the outcome of the vote and called the new measure "A strong symbol in the fight against radical political Islam."
  • Some feminist groups and progressive Muslims reportedly were supporters of the initiative, arguing that full face coverings are oppressive to women.
  • Other groups felt the new restriction was Islamophobic and that women should not be told what to wear.
  • "Today's decision is tearing open old wounds, expanding the principle of legal inequality and sending a clear signal of exclusion to the Muslim minority," the group wrote.
  • Researchers found that at most a few dozen Muslim women wear full face coverings in Switzerland. About 5% of Switzerland's population of 8.6 million is Muslim, the BBC reported.
Javier E

Facial Scanning Is Making Gains in Surveillance - NYTimes.com - 0 views

  • “I would say we’re at least five years off, but it all depends on what kind of goals they have in mind” for such a system,
Javier E

Opinion | The Real Google Censorship Scandal - The New York Times - 0 views

  • In his new book, “AI Superpowers: China, Silicon Valley, and the New World Order,” Dr. Lee argues that advances in artificial intelligence — the future of computing — will be enjoyed only by those with the ability to essentially shove increasing amounts of data into the maw of the machine. Right now, he notes, with China’s aggressive use of sensors and you-say-facial-recognition-I-say-surveillance, a population hooked on mobile in a much more significant way than here and consumers more willing to trade away their privacy for digital convenience, China’s internet companies have access to 10 to 15 times more data than American ones. Dr. Lee and others have called it a “data gap” that Google has to bridge, and soon, if it wants to remain competitive.
  • “To develop really strong A.I., you need a lot of data. Well, if you have an authoritarian government that says, ‘Hey, we’re now all doing facial recognition,’ you suddenly have a lot of data,” she said. “If you’re in the United States or Europe or whatever, you have to get consent, and that consent can be withdrawn. There are all kinds of hurdles to collecting the data, which means we’ll be slower. And I don’t have an issue with that except for the fact that China has the ability to employ technology that will simply be the dominant technology if they get there first.”
carolinehayter

Switzerland to ban wearing of burqa and niqab in public places | Switzerland | The Guar... - 0 views

  • Muslim groups criticise move, which they say will further stigmatise and marginalise their community
  • Switzerland will follow France, Belgium and Austria after narrowly voting in a referendum to ban women from wearing the burqa or niqab in public spaces.
  • Just over 51% of Swiss voters cast their ballots in favour of the initiative to ban people from covering their face completely on the street, in shops and restaurants.
  • ...9 more annotations...
  • Switzerland’s parliament and the seven-member executive council that constitutes the country’s federal government opposed the referendum proposal. They argued that full facial veils represented a “fringe phenomenon”, and instead proposed an initiative that would force people to lift their facial coverings when asked to confirm their identity to officials.
  • Muslim groups have criticised the ban. “This is clearly an attack against the Muslim community in Switzerland. What is aimed here is to stigmatise and marginalise Muslims even more,”
  • “A burqa ban would damage our reputation as an open and tolerant tourism destination,” said Nicole Brändle Schlegel of the HotellerieSuisse umbrella organisation.
  • Supporters of the ban argue that it also intended to stop violent street protesters and football hooligans wearing masks, and that the referendum text does not explicitly mention Islam or the words “niqab” or “burqa”. Their campaign, however, framed the referendum as a verdict on the role of Islam in public life.
  • Campaign ads it paid for showed a woman wearing a niqab and sunglasses alongside the slogan: “Stop extremism! Yes to the veil ban.”
  • A video on the Swiss government’s website explaining the arguments in favour of a ban proposed that “religious veils like the burqa or the niqab are a symbol of the oppression of women and aren’t suitable to our society”.
  • A recent study by the University of Lucerne put the number of women in Switzerland who wear a niqab at 21 to 37, and found no evidence at all of women wearing the burqa, which women were forced to wear in Afghanistan under the Taliban.
  • Muslims make up around 5% of the Swiss population
  • The referendum outcome means Switzerland will follow France, which banned wearing a full face veil in public in 2011. Full or partial bans on wearing face coverings in public are also in place in Austria, Belgium, Bulgaria, Denmark and the Netherlands.
Javier E

Opinion | If Stalin Had a Smartphone - The New York Times - 0 views

  • As online life expands, neighborhood life and social trust decline. As the social fabric decays, social isolation rises and online viciousness and swindling accumulate, you tell people that the state has to step in to restore trust. By a series of small ratcheted steps, you’ve been given permission to completely regulate their online life.
  • This, too, is essentially what is happening in China. As George Orwell and Aldous Huxley understood, if you want to be a good totalitarian, it isn’t enough to control behavior. To have total power you have to be able to control people’s minds. With modern information technology, the state can shape the intimate information pond in which we swim
  • Human history is a series of struggles for power. Every few generations, just for fun, the gods give us a new set of equipment that radically alters the game. We thought the new tools would democratize power, but they seem to have centralized it. It’s springtime for dictators
  • ...9 more annotations...
  • Back in Stalin’s day, social discipline was so drastic. You had to stage a show trial (so expensive!), send somebody to the gulag or organize a purge. Now your tyranny can be small, subtle and omnipresent. It’s like the broken windows theory of despotism. By punishing the small deviations, you prevent the big ones from ever happening.
  • Third, thanks to big data, today’s Stalin would be able to build a massive Social Credit System to score and rank citizens, like the systems the Chinese are now using. Governments, banks and online dating sites gather data on, well, everybody. Do you pay your debts? How many hours do you spend playing video games? Do you jaywalk?
  • some of the best minds in the world have spent tens of billions of dollars improving tools that predict personal consumption. This technology, too, has got to come in handy for any modern-day Stalin.
  • One Chinese firm, Yitu, installed a system that keeps a record of employees’ movements as they walk to the break room or rest room. It records them with blue dotted lines on a monitor. That would be so helpful for your thoroughly modern dictator.
  • this is not even to mention the facial recognition technology the Chinese are using to keep track of their own citizens. In Beijing, facial recognition is used in apartment buildings to prevent renters from subletting their apartments.
  • I feel bad for Joseph Stalin. He dreamed of creating a totalitarian society where every individual’s behavior could be predicted and controlled. But he was born a century too early. He lived before the technology that would have made being a dictator so much easier!
  • The internet of things means that our refrigerators, watches, glasses, phones and security cameras will soon be recording every move we make.
  • In the second place, thanks to artificial intelligence, Uncle Joe would have much better tools for predicting how his subjects are about to behave.
  • If your score is too low, you can get put on a blacklist. You may not be able to visit a museum. You may not be able to fly on a plane, check into a hotel, visit the mall or graduate from high school. Your daughter gets rejected by her favorite university.
martinelligi

Coronavirus in the U.S: How Did the Pandemic Get So Bad? | Time - 0 views

  • If, early in the spring, the U.S. had mobilized its ample resources and expertise in a coherent national effort to prepare for the virus, things might have turned out differently. If, in midsummer, the country had doubled down on the measures (masks, social-distancing rules, restricted indoor activities and public gatherings) that seemed to be working, instead of prematurely declaring victory, things might have turned out differently. The tragedy is that if science and common sense solutions were united in a national, coordinated response, the U.S. could have avoided many thousands of more deaths this summer.
  • More than 13 million Americans remain unemployed as of August, according to Bureau of Labor Statistics data published Sept. 4.
  • At this point, we can start to see why the U.S. foundered: a failure of leadership at many levels and across parties; a distrust of scientists, the media and expertise in general; and deeply ingrained cultural attitudes about individuality and how we value human lives have all combined to result in a horrifically inadequate pandemic response
  • ...10 more annotations...
  • Common-sense solutions like face masks were undercut or ignored. Research shows that wearing a facial covering significantly reduces the spread of COVID-19, and a pre-existing culture of mask wearing in East Asia is often cited as one reason countries in that region were able to control their outbreaks. In the U.S., Trump did not wear a mask in public until July 11, more than three months after the CDC recommended facial coverings, transforming what ought to have been a scientific issue into a partisan one.
  • Testing is key to a pandemic response—the more data officials have about an outbreak, the better equipped they are to respond. Rather than call for more testing, Trump has instead suggested that maybe the U.S. should be testing less. He has repeatedly, and incorrectly, blamed increases in new cases on more testing. “If we didn’t do testing, we’d have no cases,” the President said in June, later suggesting he was being sarcastic.
  • Seven months after the coronavirus was found on American soil, we’re still suffering hundreds, sometimes more than a thousand, deaths every day. An American Nurses Association survey from late July and early August found that of 21,000 U.S. nurses polled, 42% reported either widespread or intermittent shortages in personal protective equipment (PPE) like masks, gloves and medical gowns.
  • Among the world’s wealthy nations, only the U.S. has an outbreak that continues to spin out of control. Of the 10 worst-hit countries, the U.S. has the seventh-highest number of deaths per 100,000 population; the other nine countries in the top 10 have an average per capita GDP of $10,195, compared to $65,281 for the U.S. Some countries, like New Zealand, have even come close to eradicating COVID-19 entirely.
  • The coronavirus has laid bare the inequalities of American public health. Black Americans are nearly three times as likely as white Americans to get COVID-19, nearly five times as likely to be hospitalized and twice as likely to die. As the Centers for Disease Control and Prevention (CDC) notes, being Black in the U.S. is a marker of risk for underlying conditions that make COVID-19 more dangerous, “including socioeconomic status, access to health care and increased exposure to the virus due to occupation (e.g., frontline, essential and critical infrastructure workers).” In other words, COVID-19 is more dangerous for Black Americans because of generations of systemic racism and discrimination. The same is true to a lesser extent for Native American and Latino communities, according to CDC data.
  • Americans today tend to value the individual over the collective. A 2011 Pew survey found that 58% of Americans said “freedom to pursue life’s goals without interference from the state” is more important than the state guaranteeing “nobody is in need.” It’s easy to view that trait as a root cause of the country’s struggles with COVID-19; a pandemic requires people to make temporary sacrifices for the benefit of the group, whether it’s wearing a mask or skipping a visit to their local bar.
  • But at least some Americans still refuse to take such a simple step as wearing a mask. Why? Because we’re also in the midst of an epistemic crisis. Republicans and Democrats today don’t just disagree on issues; they disagree on the basic truths that structure their respective realities.
  • There’s another disturbing undercurrent to Americans’ attitude toward the pandemic thus far: a seeming willingness to accept mass death. As a nation we may have become dull to horrors that come our way as news, from gun violence to the seemingly never-ending incidents of police brutality to the water crises in Flint, Mich., and elsewhere. Americans seem to have already been inured to the idea that other Americans will die regularly, when they do not need to.
  • Our leaders need to listen to experts and let policy be driven by science. And for the time being, all of us need to accept that there are certain things we cannot, or should not, do, like go to the movies or host an indoor wedding.
  • The U.S. is no longer the epicenter of the global pandemic; that unfortunate torch has been passed to countries like India, Argentina and Brazil. And in the coming months there might yet be a vaccine, or more likely a cadre of vaccines, that finally halts the march of COVID-19 through the country.
carolinehayter

Researchers Demand That Google Rehire And Promote Timnit Gebru After Firing : NPR - 0 views

  • Members of a prestigious research unit at Google have sent a letter to the company's chief executive demanding that ousted artificial intelligence researcher Timnit Gebru be reinstated.
  • Gebru, who studies the ethics of AI and was one of the only Black research scientists at Google, says she was unexpectedly fired after a dispute over an academic paper and months of speaking out about the need for more women and people of color at the tech giant.
  • "Offering Timnit her position back at a higher level would go a long way to help re-establish trust and rebuild our team environment,"
  • ...13 more annotations...
  • "The removal of Timnit has had a demoralizing effect on the whole of our team."
  • Since Gebru's termination earlier this month, more than 2,600 Googlers have signed an open letter expressing dismay over the way Gebru exited the company and asking executives for a full explanation of what prompted her dismissal.
  • Gebru's firing happened "without warning rather than engaging in dialogue."
  • Google has maintained that Gebru resigned, though Gebru herself says she never voluntarily agreed to leave the company.
  • They say Jeff Dean, senior vice president of Google Research, and other executives involved in Gebru's firing need to be held accountable.
  • She also was the co-author of pioneering research into facial recognition technology that demonstrated how people of color and women are misidentified far more often than white faces. The study helped persuade IBM, Amazon and Microsoft to stop selling the technology to law enforcement.
  • At Google, Gebru's former team wrote in the Wednesday letter that studying ways to reduce the harm of AI on marginalized groups is key to their mission.
  • Last month, Google abruptly asked Gebru to retract a research paper focused on the potential biases baked into an AI system that attempts to mimic human speech. The technology helps power Google's search engine. Google claims that the paper did not meet its bar for publication and that Gebru did not follow the company's internal review protocol.
  • However, Gebru and her supporters counter that she was being targeted because of how outspoken she was about diversity issues, a theme that was underscored in the letter.
  • The letter says Google's top brass have committed to advancing diversity, equity and inclusion among its research units, but unless more concrete and immediate action is taken, those promises are "virtue signaling; they are damaging, evasive, defensive and demonstrate leadership's inability to understand how our organization is part of the problem," according to the letter.
  • Gebru helped establish Black in AI, a group that supports Black researchers in the field of artificial intelligence.
  • saying such "gaslighting" has caused harm to Gebru and the Black community at Google.
  • Google has a history of striking back against employees who agitate internally for change. Organizers of the worldwide walkouts at Google in 2018 over sexual harassment and other issues were fired by the company. And more recently, the National Labor Relations Board accused Google of illegally firing workers who were involved in union organizing.
Javier E

Scientists See Advances in Deep Learning, a Part of Artificial Intelligence - NYTimes.com - 0 views

  • Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.
  • They offer the promise of machines that converse with humans and perform tasks like driving cars and working in factories, raising the specter of automated robots that could replace human workers.
  • what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just “neural nets” for their resemblance to the neural connections in the brain.
  • ...2 more annotations...
  • With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise information on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more commonplace.
  • Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.
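The bullet above sketches the basic anatomy of a neural network: inputs, hidden layers, outputs, "trained" by repeated exposure to examples. A minimal illustration of that idea in plain Python, assuming a toy XOR task, sigmoid units, and squared-error backpropagation (all illustrative choices, not details from the article):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny network: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output.
N_HIDDEN = 4
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_HIDDEN)]  # last weight is the bias
w_out = [random.uniform(-1, 1) for _ in range(N_HIDDEN + 1)]                     # last weight is the bias

# "Repeated exposure": the four XOR examples, shown over and over.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(N_HIDDEN)) + w_out[-1])
    return h, y

def train(epochs=5000, lr=0.5):
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            # Output-layer error signal (squared-error loss, sigmoid derivative).
            d_out = (y - t) * y * (1 - y)
            for i in range(N_HIDDEN):
                # Propagate the error back through each hidden unit.
                d_h = d_out * w_out[i] * h[i] * (1 - h[i])
                w_hidden[i][0] -= lr * d_h * x[0]
                w_hidden[i][1] -= lr * d_h * x[1]
                w_hidden[i][2] -= lr * d_h
                w_out[i] -= lr * d_out * h[i]
            w_out[-1] -= lr * d_out

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
train()
after = loss()
```

The pattern-recognition claim in the highlight reduces to this loop: each pass over the examples nudges the weights so the network's outputs drift toward the targets, which is why `after` ends up well below `before`.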
Javier E

Confessions of a 'Bad' Teacher - NYTimes.com - 0 views

  • In fact, I don’t just want to get better; like most teachers I know, I’m a bit of a perfectionist. I have to be. Dozens and dozens of teenagers scrutinize my language, clothing and posture all day long, all week long. If I’m off my game, the students tell me. They comment on my taste in neckties, my facial hair, the quality of my lessons. All of us teachers are evaluated all day long, already. It’s one of the most exhausting aspects of our job. Teaching was a high-pressure job long before No Child Left Behind and the current debates about teacher evaluation. These debates seem to rest on the assumption that, left to our own devices, we teachers would be happy to coast through the school year, let our skills atrophy and collect our pensions. The truth is, teachers don’t need elected officials to motivate us. If our students are not learning, they let us know. They put their heads down or they pass notes. They raise their hands and ask for clarification. Sometimes, they just stare at us like zombies. Few things are more excruciating for a teacher than leading a class that’s not learning. Good administrators use the evaluation processes to support teachers and help them avoid those painful classroom moments — not to weed out the teachers who don’t produce good test scores or adhere to their pedagogical beliefs. Worst of all, the more intense the pressure gets, the worse we teach
Javier E

When Harry Met eHarmony - Megan Garber - The Atlantic - 0 views

  • The rom-com industrial complex—the cultural institution charged with capturing romance as a kind of ritual—failed to recognize the evolution of romance itself.
  • the rom-com's normative approach to relationships—the posture that treats romance and romantic partners as puzzles to be solved—is the thing that may be dying. Or, rather, the thing that may be evolving, slowly and steadily, into something else. We have less of a need, now, to look to the movies to give structure to our romantic relationships: The world is doing that for us, already. Under the influence of Match and eHarmony and Tinder and JDate and Our Time and OK Cupid and Farmers Only and all the others—services that promise to mate souls according to algorithms—our sense of romance itself is becoming ever more formulaic. The will-they-or-won't-they—the gooey stuff that forms the rom-com's gooey center—becomes less compelling a tension in a world ever more dominated by signals and swipes. We are ceding some of love's mystery to measurement.
  • the axis romance has revolved around—the guiding sense of mystery, of uncertainty, of otherness—is giving way, under the influence of digital capabilities, to more pragmatic orientations. eHarmony promises to connect people across “29 dimensions® of compatibility,” breaking those out into “Core Traits” and “Vital Attributes.” Match.com now lets MENSA members connect through its platform, and is experimenting with facial recognition programs to help users better find “their type.” The promises of big data—insights! wisdom! relevance!—are insinuating themselves onto relationships. Love, actually, is now more
  • ...2 more annotations...
  • The rom-com, in general, has responded to this enormous cultural shift by ignoring it. There has been no You’ve Got Mail for the OK Cupid era. There hasn’t even been a Love Actually. But we've gotten something in their place: a move away from the sappy-and-stale dude-and-lady rom-com—and toward more expansive explorations of relationships at large.
  • The rom-com, as a genre, is moving past its obsession with nubile youth to present broad forms of love and relatively inclusive notions of sexuality and, perhaps even more subversively, relationships between people over 40. It is interpreting—and modeling—wide-ranging notions of what romance can be, trading the familiar arc of love, loss, reunion, and Happily Ever After for something more nuanced, more messy, more real.
aqconces

Were the Terracotta Warriors Based on Actual People? | History | Smithsonian - 0 views

  • When farmers digging a well in 1974 discovered the Terracotta Army, commissioned by China’s first emperor two millennia ago, the sheer numbers were staggering: an estimated 7,000 soldiers, plus horses and chariots
  • But it’s the huge variety of facial features and expressions that still puzzle scholars. Were standard parts fit together in a Mr. Potato Head approach or was each warrior sculpted to be unique, perhaps a facsimile of an actual person? How could you even know?
  • Short answer: The ears have it. Andrew Bevan, an archaeologist at University College London, along with colleagues, used advanced computer analyses to compare 30 warrior ears photographed at the Mausoleum of the First Qin Emperor in China to find out whether, statistically speaking, the auricular ridges are as “idiosyncratic” and “strongly individual” as they are in people.
  • ...1 more annotation...
  • Turns out no two ears are alike—raising the possibility that the figures are based on a real army of warriors. Knowing for sure will take time: There are over 13,000 ears to go.
qkirkpatrick

Opinion: The 'bionic men' of World War I - CNN.com - 0 views

  • World War I slaughtered and mutilated soldiers on a scale the world had never seen. It's little wonder that its vast numbers of returning crippled veterans led to major gains in the technology of prosthetic limbs.
  • Virtually every device produced today to replace lost body function of soldiers returning from our modern wars -- as well as accident victims, or victims of criminal acts, such as the Boston Marathon bombings -- has its roots in the technological advances that emerged from World War I.
  • Thanks to better surgery, many now survived. On the German side alone, there were 2 million casualties, 64 percent of them with injured limbs. Some 67,000 were amputees. Over 4,000 amputations were performed on U.S. service personnel according to the U.S. Department of Veterans Affairs.
  • ...3 more annotations...
  • Glass eyes and a variety of facial prostheses allowed those with defacing injuries to appear in public. For example, a galvanized and painted copper plate could fill in the missing eye socket and neighboring maxillary bone.
  • The image of men tied to their work resonates unsettlingly with Karl Marx's prediction that the urban proletariat would one day become a mere "appendage of the machine." It's an example of how military and industrial conceptions of the body were extended to dehumanize the body itself.
  • In 2008 runner Oscar Pistorius, a double-leg amputee, sought to compete in the Beijing Olympics, but his running blades, made of carbon fiber and modeled after a cheetah's leg, were seen by some as an unfair advantage. Four years later in London, he did compete in the Olympics, embodying a development that had its origins 100 years earlier, in World War I.
  •  
    After WWI, there were thousands of veterans who had lost limbs and other body parts. This led to a rise in prosthetic technology.
Javier E

The great artificial intelligence duopoly - The Washington Post - 0 views

  • The AI revolution will have two engines — China and the United States — pushing its progress swiftly forward. It is unlike any previous technological revolution that emerged from a singular cultural setting. Having two engines will further accelerate the pace of technology.
  • WorldPost: In your book, you talk about the “data gap” between these two engines. What do you mean by that? Lee: Data is the raw material on which AI runs. It is like the role of oil in powering an industrial economy. As an AI algorithm is fed more examples of the phenomenon you want the algorithm to understand, it gains greater and greater accuracy. The more faces you show a facial recognition algorithm, the fewer mistakes it will make in recognizing your face
  • All data is not the same, however. China and the United States have different strengths when it comes to data. The gap emerges when you consider the breadth, quality and depth of the data. Breadth means the number of users, the population whose actions are captured in data. Quality means how well-structured and well-labeled the data is. Depth means how many different data points are generated about the activities of each user.
  • ...15 more annotations...
  • Chinese and American companies are on relatively even footing when it comes to breadth. Though American Internet companies have a smaller domestic user base than China, which has over a billion users on 4G devices, the best American companies can also draw in users from around the globe, bringing their total user base to over a billion.
  • when it comes to depth of data, China has the upper hand. Chinese Internet users channel a much larger portion of their daily activities, transactions and interactions through their smartphones. They use their smartphones for managing their daily lives, from buying groceries at the market to paying their utility bills, booking train or bus tickets and to take out loans, among other things.
  • Weaving together data from mobile payments, public services, financial management and shared mobility gives Chinese companies a deep and more multi-dimensional picture of their users. That allows their AI algorithms to precisely tailor product offerings to each individual. In the current age of AI implementation, this will likely lead to a substantial acceleration and deepening of AI’s impact across China’s economy. That is where the “data gap” appears
  • The radically different business model in China, married to Chinese user habits, creates indigenous branding and monetization strategies as well as an entirely alternative infrastructure for apps and content. It is therefore very difficult, if not impossible, for any American company to try to enter China’s market or vice versa
  • companies in both countries are pursuing their own form of international expansion. The United States uses a “full platform” approach — all Google, all Facebook. Essentially Australia, North America and Europe completely accept the American methodology. That technical empire is likely to continue.
  • The Chinese have realized that the U.S. empire is too difficult to penetrate, so they are looking elsewhere. They are trying, and generally succeeding, in Southeast Asia, the Middle East and Africa. Those regions and countries have not been a focus of U.S. tech, so their products are not built with the cultures of those countries in mind. And since their demographics are closer to China’s — lower income and lots of people, including youth — the Chinese products are a better fit.
  • The jobs that AI cannot do are those of creators, or what I call “empathetic jobs” in services, which will be the largest category that can absorb those displaced from routine jobs. Many jobs will become available in this sector, from teaching to elderly care and nursing. A great effort must be made not only to increase the number of those jobs and create a career path for them but to increase their social status, which also means increasing the pay of these jobs.
  • Policy-wise, we are seeing three approaches. The Chinese have unleashed entrepreneurs with a utilitarian passion to commercialize technology. The Americans are similarly pro-entrepreneur, but the government takes a laissez-faire attitude and the entrepreneurs carry out more moonshots. And Europe is more consumer-oriented, trying to give ownership and control of data back to the individual.
  • An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War. Those who take the arms-race view are more interested in political posturing than the flourishing of humanity. The value of AI as an omni-use technology rests in its creative, not destructive, potential.
  • In a way, having parallel universes should diminish conflict. They can coexist while each can learn from the other. It is not a zero-sum game of winners and losers.
  • We will see a massive migration from one kind of employment to another, not unlike during the transition from agriculture to manufacturing. It will largely be the lower-wage jobs in routine work that will be eliminated, while the ultra-rich will stand to make a lot of money from AI. Social inequality will thus widen.
  • If you were to draw a map a decade from now, you would see China’s tech zone — built not on ownership but partnerships — stretching across Southeast Asia, Indonesia, Africa and to some extent South America. The U.S. zone would entail North America, Australia and Europe. Over time, the “parallel universes” already extant in the United States and China will grow to cover the whole world.
  • There are also issues related to poorer countries who have relied on either following the old China model of low-wage manufacturing jobs or of India’s call centers. AI will replace those jobs that were created by outsourcing from the West. They will be the first to go in the next 10 years. So, underdeveloped countries will also have to look to jobs for creators and in services.
  • I am opposed to the idea of universal basic income because it provides money both to those who don’t need it as well as those who do. And it doesn’t stimulate people’s desire to work. It puts them into a kind of “useless class” category with the terrible consequence of a resentful class without dignity or status.
  • To reinvigorate people’s desire to work with dignity, some subsidy can help offset the costs of critical needs that only humans can provide. That would be a much better use of the distribution of income than giving it to every person whether they need it or not. A far better idea would be for workers of the future to have an equity share in owning the robots — universal basic capital instead of universal basic income.
Javier E

Opinion | Warning! Everything Is Going Deep: 'The Age of Surveillance Capitalism' - The... - 0 views

  • recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before.
  • deep learning, deep insights, deep surveillance, deep facial recognition, deep voice recognition, deep automation and deep artificial minds.
  • how did we get so deep down where the sharks live?
  • The short answer: Technology moves up in steps, and each step, each new platform, is usually biased toward a new set of capabilities. Around the year 2000 we took a huge step up that was biased toward connectivity, because of the explosion of fiber-optic cable, wireless and satellites.
  • Around 2007, we took another big step up. The iPhone, sensors, digitization, big data, the internet of things, artificial intelligence and cloud computing melded together and created a new platform that was biased toward abstracting complexity at a speed, scope and scale we’d never experienced before.
  • Over the last decade, these advances in the speed of connectivity and the elimination of complexity have grown exponentially
  • It means machines can answer so many more questions than nonmachines, also known as “humans.” The percentage of calls a chatbot, or virtual agent, is able to handle without turning the caller over to a person is called its “containment rate,” and these rates are steadily soaring. Soon, automated systems will be so humanlike that they will have to self-identify as machines.
  • Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses.
  • But bad guys, who are always early adopters, also see the same potential to go deep in wholly new ways.
  • Surveillance capitalism,” Zuboff wrote, “unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence,’ and fabricated into prediction products that anticipate what you will do now, soon and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioral futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behavior.”
  • "People are looking to achieve very big numbers. Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they're saying, 'Why can't we do it with 1 percent of the people we have?'"
  • I wish I thought that catch-up was around the corner. I don’t. Our national discussion has never been more shallow — reduced to 280 characters.
  • This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust.
  • But deep trust and deep loyalty cannot be forged overnight. They take time.