QN2019: Group items tagged privacy

Aurialie Jublin

It's Time to Break Up Facebook - The New York Times - 0 views

  • Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.
  • Mark is a good, kind person. But I’m angry that his focus on growth led him to sacrifice security and civility for clicks. I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I’m worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them.
  • We are a nation with a tradition of reining in monopolies, no matter how well intentioned the leaders of these companies may be. Mark’s power is unprecedented and un-American. It is time to break up Facebook.
  • ...26 more annotations...
  • We already have the tools we need to check the domination of Facebook. We just seem to have forgotten about them. America was built on the idea that power should not be concentrated in any one person, because we are all fallible. That’s why the founders created a system of checks and balances. They didn’t need to foresee the rise of Facebook to understand the threat that gargantuan companies would pose to democracy. Jefferson and Madison were voracious readers of Adam Smith, who believed that monopolies prevent the competition that spurs innovation and leads to economic growth.
  • The Sherman Antitrust Act of 1890 outlawed monopolies. More legislation followed in the 20th century, creating legal and regulatory structures to promote competition and hold the biggest companies accountable. The Department of Justice broke up monopolies like Standard Oil and AT&T.
  • For many people today, it’s hard to imagine government doing much of anything right, let alone breaking up a company like Facebook. This isn’t by coincidence. Starting in the 1970s, a small but dedicated group of economists, lawyers and policymakers sowed the seeds of our cynicism. Over the next 40 years, they financed a network of think tanks, journals, social clubs, academic centers and media outlets to teach an emerging generation that private interests should take precedence over public ones. Their gospel was simple: “Free” markets are dynamic and productive, while government is bureaucratic and ineffective. By the mid-1980s, they had largely managed to relegate energetic antitrust enforcement to the history books.
  • It was this drive to compete that led Mark to acquire, over the years, dozens of other companies, including Instagram and WhatsApp in 2012 and 2014. There was nothing unethical or suspicious, in my view, in these moves.
  • Over a decade later, Facebook has earned the prize of domination. It is worth half a trillion dollars and commands, by my estimate, more than 80 percent of the world’s social networking revenue. It is a powerful monopoly, eclipsing all of its rivals and erasing competition from the social networking category. This explains why, even during the annus horribilis of 2018, Facebook’s earnings per share increased by an astounding 40 percent compared with the year before. (I liquidated my Facebook shares in 2012, and I don’t invest directly in any social media companies.)
  • Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved. In one of the government’s few attempts to rein in the company, the F.T.C. in 2011 issued a consent decree that Facebook not share any private information beyond what users already agreed to. Facebook largely ignored the decree. Last month, the day after the company predicted in an earnings call that it would need to pay up to $5 billion as a penalty for its negligence — a slap on the wrist — Facebook’s shares surged 7 percent, adding $30 billion to its value, six times the size of the fine.
  • As markets become more concentrated, the number of new start-up businesses declines. This holds true in other high-tech areas dominated by single companies, like search (controlled by Google) and e-commerce (taken over by Amazon). Meanwhile, there has been plenty of innovation in areas where there is no monopolistic domination, such as in workplace productivity (Slack, Trello, Asana), urban transportation (Lyft, Uber, Lime, Bird) and cryptocurrency exchanges (Ripple, Coinbase, Circle).
  • Facebook’s business model is built on capturing as much of our attention as possible to encourage people to create and share more information about who they are and who they want to be. We pay for Facebook with our data and our attention, and by either measure it doesn’t come cheap.
  • The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people. Facebook engineers write algorithms that select which users’ comments or experiences end up displayed in the News Feeds of friends and family. These rules are proprietary and so complex that many Facebook employees themselves don’t understand them.
  • Facebook has responded to many of the criticisms of how it manages speech by hiring thousands of contractors to enforce the rules that Mark and senior executives develop. After a few weeks of training, these contractors decide which videos count as hate speech or free speech, which images are erotic and which are simply artistic, and which live streams are too violent to be broadcast. (The Verge reported that some of these moderators, working through a vendor in Arizona, were paid $28,800 a year, got limited breaks and faced significant mental health risks.)
  • As if Facebook’s opaque algorithms weren’t enough, last year we learned that Facebook executives had permanently deleted their own messages from the platform, erasing them from the inboxes of recipients; the justification was corporate security concerns. When I look at my years of Facebook messages with Mark now, it’s just a long stream of my own light-blue comments, clearly written in response to words he had once sent me. (Facebook now offers this as a feature to all users.)
  • Mark used to insist that Facebook was just a “social utility,” a neutral platform for people to communicate what they wished. Now he recognizes that Facebook is both a platform and a publisher and that it is inevitably making decisions about values. The company’s own lawyers have argued in court that Facebook is a publisher and thus entitled to First Amendment protection.
  • Mark knows that this is too much power and is pursuing a twofold strategy to mitigate it. First, he is pivoting Facebook’s focus toward encouraging more private, encrypted messaging that Facebook’s employees can’t see, let alone control. Second, he is hoping for friendly oversight from regulators and other industry executives.
  • In an op-ed essay in The Washington Post in March, he wrote, “Lawmakers often tell me we have too much power over speech, and I agree.” And he went even further than before, calling for more government regulation — not just on speech, but also on privacy and interoperability, the ability of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another.
  • Facebook isn’t afraid of a few more rules. It’s afraid of an antitrust case and of the kind of accountability that real government oversight would bring.
  • Mark may never have a boss, but he needs to have some check on his power. The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people. First, Facebook should be separated into multiple companies. The F.T.C., in conjunction with the Justice Department, should enforce antitrust laws by undoing the Instagram and WhatsApp acquisitions and banning future acquisitions for several years. The F.T.C. should have blocked these mergers, but it’s not too late to act. There is precedent for correcting bad decisions — as recently as 2009, Whole Foods settled antitrust complaints by selling off the Wild Oats brand and stores that it had bought a few years earlier.
  • Still others worry that the breakup of Facebook or other American tech companies could be a national security problem. Because advancements in artificial intelligence require immense amounts of data and computing power, only large companies like Facebook, Google and Amazon can afford these investments, they say. If American companies become smaller, the Chinese will outpace us. While serious, these concerns do not justify inaction. Even after a breakup, Facebook would be a hugely profitable business with billions to invest in new technologies — and a more competitive market would only encourage those investments. If the Chinese did pull ahead, our government could invest in research and development and pursue tactical trade policy, just as it is doing today to hold China’s 5G technology at bay.
  • The cost of breaking up Facebook would be next to zero for the government, and lots of people stand to gain economically. A ban on short-term acquisitions would ensure that competitors, and the investors who take a bet on them, would have the space to flourish. Digital advertisers would suddenly have multiple companies vying for their dollars.
  • But the biggest winners would be the American people. Imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising and another that would allow users to customize and tweak their feeds as they saw fit. No one knows exactly what Facebook’s competitors would offer to differentiate themselves. That’s exactly the point.
  • Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy. The Europeans have made headway on privacy with the General Data Protection Regulation, a law that guarantees users a minimal level of protection. A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time. The agency should also be charged with guaranteeing basic interoperability across platforms.
  • Finally, the agency should create guidelines for acceptable speech on social media. This idea may seem un-American — we would never stand for a government agency censoring speech. But we already have limits on yelling “fire” in a crowded theater, child pornography, speech intended to provoke violence and false statements to manipulate stock prices. We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.
  • These are difficult challenges. I worry that government regulators will not be able to keep up with the pace of digital innovation. I worry that more competition in social networking might lead to a conservative Facebook and a liberal one, or that newer social networks might be less secure if government regulation is weak. But sticking with the status quo would be worse: If we don’t have public servants shaping these policies, corporations will.
  • Similarly, the Justice Department’s 1970s suit accusing IBM of illegally maintaining its monopoly on personal computer sales ended in a stalemate. But along the way, IBM changed many of its behaviors. It stopped bundling its hardware and software, chose an extremely open design for the operating system in its personal computers and did not exercise undue control over its suppliers. Professor Wu has written that this “policeman at the elbow” led IBM to steer clear “of anything close to anticompetitive conduct, for fear of adding to the case against it.”
  • Finally, an aggressive case against Facebook would persuade other behemoths like Google and Amazon to think twice about stifling competition in their own sectors, out of fear that they could be next. If the government were to use this moment to resurrect an effective competition standard that takes a broader view of the full cost of “free” products, it could affect a whole host of industries.
  • I take responsibility for not sounding the alarm earlier. Don Graham, a former Facebook board member, has accused those who criticize the company now of having “all the courage of the last man leaping on the pile at a football game.” The financial rewards I reaped from working at Facebook radically changed the trajectory of my life, and even after I cashed out, I watched in awe as the company grew. It took the 2016 election fallout and Cambridge Analytica to awaken me to the dangers of Facebook’s monopoly. But anyone suggesting that Facebook is akin to a pinned football player misrepresents its resilience and power.
  • This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.
  •  
    "Since then, Mark's personal reputation and the reputation of Facebook have taken a nose-dive. The company's mistakes - the sloppy privacy practices that dropped tens of millions of users' data into a political consulting firm's lap; the slow response to Russian agents, violent rhetoric and fake news; and the unbounded drive to capture ever more of our time and attention - dominate the headlines. It's been 15 years since I co-founded Facebook at Harvard, and I haven't worked at the company in a decade. But I feel a sense of anger and responsibility."
Aurialie Jublin

Welcome to the Age of Privacy Nihilism - The Atlantic - 0 views

  • But more importantly, the velocity of acquisition and correlation of information has increased dramatically. Web browsers and smartphones contribute to that, in volume and value.
  • The process of correlation has become more sophisticated, too.
  • The centralization of information has also increased. With billions of users globally, organizations like Facebook and Google have a lot more data to offer—and from which to benefit. Enterprise services have also decentralized, and more data has moved to the Cloud—which often just means into the hands of big tech firms like Microsoft, Google, and Amazon. Externalizing that data creates data-privacy risk. But then again, so does storing it locally, where it is susceptible to breaches like the one Equifax experienced last year.
  • ...1 more annotation...
  • The real difference between the old and the new ages of data-intelligence-driven consumer marketing, and the invasion of privacy they entail, is that lots of people are finally aware that it is taking place.
  •  
    "Google and Facebook are easy scapegoats, but companies have been collecting, selling, and reusing your personal data for decades, and now that the public has finally noticed, it's too late. The personal-data privacy war is long over, and you lost."
Aurialie Jublin

Many US Facebook users have changed privacy settings or taken a break | Pew Research Ce... - 0 views

  • There are, however, age differences in the share of Facebook users who have recently taken some of these actions. Most notably, 44% of younger users (those ages 18 to 29) say they have deleted the Facebook app from their phone in the past year, nearly four times the share of users ages 65 and older (12%) who have done so. Similarly, older users are much less likely to say they have adjusted their Facebook privacy settings in the past 12 months: Only a third of Facebook users 65 and older have done this, compared with 64% of younger users. In earlier research, Pew Research Center has found that a larger share of younger than older adults use Facebook. Still, similar shares of older and younger users have taken a break from Facebook for a period of several weeks or more.
  • Roughly half of the users who have downloaded their personal data from Facebook (47%) have deleted the app from their cellphone, while 79% have elected to adjust their privacy settings.
  •  
    "Significant shares of Facebook users have taken steps in the past year to reframe their relationship with the social media platform. Just over half of Facebook users ages 18 and older (54%) say they have adjusted their privacy settings in the past 12 months, according to a new Pew Research Center survey. Around four-in-ten (42%) say they have taken a break from checking the platform for a period of several weeks or more, while around a quarter (26%) say they have deleted the Facebook app from their cellphone. All told, some 74% of Facebook users say they have taken at least one of these three actions in the past year."
Aurialie Jublin

Manifesto | Openbook social network - 0 views

  • 1. Honest. All of our code is open-source. This means it’s free for everyone to see, reproduce and contribute to. We’re transparent about how the social network works. Additionally, in partnership with FoundersPledge, we’ll be giving 30% of our revenue towards making the world a better place. For example: education, climate-change prevention, mental health and sanitation. It’s about time tech companies benefit everyone.
  • We want to build Openbook together. We will create a special group where we encourage you to help us decide what we build next or what we should improve. Let’s build the best social network ever!
  • 3. Privacy-friendly and secure. This is what drove us to build Openbook. The privacy and security of our users will always be at the core of everything we do. We don’t track anything you do, and we neither monetize your information nor share it without your explicit and informed consent.
  • ...2 more annotations...
  • Privacy by default. All optional data sharing will be disabled by default; it is up to you whether to give an application more of your data, rather than the other way around.
  • Our business model is not and will never be advertisements. We will have a transparent revenue model based on a generic way for people to securely transact physical and digital goods and services inside the network. This will be done through an atomic digital unit of value. Although this will initially take the form of a marketplace, our ambitions go way beyond that.
  •  
    "Openbook will have the great things you'd expect from a social network: chats, posts, groups and events to name a few. Plus, it will be honest, personal, privacy-friendly, secure and fun ;-)."
Aurialie Jublin

Privacy expert Ann Cavoukian resigns as adviser to Sidewalk Labs - The Logic - 0 views

  •  
    Ann Cavoukian, a world-leading privacy expert, has resigned as an adviser to Sidewalk Labs on its proposed Toronto smart city development. Cavoukian sent a letter advising the company of her resignation Friday. In the letter, she expressed concerns regarding Sidewalk Labs' recent digital governance proposals, specifically the possibility that not all personal data would be de-identified at the source, a concern she said she raised with Sidewalk Labs early last month. Sidewalk Labs told The Logic it is committed to de-identifying data, but that it can't control what third parties do.
Aurialie Jublin

The Privacy Project - The New York Times - 0 views

  •  
    "Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs - in anonymity, even autonomy - are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The Times is embarking on this monthslong project to explore the technology and where it's taking us, and to convene debate about how it can best help realize human potential."
Aurialie Jublin

Retour sur MyData2018 : quelle(s) approche(s) collective(s) des données perso... - 0 views

  • The entry into force of the GDPR was clearly the defining event of 2018, and many speakers referred to it. The principle of fairness is affirmed in it. Yet this principle has a collective dimension: fairness presupposes a group, unlike the other principles (transparency, lawfulness, purpose limitation, retention, integrity and confidentiality, data minimisation, accuracy). However, as Jussi Leppälä, (Global) Privacy Officer at Valmet, pointed out, the text gives no definition of fairness in the context of personal data management. In the end, the speakers agreed that the GDPR is a text focused on individual rather than collective needs. It protects the individual, but carries no genuinely collective vision of personal data.
  • On this question of fairness, the openSCHUFA example given by Walter Palmetshofer (Open Knowledge Germany) is inspiring: a data-collection campaign gathered contributions from thousands of individuals in order to understand the credit-scoring algorithm of SCHUFA, Germany's private credit bureau. It enabled individuals to ask, with evidence in hand, for decisions made by the algorithm to be corrected. More generally, algorithmic bias is a major societal issue, above all for the most vulnerable groups, whose personal data are more exposed and who suffer more from algorithmic bias (on this subject, see Internet Actu).
  • Other speakers insisted on the need to help companies take more account of their social responsibility. Since the currently dominant governance model is the hegemony of economic actors (GAFA, BATX), said Bruno Carballa Smichowski of Chronos, the rebalancing of power must come from states. States hold personal data themselves, but they can also demand accountability and push the actors who use data to be more open and active: literacy, infrastructure, open innovation, building trust and rolling back fear, and involving the people concerned (Hetan Shah and Jeni Tennison) are all actions that public authorities can put in place.
  • ...9 more annotations...
  • Since the launch of MyData, the need to address the collective stakes of personal data has appeared stronger every year. In parallel, the digital commons increasingly look like a desirable alternative model to the abusive behaviour of the dominant players. By nature, commons allow transparent management in the service of the community, because they are managed by their members.
  • Although his remark gave rise to the "OurData" track in 2017 and 2018, the term "commons" was nevertheless almost absent from the OurData track's discussions, which were mainly oriented toward public actors and regulation. The openSCHUFA example nonetheless belongs to the commons movement and provides a concrete illustration.
  • The idea behind a data cooperative that would fit into the MyData/Self Data model would rather be to create a commons: an association of individuals developing the tools and services (chat, search engine, etc.) that allow them to manage their data from A to Z. Several data cooperatives of this type exist: diglife.coop, schluss, open.coop, and others.
  • Laura James (doteveryone) proposed this session to discuss with participants the feasibility of creating one or more "mass" cooperatives, owned by their members and managed by them. The objective would be to offer people better services than the tech giants and Silicon Valley-style companies. Laura James notes that if the problem with the digital giants is their business model (surveillance capitalism) and their ownership model (wealth extraction for the benefit of a few), the "data coop" must make it possible to offer technology we can trust: privacy-preserving, accessible, reliable, fair, based on existing open source, with better support and genuine sustainability.
  • Is the limited success of the Digital Life Collective due to a lack of consumer interest in personal data issues? Or are the stakes not yet well understood? The cooperative founders present at the session discussed several possible answers. First, there is no absence of interest in privacy questions, but rather perceptions and practices that differ from one person to the next (among "millennials", for example). Next, do consumers really want to bear the responsibility that comes with regaining control over their data? Nothing is less certain: like today's free services, it has to be simple. But control necessarily implies responsibility. Consumers also need practical services, so the user experience has to be worked on. Finally, data literacy is needed to create genuine interest and to dispel the fear and misunderstandings around this subject.
  • How do you achieve genuinely shared governance while keeping the organisation large enough? Barely 10 people are really active within the Digital Life Collective. Schluss is looking for ways to get its members to participate more. This is a recurring problem for cooperatives, and for any organisation whose management relies on all of its members. Still, one participant pointed out that even if only 1% get involved in decision-making, everyone receives the cooperative's benefits! It is not the perfectly shared, ideal form of governance, but it works all the same. Before giving up on the participatory model, a few governance models could be tried out to facilitate participatory decision-making within a cooperative: citizens' juries, Sociocracy 3.0 (used by some telecom companies in Finland), and so on.
  • In the sessions of the "OurData" track we had the pleasure of hearing, almost every time, that ownership applied to personal data makes no sense. Although this track, more than any other, is predisposed to such a conclusion, the MyData community's position on the subject has become clearer over the past few years, and fewer and fewer people advocate this model of individual ownership and resale of one's data.
  • From this follows a collective model based not on individual property titles but on rights of use. The GDPR creates a few of these, but other questions remain open, such as the right to collective memory, especially for the most disadvantaged groups, or fairness, which stands opposed to regulation by market forces alone.
  • Most speakers take the view that it is the public actor that must act: by creating new rights attached to personal data; by helping private actors provide more ethical and transparent solutions; by committing to data culture and literacy for everyone; by establishing in law that personal data are the result of a collective process, belong to the society that co-generated them and cannot be subject to ownership (in France the CNIL is very clear on this last point; we need an equally clear position at the European level!); by promoting their social, rather than commercial, value; and finally by ensuring that the fruits of this work serve collective challenges such as health, education, culture and environmental protection.
  •  
    "LE CONSTAT : LA GESTION ACTUELLE DES DONNÉES PERSONNELLES S'INTÉRESSE PEU AU COLLECTIF MyData s'intéresse à toutes les dimensions que recouvre le contrôle des données personnelles : si la privacy occupe souvent le devant de la scène, MyData explore également la transparence des organisations et des technologies, l'équité et la dimension collective des données personnelles."
Aurialie Jublin

An Apology for the Internet - From the People Who Built It - 1 views

  • There have always been outsiders who criticized the tech industry — even if their concerns have been drowned out by the oohs and aahs of consumers, investors, and journalists. But today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being “weaponized.” Even Sean Parker, Facebook’s first president, has blasted social media as a dangerous form of psychological manipulation. “God only knows what it’s doing to our children’s brains,” he lamented recently.
  • To keep the internet free — while becoming richer, faster, than anyone in history — the technological elite needed something to attract billions of users to the ads they were selling. And that something, it turns out, was outrage. As Jaron Lanier, a pioneer in virtual reality, points out, anger is the emotion most effective at driving “engagement” — which also makes it, in a market for attention, the most profitable one. By creating a self-perpetuating loop of shock and recrimination, social media further polarized what had already seemed, during the Obama years, an impossibly and irredeemably polarized country.
  • The Architects (In order of appearance.) Jaron Lanier, virtual-reality pioneer. Founded first company to sell VR goggles; worked at Atari and Microsoft. Antonio García Martínez, ad-tech entrepreneur. Helped create Facebook’s ad machine. Ellen Pao, former CEO of Reddit. Filed major gender-discrimination lawsuit against VC firm Kleiner Perkins. Can Duruk, programmer and tech writer. Served as project lead at Uber. Kate Losse, Facebook employee No. 51. Served as Mark Zuckerberg’s speechwriter. Tristan Harris, product designer. Wrote internal Google presentation about addictive and unethical design. Rich “Lowtax” Kyanka, entrepreneur who founded influential message board Something Awful. Ethan Zuckerman, MIT media scholar. Invented the pop-up ad. Dan McComas, former product chief at Reddit. Founded community-based platform Imzy. Sandy Parakilas, product manager at Uber. Ran privacy compliance for Facebook apps. Guillaume Chaslot, AI researcher. Helped develop YouTube’s algorithmic recommendation system. Roger McNamee, VC investor. Introduced Mark Zuckerberg to Sheryl Sandberg. Richard Stallman, MIT programmer. Created legendary software GNU and Emacs.
  • ...45 more annotations...
  • How It Went Wrong, in 15 Steps. Step 1 Start With Hippie Good Intentions …
  • I think two things are at the root of the present crisis. One was the idealistic view of the internet — the idea that this is the great place to share information and connect with like-minded people. The second part was the people who started these companies were very homogeneous. You had one set of experiences, one set of views, that drove all of the platforms on the internet. So the combination of this belief that the internet was a bright, positive place and the very similar people who all shared that view ended up creating platforms that were designed and oriented around free speech.
  • Step 2 … Then mix in capitalism on steroids. To transform the world, you first need to take it over. The planetary scale and power envisioned by Silicon Valley’s early hippies turned out to be as well suited for making money as they were for saving the world.
  • Step 3 The arrival of Wall Streeters didn’t help … Just as Facebook became the first overnight social-media success, the stock market crashed, sending money-minded investors westward toward the tech industry. Before long, a handful of companies had created a virtual monopoly on digital life.
  • Ethan Zuckerman: Over the last decade, the social-media platforms have been working to make the web almost irrelevant. Facebook would, in many ways, prefer that we didn’t have the internet. They’d prefer that we had Facebook.
  • Step 4 … And we paid a high price for keeping it free. To avoid charging for the internet — while becoming fabulously rich at the same time — Silicon Valley turned to digital advertising. But to sell ads that target individual users, you need to grow a big audience — and use advancing technology to gather reams of personal data that will enable you to reach them efficiently.
  • Harris: If you’re YouTube, you want people to register as many accounts as possible, uploading as many videos as possible, driving as many views to those videos as possible, so you can generate lots of activity that you can sell to advertisers. So whether or not the users are real human beings or Russian bots, whether or not the videos are real or conspiracy theories or disturbing content aimed at kids, you don’t really care. You’re just trying to drive engagement to the stuff and maximize all that activity. So everything stems from this engagement-based business model that incentivizes the most mindless things that harm the fabric of society.
  • Step 5 Everything was designed to be really, really addictive. The social-media giants became “attention merchants,” bent on hooking users no matter the consequences. “Engagement” was the euphemism for the metric, but in practice it evolved into an unprecedented machine for behavior modification.
  • Harris: That blue Facebook icon on your home screen is really good at creating unconscious habits that people have a hard time extinguishing. People don’t see the way that their minds are being manipulated by addiction. Facebook has become the largest civilization-scale mind-control machine that the world has ever seen.
  • Step 6 At first, it worked — almost too well. None of the companies hid their plans or lied about how their money was made. But as users became deeply enmeshed in the increasingly addictive web of surveillance, the leading digital platforms became wildly popular.
  • Pao: There’s this idea that, “Yes, they can use this information to manipulate other people, but I’m not gonna fall for that, so I’m protected from being manipulated.” Slowly, over time, you become addicted to the interactions, so it’s hard to opt out. And they just keep taking more and more of your time and pushing more and more fake news. It becomes easy just to go about your life and assume that things are being taken care of.
  • McNamee: If you go back to the early days of propaganda theory, Edward Bernays had a hypothesis that to implant an idea and make it universally acceptable, you needed to have the same message appearing in every medium all the time for a really long period of time. The notion was it could only be done by a government. Then Facebook came along, and it had this ability to personalize for every single user. Instead of being a broadcast model, it was now 2.2 billion individualized channels. It was the most effective product ever created to revolve around human emotions.
  • Step 7 No one from Silicon Valley was held accountable … No one in the government — or, for that matter, in the tech industry’s user base — seemed interested in bringing such a wealthy, dynamic sector to heel.
  • Step 8 … Even as social networks became dangerous and toxic. With companies scaling at unprecedented rates, user security took a backseat to growth and engagement. Resources went to selling ads, not protecting users from abuse.
  • Lanier: Every time there’s some movement like Black Lives Matter or #MeToo, you have this initial period where people feel like they’re on this magic-carpet ride. Social media is letting them reach people and organize faster than ever before. They’re thinking, Wow, Facebook and Twitter are these wonderful tools of democracy. But it turns out that the same data that creates a positive, constructive process like the Arab Spring can be used to irritate other groups. So every time you have a Black Lives Matter, social media responds by empowering neo-Nazis and racists in a way that hasn’t been seen in generations. The original good intention winds up empowering its opposite.
  • Chaslot: As an engineer at Google, I would see something weird and propose a solution to management. But just noticing the problem was hurting the business model. So they would say, “Okay, but is it really a problem?” They trust the structure. For instance, I saw this conspiracy theory that was spreading. It’s really large — I think the algorithm may have gone crazy. But I was told, “Don’t worry — we have the best people working on it. It should be fine.” Then they conclude that people are just stupid. They don’t want to believe that the problem might be due to the algorithm.
  • Parakilas: One time a developer who had access to Facebook’s data was accused of creating profiles of people without their consent, including children. But when we heard about it, we had no way of proving whether it had actually happened, because we had no visibility into the data once it left Facebook’s servers. So Facebook had policies against things like this, but it gave us no ability to see what developers were actually doing.
  • McComas: Ultimately the problem Reddit has is the same as Twitter: By focusing on growth and growth only, and ignoring the problems, they amassed a large set of cultural norms on their platforms that stem from harassment or abuse or bad behavior. They have worked themselves into a position where they’re completely defensive and they can just never catch up on the problem. I don’t see any way it’s going to improve. The best they can do is figure out how to hide the bad behavior from the average user.
  • Step 9 … And even as they invaded our privacy. The more features Facebook and other platforms added, the more data users willingly, if unwittingly, released to them and the data brokers who power digital advertising.
  • Richard Stallman: What is data privacy? That means that if a company collects data about you, it should somehow protect that data. But I don’t think that’s the issue. The problem is that these companies are collecting data about you, period. We shouldn’t let them do that. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical extreme likelihood, which is enough to make collection a problem.
  • Losse: I’m not surprised at what’s going on now with Cambridge Analytica and the scandal over the election. For a long time, the accepted idea at Facebook was: Giving developers as much data as possible to make these products is good. But to think that, you also have to not think about the data implications for users. That’s just not your priority.
  • Step 10 Then came 2016. The election of Donald Trump and the triumph of Brexit, two campaigns powered in large part by social media, demonstrated to tech insiders that connecting the world — at least via an advertising-surveillance scheme — doesn’t necessarily lead to that hippie utopia.
  • Chaslot: I realized personally that things were going wrong in 2011, when I was working at Google. I was working on this YouTube recommendation algorithm, and I realized that the algorithm was always giving you the same type of content. For instance, if I give you a video of a cat and you watch it, the algorithm thinks, Oh, he must really like cats. That creates these filter bubbles where people just see one type of information. But when I notified my managers at Google and proposed a solution that would give a user more control so he could get out of the filter bubble, they realized that this type of algorithm would not be very beneficial for watch time. They didn’t want to push that, because the entire business model is based on watch time.
  • Step 11 Employees are starting to revolt. Tech-industry executives aren’t likely to bite the hand that feeds them. But maybe their employees — the ones who signed up for the mission as much as the money — can rise up and make a change.
  • Harris: There’s a massive demoralizing wave that is hitting Silicon Valley. It’s getting very hard for companies to attract and retain the best engineers and talent when they realize that the automated system they’ve built is causing havoc everywhere around the world. So if Facebook loses a big chunk of its workforce because people don’t want to be part of that perverse system anymore, that is a very powerful and very immediate lever to force them to change.
  • Duruk: I was at Uber when all the madness was happening there, and it did affect recruiting and hiring. I don’t think these companies are going to go down because they can’t attract the right talent. But there’s going to be a measurable impact. It has become less of a moral positive now — you go to Facebook to write some code and then you go home. They’re becoming just another company.
  • Step 12 To fix it, we’ll need a new business model … If the problem is in the way the Valley makes money, it’s going to have to make money a different way. Maybe by trying something radical and new — like charging users for goods and services.
  • Parakilas: They’re going to have to change their business model quite dramatically. They say they want to make time well spent the focus of their product, but they have no incentive to do that, nor have they created a metric by which they would measure that. But if Facebook charged a subscription instead of relying on advertising, then people would use it less and Facebook would still make money. It would be equally profitable and more beneficial to society. In fact, if you charged users a few dollars a month, you would equal the revenue Facebook gets from advertising. It’s not inconceivable that a large percentage of their user base would be willing to pay a few dollars a month.
  • Step 13 … And some tough regulation. While we’re at it, where has the government been in all this?
  • Stallman: We need a law. Fuck them — there’s no reason we should let them exist if the price is knowing everything about us. Let them disappear. They’re not important — our human rights are important. No company is so important that its existence justifies setting up a police state. And a police state is what we’re heading toward.
  • Duruk: The biggest existential problem for them would be regulation. Because it’s clear that nothing else will stop these companies from using their size and their technology to just keep growing. Without regulation, we’ll basically just be complaining constantly, and not much will change.
  • McNamee: Three things. First, there needs to be a law against bots and trolls impersonating other people. I’m not saying no bots. I’m just saying bots have to be really clearly marked. Second, there have to be strict age limits to protect children. And third, there has to be genuine liability for platforms when their algorithms fail. If Google can’t block the obviously phony story that the kids in Parkland were actors, they need to be held accountable.
  • Stallman: We need a law that requires every system to be designed in a way that achieves its basic goal with the least possible collection of data. Let’s say you want to ride in a car and pay for the ride. That doesn’t fundamentally require knowing who you are. So services which do that must be required by law to give you the option of paying cash, or using some other anonymous-payment system, without being identified. They should also have ways you can call for a ride without identifying yourself, without having to use a cell phone. Companies that won’t go along with this — well, they’re welcome to go out of business. Good riddance.
  • Step 14 Maybe nothing will change. The scariest possibility is that nothing can be done — that the behemoths of the new internet are too rich, too powerful, and too addictive for anyone to fix.
  • García: Look, I mean, advertising sucks, sure. But as the ad tech guys say, “We’re the people who pay for the internet.” It’s hard to imagine a different business model other than advertising for any consumer internet app that depends on network effects.
  • Step 15 … Unless, at the very least, some new people are in charge. If Silicon Valley’s problems are a result of bad decision-making, it might be time to look for better decision-makers. One place to start would be outside the homogeneous group currently in power.
  • Pao: I’ve urged Facebook to bring in people who are not part of a homogeneous majority to their executive team, to every product team, to every strategy discussion. The people who are there now clearly don’t understand the impact of their platforms and the nature of the problem. You need people who are living the problem to clarify the extent of it and help solve it.
  • Things That Ruined the Internet
  • Cookies (1994) The original surveillance tool of the internet. Developed by programmer Lou Montulli to eliminate the need for repeated log-ins, cookies also enabled third parties like Google to track users across the web. The risk of abuse was low, Montulli thought, because only a “large, publicly visible company” would have the capacity to make use of such data. The result: digital ads that follow you wherever you go online.
  • The Farmville vulnerability (2007) When Facebook opened up its social network to third-party developers, enabling them to build apps that users could share with their friends, it inadvertently opened the door a bit too wide. By tapping into user accounts, developers could download a wealth of personal data — which is exactly what a political-consulting firm called Cambridge Analytica did to 87 million Americans.
  • Algorithmic sorting (2006) It’s how the internet serves up what it thinks you want — automated calculations based on dozens of hidden metrics. Facebook’s News Feed uses it every time you hit refresh, and so does YouTube. It’s highly addictive — and it keeps users walled off in their own personalized loops. “When social media is designed primarily for engagement,” tweets Guillaume Chaslot, the engineer who designed YouTube’s algorithm, “it is not surprising that it hurts democracy and free speech.”
  • The “like” button (2009) Initially known as the “awesome” button, the icon was designed to unleash a wave of positivity online. But its addictive properties became so troubling that one of its creators, Leah Pearlman, has since renounced it. “Do you know that episode of Black Mirror where everyone is obsessed with likes?” she told Vice last year. “I suddenly felt terrified of becoming those people — as well as thinking I’d created that environment for everyone else.”
  • Pull-to-refresh (2009) Developed by software developer Loren Brichter for an iPhone app, the simple gesture — scrolling downward at the top of a feed to fetch more data — has become an endless, involuntary tic. “Pull-to-refresh is addictive,” Brichter told The Guardian last year. “I regret the downsides.”
  • Pop-up ads (1996) While working at an early blogging platform, Ethan Zuckerman came up with the now-ubiquitous tool for separating ads from content that advertisers might find objectionable. “I really did not mean to break the internet,” he told the podcast Reply All. “I really did not mean to bring this horrible thing into people’s lives. I really am extremely sorry about this.”
  • The Silicon Valley dream was born of the counterculture. A generation of computer programmers and designers flocked to the Bay Area’s tech scene in the 1970s and ’80s, embracing new technology as a tool to transform the world for good.
  •  
    The internet in 15 steps, from its construction to today: the views and regrets of those who built it... [...] "Things That Ruined the Internet": cookies 1994 / the Farmville vulnerability 2007 / algorithmic sorting 2006 / the "like" button 2009 / pull-to-refresh 2009 / pop-up ads 1996 [...]
Aurialie Jublin

14 years of Mark Zuckerberg saying sorry, not sorry about Facebook - Washington Post - 0 views

  •  
    "From the moment the Facebook founder entered the public eye in 2003 for creating a Harvard student hot-or-not rating site, he's been apologizing. So we collected this abbreviated history of his public mea culpas. It reads like a record on repeat. Zuckerberg, who made "move fast and break things" his slogan, says sorry for being naive, and then promises solutions such as privacy "controls," "transparency" and better policy "enforcement." And then he promises it again the next time. You can track his sorries in orange and promises in blue in the timeline below. All the while, Facebook's access to our personal data increases and little changes about the way Zuckerberg handles it. So as Zuckerberg prepares to apologize for the first time in front of Congress, the question that lingers is: What will be different this time?"
Aurialie Jublin

Have We Already Lost the Individual Privacy Battle? - Post - No Jitter - 0 views

  • Individual cookie management and installing special anti-tracking software have never been common practices, so the majority of users continue to contribute to the collection of personalized data, even if it makes most of us uneasy. We don't trust most Internet sites, but we use them anyway. Pew Research Center wrote about the fate of online trust and summarized that many experts doubt the possibility of progress, saying "people are inured to risk, addicted to convenience, and will not be offered alternatives to online interactions."
Aurialie Jublin

À ceux qui ne voient aucun problème à travailler avec Facebook ou Google - No... - 0 views

  • The Free Software Foundation (FSF) is the main organisation defending free software. The Software Freedom Conservancy "is a not-for-profit organization that promotes, improves, develops and defends free and open source (FLOSS) projects". This month, the Software Freedom Conservancy is organising the first international Copyleft conference, sponsored by Google, Microsoft and the FSF. Indeed, Google is such a force for good in the world that it is allowed to sponsor a Copyleft conference even though those very licences are banned inside the company. If even the FSF has no problem putting its logo right next to Google's and Microsoft's, who am I to criticise these companies?
  • Mozilla has no problem with Google; it often partners with them and even uses Google Analytics. If a foundation this honest and ethical, one that cares so much about protecting our privacy, has no problem having Google as its main search engine or receiving millions of dollars from them, who am I to criticise Google on privacy?
  • Apple's CEO, Tim Cook, has personally endorsed this commitment to privacy, which is why Apple made Google the default search engine in its browser, and why they don't turn their noses up at the 12 billion dollars in revenue that deal brings them either. Because Google is just like Apple and builds its products to protect our privacy. Why else would Apple allow Google onto its smartphones and endanger our privacy? If Tim Cook is happy to have Google in his iPhone, then there must surely be something wrong with me.
  • ...6 more annotations...
  • If Apple is too commercial an example for you, there is GNOME, a project carried by the not-for-profit GNOME Foundation. They develop an ergonomic and popular graphical environment for Unix/Linux systems. GNOME sees no problem with Google. In fact, Google sits on its advisory board, and GNOME applications offer first-class support for Google applications.
  • If Gmail were bad for privacy, if, say, Google read all your messages and used them to build a marketing profile (I know, there I go again with my silly conspiracy theories), then the GNOME Foundation would certainly not promote it in its software. If they were obliged to support Gmail just because it is a popular service but hated doing so, they would display a warning message to protect you. Something like: "When you use Gmail, Google Inc. uses the content of your messages to profile you. Continue only if you understand the dangers." But they don't. On the contrary, they put it first and make setting it up as simple as possible, so using Gmail must be fine.
  • When I added Fastmail support to Geary, my changes were rejected. If Fastmail were an ethical email provider, I am sure that would not have happened. I have no doubt the team would have promoted an ethical email service over an unethical one that reads people's messages and profiles them. Now I worry, and I wonder what the GNOME people know about Fastmail that I don't. What are the sneaky folks at Fastmail cooking up for us?
  • The Nordic Privacy Arena is an annual event bringing together data protection officers and privacy professionals. At this year's edition, Facebook gave a presentation, and the organisers asked me to be nice to Facebook and Google, to keep my remarks for my own talk, and not to embarrass the speaker with questions after his presentation, as I had done during Mozilla's session.
  • Incidentally, the Facebook presentation was given by Nicolas de Bouville, who previously worked at the CNIL, an organisation known for its fabulous revolving doors. So if Nicolas chose to work for Facebook after his time at the CNIL, Facebook can't be that bad.
  • In light of this massive support for surveillance capitalism from respectable organisations that say they work to protect our human rights, our privacy and democracy, I have come to the conclusion that I must be the only one who is wrong. If Google, Facebook and the others were even half as harmful as I make them out to be, these organisations would not strike deals with them, nor would they support them.
  •  
    "De nombreuses organisations de défense des libertés numériques, essentiellement anglo-saxonnes (Access, FSF, Mozilla, GNOME, etc.), sont financées par Google ou Facebook. Ces deux sociétés vivent pourtant de l'exploitation des données personnelles de leurs utilisateurs, au mépris de leurs vies privées et de leurs libertés. Ce ne sont des acteurs sains ni pour Internet ni pour la démocratie. Aral Balkan, activiste et développeur, pointe dans son billet au titre ironique les contradictions et l'hypocrisie de ces organisations. Ce billet traduit en grande partie ce que nous pensons chez Nothing2Hide (quitte à frôler l'asphyxie financière). Nous en publions une traduction française ici."
Aurialie Jublin

The pregnancy-tracking app Ovia lets women record their most sensitive data for themsel... - 0 views

  • But someone else was regularly checking in, too: her employer, which paid to gain access to the intimate details of its workers’ personal lives, from their trying-to-conceive months to early motherhood. Diller’s bosses could look up aggregate data on how many workers using Ovia’s fertility, pregnancy and parenting apps had faced high-risk pregnancies or gave birth prematurely; the top medical questions they had researched; and how soon the new moms planned to return to work.
  • “Maybe I’m naive, but I thought of it as positive reinforcement: They’re trying to help me take care of myself,” said Diller, 39, an event planner in Los Angeles for the video game company Activision Blizzard. The decision to track her pregnancy had been made easier by the $1 a day in gift cards the company paid her to use the app: That’s “diaper and formula money,” she said.
  • But Ovia also has become a powerful monitoring tool for employers and health insurers, which under the banner of corporate wellness have aggressively pushed to gather more data about their workers’ lives than ever before.
  • ...13 more annotations...
  • Employers who pay the apps’ developer, Ovia Health, can offer their workers a special version of the apps that relays their health data — in a “de-identified,” aggregated form — to an internal employer website accessible by human resources personnel. The companies offer it alongside other health benefits and incentivize workers to input as much about their bodies as they can, saying the data can help the companies minimize health-care spending, discover medical problems and better plan for the months ahead.
  • By giving counseling and feedback on mothers’ progress, executives said, Ovia has helped women conceive after months of infertility and even saved the lives of women who wouldn’t otherwise have realized they were at risk.
  • But health and privacy advocates say this new generation of “menstrual surveillance” tools is pushing the limits of what women will share about one of the most sensitive moments of their lives. The apps, they say, are designed largely to benefit not the women but their employers and insurers, who gain a sweeping new benchmark on which to assess their workers as they consider the next steps for their families and careers.
  • Experts worry that companies could use the data to bump up the cost or scale back the coverage of health-care benefits, or that women's intimate information could be exposed through data breaches or security lapses. And though the data is made anonymous, experts also fear that the companies could identify women based on information relayed in confidence, particularly in workplaces where few women are pregnant at any given time (a toy sketch after this entry shows how small-cohort aggregates can re-identify).
  • The rise of pregnancy-tracking apps shows how some companies increasingly view the human body as a technological gold mine, rich with a vast range of health data their algorithms can track and analyze. Women’s bodies have been portrayed as especially lucrative: The consulting firm Frost & Sullivan said the “femtech” market — including tracking apps for women’s menstruation, nutrition and sexual wellness — could be worth as much as $50 billion by 2025.
  • Companies pay for Ovia’s “family benefits solution” package on a per-employee basis, but Ovia also makes money off targeted in-app advertising, including from sellers of fertility-support supplements, life insurance, cord-blood banking and cleaning products.
  • In 2014, when the company rolled out incentives for workers who tracked their physical activity with a Fitbit, some employees voiced concerns over what they called a privacy-infringing overreach. But as the company offered more health tracking — including for mental health, sleep, diet, autism and cancer care — Ezzard said workers grew more comfortable with the trade-off and enticed by the financial benefits.
  • But a key element of Ovia’s sales pitch is how companies can cut back on medical costs and help usher women back to work. Pregnant women who track themselves, the company says, will live healthier, feel more in control and be less likely to give birth prematurely or via a C-section, both of which cost more in medical bills — for the family and the employer.
  • Women wanting to get pregnant are told they can rely on Ovia’s “fertility algorithms,” which analyze their menstrual data and suggest good times to try to conceive, potentially saving money on infertility treatments. “An average of 33 hours of productivity are lost for every round of treatment,” an Ovia marketing document says.
  • Ovia, in essence, promises companies a tantalizing offer: lower costs and fewer surprises. Wallace gave one example in which a woman had twins prematurely, received unneeded treatments and spent three months in intensive care. “It was a million-dollar birth … so the company comes to us: How can you help us with this?” he said.
  • “The fact that women’s pregnancies are being tracked that closely by employers is very disturbing,” said Deborah C. Peel, a psychiatrist and founder of the Texas nonprofit Patient Privacy Rights. “There’s so much discrimination against mothers and families in the workplace, and they can’t trust their employer to have their best interests at heart.” Federal law forbids companies from discriminating against pregnant women and mandates that pregnancy-related health-care expenses be covered in the same way as other medical conditions. Ovia said the data helps employers provide “better benefits, health coverage and support.”
  • Companies can also see which articles are most read in Ovia’s apps, offering them a potential road map to their workers’ personal questions or anxieties. The how-to guides touch on virtually every aspect of a woman’s changing body, mood, financial needs and lifestyle in hyper-intimate detail, including filing for disability, treating bodily aches and discharges, and suggestions for sex positions during pregnancy.
  • The coming years, however, will probably see companies pushing for more pregnancy data to come straight from the source. The Israeli start-up Nuvo advertises a sensor band strapped around a woman’s belly that can send real-time data on fetal heartbeat and uterine activity “across the home, the workplace, the doctor’s office and the hospital.” Nuvo executives said its “remote pregnancy monitoring platform” is undergoing U.S. Food and Drug Administration review.
  •  
    "As apps to help moms monitor their health proliferate, employers and insurers pay to keep tabs on the vast and valuable data"
Aurialie Jublin
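The re-identification fear flagged in the notes above is easy to make concrete. Below is a minimal sketch, not Ovia's actual data format or any real employer's records: it assumes a hypothetical monthly aggregate report and an ordinary HR roster, and shows that when only one employee matches the aggregate count, the "de-identified" statistic points to a single person. All names and fields are invented.

```python
# Toy illustration of small-cohort re-identification.
# The report format, roster fields, and names are all hypothetical.

# What an aggregate wellness dashboard might show HR for one month:
aggregate_report = {"employees_pregnant": 1, "high_risk_pregnancies": 1}

# What HR already knows from routine records (benefits, leave requests):
roster = [
    {"name": "Alice", "requested_parental_leave": True},
    {"name": "Bob",   "requested_parental_leave": False},
    {"name": "Carol", "requested_parental_leave": False},
]

# Cross-reference: if exactly one employee plausibly matches the count,
# the "anonymous" high-risk flag is anonymous in name only.
candidates = [p["name"] for p in roster if p["requested_parental_leave"]]
if len(candidates) == aggregate_report["employees_pregnant"] == 1:
    print(f"Aggregate re-identifies a single employee: {candidates[0]}")
```

This is why the experts quoted above worry most about workplaces where few women are pregnant at any given time: the smaller the cohort, the less the aggregation hides.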

The Landlord Wants Facial Recognition in Its Rent-Stabilized Buildings. Why? - The New ... - 0 views

  • The fact that the Atlantic complex already has 24-hour security in its lobbies as well as a clearly functioning camera system has only caused tenants to further question the necessity of facial recognition technology. The initiative is particularly dubious given the population of the buildings. Last year, a study out of M.I.T. and Stanford looked at the accuracy rates of some of the major facial-analysis programs on the market. It found that although the error rates for determining the gender of light-skinned men never surpassed 1 percent, the same programs failed to identify the gender of darker-skinned women up to one-third of the time.
  • The fear that marginalized groups will fall under increased surveillance as these technologies progress in the absence of laws to regulate them hardly seems like dystopian hysteria.
  • In November, the City of Detroit announced that it was introducing the use of real-time police cameras at two public-housing towers. The existing program is known as Project Greenlight, and it was designed to deter criminal behavior. But tower residents worried that relatives would be less likely to visit, given the constant stream of data collected by law enforcement.
  •  
    "Last fall, tenants at the Atlantic Plaza Towers, a rent-stabilized apartment complex in Brooklyn, received an alarming letter in the mail. Their landlord was planning to do away with the key-fob system that allowed them entry into their buildings on the theory that lost fobs could wind up in the wrong hands and were now also relatively easy to duplicate. Instead, property managers planned to install facial recognition technology as a means of access. It would feature "an encrypted reference file" that is "only usable in conjunction with the proprietary algorithm software of the system," the letter explained, in a predictably failed effort to mitigate concerns about privacy. As it happened, not every tenant was aware of these particular Orwellian developments. New mailboxes in the buildings required new keys, and to obtain a new key you had to submit to being photographed; some residents had refused to do this and so were not getting their mail."
Aurialie Jublin

Livre blanc PrivacyTech (pdf) - Une nouvelle gouvernance pour les données du ... - 0 views

  •  
    A high-performing and ethical digital economy requires the free flow of personal data under the control of individuals. The GDPR (General Data Protection Regulation), which took effect on May 25, 2018, is a major step toward a new economy centered on the individual, thanks to the new right to portability (Article 20), which encourages the flow of data, and to a series of measures and principles intended to strengthen the protection of individuals, such as specific and informed consent and privacy by design. The GDPR is part of the European Union's Digital Single Market strategy and aims to create the conditions for a barrier-free economy that benefits individuals and businesses as much as society as a whole. Almost a year after the GDPR took effect, we see a promising landscape of organizations beginning to adapt to the new regulation, in Europe as well as in the rest of the world. But much remains to be done, particularly regarding the implementation of individual control over data and of portability. The task is eminently complex and requires international, multi-sector, multi-expertise coordination. To succeed, we clearly need an ambitious new approach, one that could start in Europe and extend internationally. In this context, we propose to open a constructive exchange among all actors in personal data (companies, administrations, academia, associations) willing to join forces within a new form of organization whose purpose would be to build, harmonize and propose technological standards, terminologies and best practices for the flow and protection of personal data, together with an appropriate governance. The major pro…
Aurialie Jublin

Une IA concertée et open source comme outil éducatif pour les citoyens et les... - 0 views

  • Another distinctive aspect of the project is to explore the concept of "privacy by using" and to make the project a tool for raising awareness of, and educating about, the ethical and privacy issues in services that incorporate AI.
  • To that end, we are working on content and formats (workshops and barcamp-style meetups) designed to be playful and school-friendly, in order to raise awareness from the youngest age of the issues of bias and of the attention economy, issues inherent to the "digital life" of our time that will only become more critical as AI develops (more and more children are exposed very early to potentially problematic services, especially with the boom in voice assistants).
  • Thus, by combining the "society in the loop" method with the "privacy by using" approach, the project builds a free and open knowledge base in a concerted way while helping to educate society about the stakes and good practices of an ethical and responsible AI serving the general interest.
  •  
    "Je travaille depuis quelques temps avec Matteo Mazzeri sur un projet de recherche appliquée en partenariat avec le CERN (cf. https://twitter.com/Genial_Project) qui propose une approche assez singulière de l'articulation entre intelligence artificielle (IA) et service public. Il s'agit de concevoir une "IA concertée" et open source qui permette de développer des services d'intérêt général / à vocation de service public. L'originalité de notre approche est d'appliquer le concept de "society in the loop" exposé par le MIT Medialab (concept très bien expliqué en français par Irénée Régnauld dans cet article) de manière à proposer une méthodologie, un travail humain sur les données et des modèles d'apprentissage qui permettent de refléter au plus près les besoins de "la société" et d'aboutir à une IA éthique et responsable."
Aurialie Jublin

Have you heard about Silicon Valley's unpaid research and development department? It's ... - 0 views

  • So what should we do instead? Let’s instead invest in many small and independent not-for-profit organisations and task them with building the ethical alternatives. Let’s get them to compete with each other while doing so. Let’s take what we know works from Silicon Valley (small organisations working iteratively, competing, and failing fast) and remove what is toxic: venture capital, exponential growth, and exits. Instead of startups, let’s build stayups in Europe. Instead of disposable businesses that either fail fast or become malignant tumours, let’s fund organisations that either fail fast or become sustainable providers of social good.
  • The EC must stop funding startups and invest in stayups instead. Invest €5M in ten stayups in each area where we want ethical alternatives. Unlike a startup, when stayups are successful, they don’t exit. They can’t get bought by Google or Facebook. They remain sustainable European not-for-profits working to deliver technology as a social good.
  • Furthermore, funding for a stayup must come with a strict specification of the character of the technology it will build. Goods built using public funds must be public goods. Free Software Foundation Europe is currently raising awareness along these lines with their “public money, public code” campaign. However, we must go beyond “open source” to stipulate that technology created by stayups must be not only public but also impossible to enclose. For software and hardware, this means using licenses that are copyleft. A copyleft license ensures that if you build on public technology, you must share alike. Share-alike licenses are essential so that our efforts do not become a euphemism for privatisation and to avoid a tragedy of the commons. Corporations with deep pockets must not be able to take what we create with public funds, invest their own millions on top, and not share back the value they’ve added.
  • ...1 more annotation...
  • We must also start to fund ethical, decentralised, free and open alternatives from the commons for the common good. We must ensure that these organisations have social missions ingrained in their very existence that cannot be circumvented. We must make sure that these organisations cannot be bought by surveillance capitalists. Today, we are funding startups and acting as an unofficial and unpaid research and development arm for Silicon Valley. We fund startups and, if they’re successful, they get bought by the Googles and Facebooks. If they’re unsuccessful, the EU taxpayer foots the bill. It’s time for the European Commission and the EU to stop being useful idiots for Silicon Valley, and for us to fund and support our own ethical technological infrastructure.
  •  
    "Who should you thank for Facebook's Libra? "One of the UK's leading privacy researchers" University College London The DECODE project And, if you're an EU citizen who pays their taxes, You. Surprised? Don't be. None of this was unforeseen Today, the EU acts like an unpaid research and development department for Silicon Valley. We fund startups, which, if they're successful, get sold to companies in Silicon Valley. If they fail, the European taxpayer foots the bill. This is madness."
Aurialie Jublin

The urgent case for a new ePrivacy law | European Data Protection Supervisor - 0 views

  •  
    A swarm of misinformation and misunderstanding surrounds the case for revising our rules on the confidentiality of electronic communications, otherwise known as ePrivacy. It's high time for some honest debunking.
Aurialie Jublin

Opinion | There May Soon Be Three Internets. America's Won't Necessarily Be the Best. -... - 0 views

  • The received wisdom was once that a unified, unbounded web promoted democracy through the free flow of information. Things don’t seem quite so simple anymore. China’s tight control of the internet within its borders continues to tamp down talk of democracy, and an increasingly sophisticated system of digital surveillance plays a major role in human rights abuses, such as the persecution of the Uighurs. We’ve also seen the dark side to connecting people to one another — as illustrated by how misinformation on social media played a significant role in the violence in Myanmar.
  • There’s a world of difference between the European Union’s General Data Protection Regulation, known commonly as G.D.P.R., and China’s technologically enforced censorship regime, often dubbed “the Great Firewall.” But all three spheres — Europe, America and China — are generating sets of rules, regulations and norms that are beginning to rub up against one another.
  • The information superhighway cracks apart more easily when so much of it depends on privately owned infrastructure. An error at Amazon Web Services created losses of service across the web in 2017; a storm disrupting a data center in Northern Virginia created similar failures in 2012. These were unintentional blackouts; the corporate custodians of the internet have it within their power to do far more. Of course, nobody wants to turn off the internet completely — that wouldn’t make anyone money. But when a single company with huge market share chooses to comply with a law — or more worryingly, a mere suggestion from the authorities — a large chunk of the internet ends up falling in line.
  • ...7 more annotations...
  • But eight years later, Google is working on a search engine for China known as Dragonfly. Its launch will be conditional on the approval of Chinese officials and will therefore comply with stringent censorship requirements. An internal memo written by one of the engineers on the project described surveillance capabilities built into the engine — namely by requiring users to log in and then tracking their browsing histories. This data will be accessible by an unnamed Chinese partner, presumably the government.
  • Google says all features are speculative and no decision has been made on whether to launch Dragonfly, but a leaked transcript of a meeting inside Google later acquired by The Intercept, a news site, contradicts that line. In the transcript, Google’s head of search, Ben Gomes, is quoted as saying that the company hoped to launch within six to nine months, although the unstable U.S.-China relationship makes it difficult to predict when or even whether the Chinese government will give the go-ahead.
  • Internet censorship and surveillance were once hallmarks of oppressive governments — with Egypt, Iran and China being prime examples. It’s since become clear that secretive digital surveillance isn’t just the domain of anti-democratic forces. The Snowden revelations in 2013 knocked the United States off its high horse, and may have pushed the technology industry into an increasingly agnostic outlook on human rights.
  • If the future of the internet is a tripartite cold war, Silicon Valley wants to be making money in all three of those worlds.
  • Yet even the best possible version of the disaggregated web has serious — though still uncertain — implications for a global future: What sorts of ideas and speech will become bounded by borders? What will an increasingly disconnected world do to the spread of innovation and to scientific progress? What will consumer protections around privacy and security look like as the internets diverge? And would the partitioning of the internet precipitate a slowing, or even a reversal, of globalization?
  • What these types of sky-is-falling articles keep getting wrong is the idea that the World Wide Web is the same as the Internet. It’s not. Websites and the browsers that access them are an application that uses the Internet for transport. The Internet transports far more than just web traffic, but the most crucial use for companies is probably VPN: companies connect to one another using site-to-site VPNs, and their employees can work from anywhere with remote-user VPN. Disconnect the EU from the US, and you’ve removed the cheapest way for companies to connect their networks together. These regulatory worlds will get along somehow. Perhaps someone will write a web app that recognizes where a user is from and applies the appropriate policy to their session (a hypothetical sketch of that idea follows this entry). Perhaps that web app will become wildly popular and be deployed on every website everywhere. I don’t know how it will work, but I do know the Internet will not become fragmented.
  • The internet was never meant to be a walled garden. Remember, America Online began as a walled garden until the World Wide Web came along and “tore down that wall.” So Europe can have its Europe Wide Web and China can have its China Wide Web, but we will always be the World Wide Web, truly open and free. The “one internet led by the United States” will remain the world’s go-to information superhighway, just as the greenback has remained the world’s reserve currency for decades.
  •  
    "In September, Eric Schmidt, the former Google chief executive and Alphabet chairman, said that in the next 10 to 15 years, the internet would most likely be split in two - one internet led by China and one internet led by the United States. Mr. Schmidt, speaking at a private event hosted by a venture capital firm, did not seem to seriously entertain the possibility that the internet would remain global. He's correct to rule out that possibility - if anything, the flaw in Mr. Schmidt's thinking is that he too quickly dismisses the European internet that is coalescing around the European Union's ever-heightening regulation of technology platforms. All signs point to a future with three internets."
Aurialie Jublin
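The commenter’s speculation above, that someone will write a web app that recognizes where a user is from and applies the appropriate policy to their session, is simple enough to sketch. The following is a hypothetical illustration, not any real site’s implementation: region detection is assumed to happen elsewhere (in practice via IP geolocation), and the policy fields are invented stand-ins for legal requirements.

```python
# Hypothetical per-region policy table for a web app serving a
# regulatorily fragmented internet. Regions and fields are invented.

POLICIES = {
    "EU": {"consent_banner": True,  "data_residency": "eu-west",  "ad_tracking": False},
    "US": {"consent_banner": False, "data_residency": "us-east",  "ad_tracking": True},
    "CN": {"consent_banner": False, "data_residency": "cn-north", "ad_tracking": True},
}

def policy_for(region_code: str) -> dict:
    """Return the rule set to apply to a session for a detected region.
    Unknown regions fall back to the strictest rules (here, the EU's),
    since over-complying is safer than under-complying."""
    return POLICIES.get(region_code.upper(), POLICIES["EU"])

# Example: an EU visitor's session gets GDPR-style handling.
assert policy_for("eu")["consent_banner"] is True
```

Whether one app per jurisdictional rulebook keeps the web whole, as the commenter hopes, is exactly what the article calls into question.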

La plupart des applis pour les réfugiés ne sont pas utilisées - Digital Socie... - 0 views

  • …with the spread of the mobile phone. Mobile established a different mode of existence, based on permanent reachability. New technologies offered immediate answers to some of migrants’ needs: staying close to family, making migration bearable. As these technologies developed, they invaded every aspect of migrants’ lives. They also brought new constraints: smart borders, tracking, privacy issues, as well as new social constraints such as the obligation to be present even at a distance, or to send money…
  • Everything accelerated in 2015, when the media turned the spotlight on the connected migrant. Activists and hackers everywhere started building apps for refugees. There was a wave of ideas, a sort of “technophoria”: real technical expertise on the subject, plenty of drive and initiative, but without having really studied the matter beforehand, without having actually looked at how the technology is used.
  • Suddenly every association had to get on board, the UN High Commissioner for Refugees jumped in as well, Techfugees was born, and the press strongly encouraged these initiatives. There was money, public and private, to develop prototypes, but not enough to carry these apps through to maturity: testing phases and the development of a business model.
  • ...6 more annotations...
  • With my colleagues Léa Macias and Samuel Huron, we inventoried about a hundred apps and platforms built over the past 8 years. They offer a wide variety of services: information services, matchmaking, job search, housing, translation, language learning, education and training, identity, health… We found that these apps live like comets: in June 2018, 27% of the apps previously identified had disappeared (13 of the 48 identified). By September 2018, 29% had disappeared (10 of the 35 identified).
  • We also tested these apps with refugees enrolled at the Simplon school, in the Refugeek program that trains them to become programmers in France. These are people at ease with code and with apps. On their phones, we found not a single app aimed at refugees. They use Facebook, WhatsApp and Google, like everyone else.
  • We also found that all the apps, even the simplest ones, asked for personal information. They all intend to base their business model on data. That is standard in the app world, but when you work with refugees it is delicate: these people are not refugees by accident, they are fleeing dangers, and such data must be handled with great care.
  • The apps that work, like CALM (Comme à la Maison, an app that connects refugees with individuals who can host them for varying lengths of time, ed.), work because they put into practice an idea matured over a long time; in CALM’s case, fostering integration through immersion.
  • CALM, developed by the citizen movement Singa, rests on the idea of integration through immersion. I also believe this is very effective. The codes of a society are acquired more easily when you are immersed in an environment, and you approach otherness differently. Above all, this kind of connection gives rise to feelings of recognition and satisfaction... It creates a very positive energy. Sometimes these relationships last a few weeks, sometimes a lifetime. When migrants recount successful migration trajectories, there is always a person who, at some point, pulled them into the society. Singa has tried to capture that moment and give it a digital form, and I think that is a good thing.
  • We then saw that hospitality cannot be delegated to matching techniques. Hospitality is not Tinder. We tried using matching to organize encounters, for example on the basis of shared interests, but it did not work. Or, when it did catch on, people moved over to Facebook, which gives a fuller picture of a person’s social environment... Still, CALM remains an important platform. It received a lot of media coverage and helped open minds and get people talking about hosting refugees at home. I think it played a part in the amendment adopted in October 2018 by the Assemblée Nationale, establishing a tax credit for people who host refugees in their homes.
  •  
    "Depuis le début de la crise migratoire en 2015, de nombreuses applications pour les réfugiés ont vu le jour. Si elles naissent de bonnes intentions, elles sont rarement utiles. C'est ce qu'explique la sociologue Dana Diminescu , enseignante-chercheuse à Télécom Paris-Tech, qui étudie depuis longtemps les usages des TIC chez les migrants (elle avait d'ailleurs coordonné le dossier consacré aux diasporas connectées sur notre site )."
Aurialie Jublin

St. Louis Uber and Lyft Driver Secretly Live-Streamed Passengers, Report Says - The New... - 0 views

  • In it, Jason Gargac, 32, a driver for Uber and Lyft from Florissant, Mo., described an elaborate $3,000 rig of cameras that he used to record and live-stream passengers’ rides to the video platform Twitch. Sometimes passengers’ homes and names were revealed. Mr. Gargac told the newspaper that he sought out passengers who might make entertaining content, part of capturing and sharing the everyday reactions that earned him a small but growing following online. Mr. Gargac said he earned $3,500 from the streaming, through subscriptions, donations and tips. He said that at first he had informed passengers that he was recording them, but the videos felt “fake” and “produced.”
  • Mr. Gargac could not be reached for comment. Uber said in a statement Sunday that it had ended its partnership with Mr. Gargac and that “the troubling behavior in the videos is not in line with our Community Guidelines.” Lyft said in a statement that Mr. Gargac had been “deactivated.”
  • Ms. Rosenblat, who is writing a book called “Uberland: How Algorithms Are Rewriting the Rules of Work,” said she had studied the company for four years. There has been an upward trend in recording passengers, she said, driven by “good reasons” like ensuring drivers’ safety, or being able to vouch for the quality of their service. “What we’re seeing with this driver is just a totally different game,” she said. “This is, ‘How can I monetize passengers as content?’”
  • ...1 more annotation...
  • Mr. Gargac had placed a small sign on a passenger window that said the vehicle was equipped with recording devices and that “consent” was given by entering the car.
  •  
    "Sitting in the back of a cab can have a confessional allure: Sealed off to the world, you can take a private moment for yourself or have a conversation - casual or deeply intimate - with a driver you'll never see again. Now imagine finding out days later that those moments were being streamed live on the internet to thousands of people. What's more, some of those people paid to watch you, commenting on your appearance, sometimes explicitly, or musing about your livelihood. This was the reality for potentially hundreds of passengers of a ride-hailing service driver in St. Louis, according to a lengthy article published in The St. Louis Post-Dispatch this weekend."