
QN2019 / Group items tagged: fake


Aurialie Jublin

Bienvenue dans le «World Wide Fake» - Libération - 0 views

  • The Web has officially existed since March 1989. It was built up in successive strata, whose retro-archaeology might read as follows. First, the "World Wide Web": the Web of documents. Its users, its engineers, its interfaces and its economic interests: everything on planet web revolved around the documentary axis. The task was to index, classify and give access to what would very quickly become a near-infinity of documents, available first as text, then as images, in pages and sites.
  • Next came a "World Live Web", as everything sped up, from the production of content to its near-instantaneous availability in search-engine architectures that now prided themselves on indexing all the news, and doing so in real time.
  • Then the "World Life Web". The axis around which planet web turns is no longer that of documents but that of "profiles". This changes everything, on the trivial level of ergonomics as much as on the crucial level of economics.
  • Finally, with the Internet of Things comes the reign of the "World Wide Wear". The interfaces are now those of our clothes; they sit, in the form of connected speakers, in the very middle of our homes. Technologies that are literally "ready to wear" as much as they are ready and always "within reach". And with them comes the trivialization of surveillance regimes that leads straight to the "World Wide Orwell".
  • Today we are at the next stage: that of the "World Wide Fake". An environment, an ecosystem in which most interactions are artificially manufactured on the basis of a speculation whose only aim is to sustain itself. Born of a form of linguistic capitalism that has turned into surveillance capitalism, this speculation initially sought to keep us as attentionally captive as possible, constantly reminding us that we had to interact, notably through those cognitive foremen that are notifications. But today the "fake" is deployed within the toxic technical architectures of predatory platforms that have almost completely swallowed what used to be the Web's public space, forcing our uses into these private, walled-off spaces.
  • Today, "fake users with fake cookies and fake social media accounts perform fake mouse movements and activate fake clicks toward fake websites [...], creating a simulacrum of the Internet in which the only thing still real is the advertising," writes Max Read in a piece for New York Magazine.
  • We are there, and in even more staggering proportions: nearly 52% of worldwide internet traffic in 2016 was generated by bots. Fake users, in other words.
  • "Fake" is often just another name for "lie". And we seem to be discovering that everyone lies on the Web. Since each actor alone holds its own figures in the service of its own certainties or its own interests, how could it be otherwise? We discovered, almost with astonishment, that Facebook had lied about the audience figures for its videos, and that it also lied about its "engagement" metrics. We discovered that Google lied when asked whether the Holocaust had really happened. We came to understand that, on top of their toxic technical architectures, the big platforms each have their own regime of truth (popularity for Google, engagement for Facebook), which makes it even harder to build a common cultural space in which to form a society.
  • Even at the geopolitical level, fake accounts running fake ads have helped sway the outcome of real elections. The technologies of the artifact, "deep fakes", which make it cheap to doctor reality with a nearly undetectable effect of plausibility, are now on their way to becoming mainstream technologies: they can be used to replace one actor's face with another's in a film, but also to alter the vocalization of a politician's speech to make him say whatever one wants.
  • In the process, everything in our society that made it possible to establish the evidentiary value of an image, a speech, a testimony or a fact is being shattered, and we must relearn how to define and build it. For years now, together with others, I have been arguing for recognition of, and intervention in, the dimension of the big platforms that is not economic but simply political.
  • Hannah Arendt died in 1975 and therefore never knew the Internet. In a 1974 interview on the question of totalitarianism, she said this: "When everyone lies to you all the time, the result is not that you believe these lies, but that no one believes anything anymore. A people that can no longer believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do whatever you please." That is exactly the risk, and the promise, of the World Wide Fake if we do nothing about it: the combination of a speculative frenzy around an advertising industry of falsification and alteration, and the looming collapse of our collective capacity to form a society within a common public space. And with such a people you can do whatever you please.
  •  
    "Aujourd'hui, plus de la moitié du trafic internet mondial est généré par des bots, c'est-à-dire des faux utilisateurs, qui manipulent les audiences et orientent les débats. Et c'est notre capacité collective à faire société qui est en jeu."
Aurialie Jublin

Fake news et neurosciences - Albert Moukheiber : "Notre cerveau est attiré pa... - 0 views

  • The scientist draws a parallel between the way our brain works when faced with false information, which plays on emotion and answers the brain's need for explanations, and the way we react to magic tricks.
  • Finally, on the subject of manipulation, Albert Moukheiber also stresses how much our need to situate ourselves within the norm can influence our actions. It is on this "nudge" theory, moreover, that marketing relies.
  •  
    "Le docteur en neurosciences cognitives Albert Moukheiber publie "Votre cerveau vous joue des tours", un ouvrage sur la manière dont nos souvenirs peuvent être altérés et la propension que nous avons à adhérer aux fake news. Ces fausses informations, qui versent souvent dans la théorie du complot, répondent à un besoin de notre cerveau, explique le chercheur. En jouant sur l'émotionnel, une fake news va activer un "signal d'alerte" dans notre tête, nous faisant percevoir l'information comme plus crédible."
Aurialie Jublin

Which Face Is Real? - 0 views

  • Computers are good, but your visual processing systems are even better. If you know what to look for, you can spot these fakes at a single glance — at least for the time being. The hardware and software used to generate them will continue to improve, and it may be only a few years until humans fall behind in the arms race between forgery and detection.
  •  
    "Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance. "
Aurialie Jublin

[Fake] Internet serait-il devenu complètement fake? - Digital Society Forum - 0 views

  • On the Internet, less than 60% of traffic is thought to be human, the author explains. The rest of the "users" are bots, pieces of software operating autonomously on the network, to the point of blurring the line between human and non-human users. In 2013, half of YouTube's users were bots posing as human beings, which made the company's employees fear the advent of an era in which fraud-detection systems would judge bot activity to be real and human activity to be fake.
  • Over the past two years, Facebook is also said to have published erroneous figures on traffic referred from Facebook to external sites, on the reach of posts, and on the number of "views" of videos posted on the platform. Which, again, raises questions about the notion of what is "real" on the Internet.
  • "Everything that once seemed incontestably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real," the author laments. And the examples pile up: between the conspiracy videos proliferating on YouTube, the Russian trolls posing as Donald Trump supporters on Facebook and the "deepfake", a synthesis technique that uses artificial intelligence to falsify a person's face or voice in a video, the author worries about the collapse of any clear distinction between the real and the unreal on the Internet.
  • The example of Lil Miquela, an "influencer" followed by more than a million and a half people on Instagram, is particularly revealing in this respect. Regularly sharing her moods, her shopping sessions and her outings with friends on the social network between selfies, this Brazilian-American model is in fact an avatar, created with computer-generated imagery by a Californian start-up specializing in artificial intelligence and robotics. A fake model, then, but very real influence: here too, the author warns, the line between true and false is wearing thin.
  • "What has really disappeared from the internet is not reality, it is trust: the sense that the people and things we encounter there are what they claim to be," the author concludes. Remedying this state of affairs requires, in his view, reforming the economic model of the Internet that has allowed lying, distortion and falsification to become lucrative. Without such a reform, he believes, the Internet risks becoming a "fake" factory, with the advertising revenue the fakes generate as the only remaining reality.
  •  
    "Une chronique du New York Magazine alerte sur l'abondance de contenus faux sur Internet. Selon son auteur, les manipulations de données et de faits y auraient atteint un seuil critique, au point de compromettre notre capacité à distinguer le réel de l'irréel."
Aurialie Jublin

Giacometti peut-il nous aider à contrer les fake news ? - 0 views

  • The German philosophers Walter Benjamin and Theodor Adorno, who had faced the rise of fascism in the 1930s, came to the same conclusion: against the delirious speeches of the Nazis, the press could counter with figures all it liked; it was not enough. In their view, journalists had succumbed to the chimera of "impartial information". So the philosophers proposed another kind of journalism: narrative, flâneur-like, closer to experience. The philosopher Myriam Revault d'Allonnes says roughly the same thing in the book she has just published, La faiblesse du vrai (Seuil), but on a more political level: she says that political utopia and alternative facts have the same source, the capacity to imagine another world. So imagination must not be left to the manufacturers of fake news, who in fact often want to change nothing about the world, but used to think another world.
  •  
    "Le plus souvent - et on le voit aujourd'hui avec la manière dont la presse américaine traite les mensonges énoncés en rafale par Trump pendant cette campagne des midterms - c'est la rectification qui est utilisée. A une fake news, on oppose un fait, un chiffre. On cherche à rétablir la vérité en opposant à la fausse information une vision "raisonnée" et "savante", pour reprendre le vocabulaire de Giacometti. C'est nécessaire, mais, on le voit bien, ça ne suffit pas. Pourquoi ? Peut-être parce que cette vision ne correspond pas à ce que l'on voit, à ce que les gens voient. Cette vision est juste, bien sûr, mais elle n'est pas "immédiate ou affective", elle est sur un autre plan. Et donc, ça ne fonctionne pas. "
Aurialie Jublin

De l'utilité des fake news et autres rumeurs - 0 views

  • And what do we observe today? That the press is read less and less. That there is distrust of journalistic discourse. That the way information circulates on social networks belongs to the oral more than to the written (on the networks, what we are witnessing is less a vast correspondence than a vast conversation). And that these same social networks are an invitation for everyone to comment, though with far wider diffusion. So we are in a situation that is at once new, and not so new after all.
  • This proliferation of rumors under the Restoration was also, in a way, a good sign. In a period full of uncertainty, it was the sign of politicization, of a desire to talk about public affairs, the paradoxical sign of a democratic surge (which is precisely why the authorities worried about it so much). Once again there are echoes of what we are living through today.
  • Rest assured, I am not going to say that fake news is great, nor am I going to deliver a eulogy of state surveillance. But the authorities of that time had perhaps understood something we have somewhat forgotten: there is something to be learned from false news. Sometimes better than the facts themselves, fake news tells of anxieties, hopes and obsessions. What people believe, or pretend to believe, is as interesting as what they know. So fake news has its uses; that is the point I wanted to make.
  •  
    "L'historien François Ploux y raconte la prolifération des bruits et des rumeurs pendant cette période qui s'étale entre 1815 et 1830. Au point que la préfecture de police de Paris avait créé un vaste réseau d'informateurs qui passaient leur temps dans les lieux de discussions - les salons, les cafés, la Halle, les Tuileries etc. - et envoyaient chaque jour au préfet des rapports très détaillés. Tout était consigné, de la rumeur la plus absurde à la médisance en passant par le commentaire politique. Ce qui intéressait la police n'était pas de savoir qui disait quoi, mais d'essayer de saisir l'état de l'opinion. Le pouvoir de l'époque avait compris que l'opinion se fabrique partout (dans la rue et dans les dîners en ville), qu'il se fabrique avec de tout - du vrai, du faux, de l'analyse et du fantasme -, que tout est important et mérite d'être observé. "
Aurialie Jublin

Dans un monde de la post-vérité, de nouvelles formes de luttes émergent | Met... - 0 views

  • The term "post-truth" was first used in 2004 by the American writer Ralph Keyes. He describes post-truth as the emergence of a system, or a society, in which the difference between true and false no longer matters. This definition burst into full view in 2016 with the Brexit vote and the election of Donald Trump to the presidency of the United States. And "fake news" was named word of the year for 2017.
  • That is why RSF has created a commission on information and democracy, made up of 25 figures of 18 different nationalities. It includes Nobel laureates, specialists in new technologies, journalists, legal scholars and former heads of international organizations. This commission has issued a declaration that "aims to enter into this new logic in order to define the obligations of the structuring entities of the public space". RSF has therefore selected twelve countries that will commit to signing a pact on information and democracy based on that declaration. The goal, in time, is to create a body of independent experts, holding a monopoly on initiative, able to issue proposals that states can then implement.
  • It rests on strong principles: the right to reliable information, freedom of the press, privacy, the responsibility of participants in public debate, and the transparency of powers.
  • To fight this, Gerald Bronner advocates a new form of regulation: individual regulation. It consists of developing critical thinking, even what is called an "analytical thinking system". It has been shown scientifically that stimulating analytical thinking reduces adherence to conspiracy theories. That is why the sociologist recommends teaching students, from primary school through university, how their brain works. The main bias at issue is the sample-size bias.
  •  
    "« Le sort de la vérité a toujours été très fragile », a déclaré il y a quelques jours Edgar Morin lors du colloque sur la post-vérité à la Cité des sciences et de l'industrie à Paris. A cette occasion, chercheurs, décideurs publics et professionnels des médias ont décrit cette société de la désinformation, rétablit quelques faits sur les fake news et proposé quelques pistes pour lutter contre l'infox.  "
Aurialie Jublin

It's Time to Break Up Facebook - The New York Times - 0 views

  • Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.
  • Mark is a good, kind person. But I’m angry that his focus on growth led him to sacrifice security and civility for clicks. I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I’m worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them.
  • We are a nation with a tradition of reining in monopolies, no matter how well intentioned the leaders of these companies may be. Mark’s power is unprecedented and un-American. It is time to break up Facebook.
  • We already have the tools we need to check the domination of Facebook. We just seem to have forgotten about them. America was built on the idea that power should not be concentrated in any one person, because we are all fallible. That’s why the founders created a system of checks and balances. They didn’t need to foresee the rise of Facebook to understand the threat that gargantuan companies would pose to democracy. Jefferson and Madison were voracious readers of Adam Smith, who believed that monopolies prevent the competition that spurs innovation and leads to economic growth.
  • The Sherman Antitrust Act of 1890 outlawed monopolies. More legislation followed in the 20th century, creating legal and regulatory structures to promote competition and hold the biggest companies accountable. The Department of Justice broke up monopolies like Standard Oil and AT&T.
  • For many people today, it’s hard to imagine government doing much of anything right, let alone breaking up a company like Facebook. This isn’t by coincidence. Starting in the 1970s, a small but dedicated group of economists, lawyers and policymakers sowed the seeds of our cynicism. Over the next 40 years, they financed a network of think tanks, journals, social clubs, academic centers and media outlets to teach an emerging generation that private interests should take precedence over public ones. Their gospel was simple: “Free” markets are dynamic and productive, while government is bureaucratic and ineffective. By the mid-1980s, they had largely managed to relegate energetic antitrust enforcement to the history books.
  • It was this drive to compete that led Mark to acquire, over the years, dozens of other companies, including Instagram and WhatsApp in 2012 and 2014. There was nothing unethical or suspicious, in my view, in these moves.
  • Over a decade later, Facebook has earned the prize of domination. It is worth half a trillion dollars and commands, by my estimate, more than 80 percent of the world’s social networking revenue. It is a powerful monopoly, eclipsing all of its rivals and erasing competition from the social networking category. This explains why, even during the annus horribilis of 2018, Facebook’s earnings per share increased by an astounding 40 percent compared with the year before. (I liquidated my Facebook shares in 2012, and I don’t invest directly in any social media companies.)
  • Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved. In one of the government’s few attempts to rein in the company, the F.T.C. in 2011 issued a consent decree that Facebook not share any private information beyond what users already agreed to. Facebook largely ignored the decree. Last month, the day after the company predicted in an earnings call that it would need to pay up to $5 billion as a penalty for its negligence — a slap on the wrist — Facebook’s shares surged 7 percent, adding $30 billion to its value, six times the size of the fine.
  • As markets become more concentrated, the number of new start-up businesses declines. This holds true in other high-tech areas dominated by single companies, like search (controlled by Google) and e-commerce (taken over by Amazon). Meanwhile, there has been plenty of innovation in areas where there is no monopolistic domination, such as in workplace productivity (Slack, Trello, Asana), urban transportation (Lyft, Uber, Lime, Bird) and cryptocurrency exchanges (Ripple, Coinbase, Circle).
  • Facebook’s business model is built on capturing as much of our attention as possible to encourage people to create and share more information about who they are and who they want to be. We pay for Facebook with our data and our attention, and by either measure it doesn’t come cheap.
  • The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people. Facebook engineers write algorithms that select which users’ comments or experiences end up displayed in the News Feeds of friends and family. These rules are proprietary and so complex that many Facebook employees themselves don’t understand them.
  • Facebook has responded to many of the criticisms of how it manages speech by hiring thousands of contractors to enforce the rules that Mark and senior executives develop. After a few weeks of training, these contractors decide which videos count as hate speech or free speech, which images are erotic and which are simply artistic, and which live streams are too violent to be broadcast. (The Verge reported that some of these moderators, working through a vendor in Arizona, were paid $28,800 a year, got limited breaks and faced significant mental health risks.)
  • As if Facebook’s opaque algorithms weren’t enough, last year we learned that Facebook executives had permanently deleted their own messages from the platform, erasing them from the inboxes of recipients; the justification was corporate security concerns. When I look at my years of Facebook messages with Mark now, it’s just a long stream of my own light-blue comments, clearly written in response to words he had once sent me. (Facebook now offers this as a feature to all users.)
  • Mark used to insist that Facebook was just a “social utility,” a neutral platform for people to communicate what they wished. Now he recognizes that Facebook is both a platform and a publisher and that it is inevitably making decisions about values. The company’s own lawyers have argued in court that Facebook is a publisher and thus entitled to First Amendment protection.
  • Mark knows that this is too much power and is pursuing a twofold strategy to mitigate it. He is pivoting Facebook’s focus toward encouraging more private, encrypted messaging that Facebook’s employees can’t see, let alone control. Second, he is hoping for friendly oversight from regulators and other industry executives.
  • In an op-ed essay in The Washington Post in March, he wrote, “Lawmakers often tell me we have too much power over speech, and I agree.” And he went even further than before, calling for more government regulation — not just on speech, but also on privacy and interoperability, the ability of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another.
  • Facebook isn’t afraid of a few more rules. It’s afraid of an antitrust case and of the kind of accountability that real government oversight would bring.
  • Mark may never have a boss, but he needs to have some check on his power. The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people. First, Facebook should be separated into multiple companies. The F.T.C., in conjunction with the Justice Department, should enforce antitrust laws by undoing the Instagram and WhatsApp acquisitions and banning future acquisitions for several years. The F.T.C. should have blocked these mergers, but it’s not too late to act. There is precedent for correcting bad decisions — as recently as 2009, Whole Foods settled antitrust complaints by selling off the Wild Oats brand and stores that it had bought a few years earlier.
  • Still others worry that the breakup of Facebook or other American tech companies could be a national security problem. Because advancements in artificial intelligence require immense amounts of data and computing power, only large companies like Facebook, Google and Amazon can afford these investments, they say. If American companies become smaller, the Chinese will outpace us. While serious, these concerns do not justify inaction. Even after a breakup, Facebook would be a hugely profitable business with billions to invest in new technologies — and a more competitive market would only encourage those investments. If the Chinese did pull ahead, our government could invest in research and development and pursue tactical trade policy, just as it is doing today to hold China’s 5G technology at bay.
  • The cost of breaking up Facebook would be next to zero for the government, and lots of people stand to gain economically. A ban on short-term acquisitions would ensure that competitors, and the investors who take a bet on them, would have the space to flourish. Digital advertisers would suddenly have multiple companies vying for their dollars.
  • But the biggest winners would be the American people. Imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising and another that would allow users to customize and tweak their feeds as they saw fit. No one knows exactly what Facebook’s competitors would offer to differentiate themselves. That’s exactly the point.
  • Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy. The Europeans have made headway on privacy with the General Data Protection Regulation, a law that guarantees users a minimal level of protection. A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time. The agency should also be charged with guaranteeing basic interoperability across platforms.
  • Finally, the agency should create guidelines for acceptable speech on social media. This idea may seem un-American — we would never stand for a government agency censoring speech. But we already have limits on yelling “fire” in a crowded theater, child pornography, speech intended to provoke violence and false statements to manipulate stock prices. We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.
  • These are difficult challenges. I worry that government regulators will not be able to keep up with the pace of digital innovation. I worry that more competition in social networking might lead to a conservative Facebook and a liberal one, or that newer social networks might be less secure if government regulation is weak. But sticking with the status quo would be worse: If we don’t have public servants shaping these policies, corporations will.
  • Similarly, the Justice Department’s 1970s suit accusing IBM of illegally maintaining its monopoly on personal computer sales ended in a stalemate. But along the way, IBM changed many of its behaviors. It stopped bundling its hardware and software, chose an extremely open design for the operating system in its personal computers and did not exercise undue control over its suppliers. Professor Wu has written that this “policeman at the elbow” led IBM to steer clear “of anything close to anticompetitive conduct, for fear of adding to the case against it.”
  • Finally, an aggressive case against Facebook would persuade other behemoths like Google and Amazon to think twice about stifling competition in their own sectors, out of fear that they could be next. If the government were to use this moment to resurrect an effective competition standard that takes a broader view of the full cost of “free” products, it could affect a whole host of industries.
  • I take responsibility for not sounding the alarm earlier. Don Graham, a former Facebook board member, has accused those who criticize the company now as having “all the courage of the last man leaping on the pile at a football game.” The financial rewards I reaped from working at Facebook radically changed the trajectory of my life, and even after I cashed out, I watched in awe as the company grew. It took the 2016 election fallout and Cambridge Analytica to awaken me to the dangers of Facebook’s monopoly. But anyone suggesting that Facebook is akin to a pinned football player misrepresents its resilience and power.
  • This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.
  •  
    "Since then, Mark's personal reputation and the reputation of Facebook have taken a nose-dive. The company's mistakes - the sloppy privacy practices that dropped tens of millions of users' data into a political consulting firm's lap; the slow response to Russian agents, violent rhetoric and fake news; and the unbounded drive to capture ever more of our time and attention - dominate the headlines. It's been 15 years since I co-founded Facebook at Harvard, and I haven't worked at the company in a decade. But I feel a sense of anger and responsibility."
Aurialie Jublin

"I Was Devastated": Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some R... - 2 views

  • Initially, Berners-Lee’s innovation was intended to help scientists share data across a then obscure platform called the Internet, a version of which the U.S. government had been using since the 1960s. But owing to his decision to release the source code for free—to make the Web an open and democratic platform for all—his brainchild quickly took on a life of its own.
  • He also envisioned that his invention could, in the wrong hands, become a destroyer of worlds, as Robert Oppenheimer once infamously observed of his own creation. His prophecy came to life, most recently, when revelations emerged that Russian hackers interfered with the 2016 presidential election, or when Facebook admitted it exposed data on more than 80 million users to a political research firm, Cambridge Analytica, which worked for Donald Trump’s campaign. This episode was the latest in an increasingly chilling narrative. In 2012, Facebook conducted secret psychological experiments on nearly 700,000 users. Both Google and Amazon have filed patent applications for devices designed to listen for mood shifts and emotions in the human voice.
  • This agony, however, has had a profound effect on Berners-Lee. He is now embarking on a third act—determined to fight back through both his celebrity status and, notably, his skill as a coder. In particular, Berners-Lee has, for some time, been working on a new platform, Solid, to reclaim the Web from corporations and return it to its democratic roots.
  • What made the Web powerful, and ultimately dominant, however, would also one day prove to be its greatest vulnerability: Berners-Lee gave it away for free; anyone with a computer and an Internet connection could not only access it but also build off it. Berners-Lee understood that the Web needed to be unfettered by patents, fees, royalties, or any other controls in order to thrive. This way, millions of innovators could design their own products to take advantage of it.
  • “Tim and Vint made the system so that there could be many players that didn’t have an advantage over each other.” Berners-Lee, too, remembers the quixotism of the era. “The spirit there was very decentralized. The individual was incredibly empowered. It was all based on there being no central authority that you had to go to to ask permission,” he said. “That feeling of individual control, that empowerment, is something we’ve lost.”
  • The power of the Web wasn’t taken or stolen. We, collectively, by the billions, gave it away with every signed user agreement and intimate moment shared with technology. Facebook, Google, and Amazon now monopolize almost everything that happens online, from what we buy to the news we read to who we like. Along with a handful of powerful government agencies, they are able to monitor, manipulate, and spy in once unimaginable ways.
  • The idea is simple: re-decentralize the Web. Working with a small team of developers, he spends most of his time now on Solid, a platform designed to give individuals, rather than corporations, control of their own data. “There are people working in the lab trying to imagine how the Web could be different. How society on the Web could look different. What could happen if we give people privacy and we give people control of their data,” Berners-Lee told me. “We are building a whole eco-system.”
  • For now, the Solid technology is still new and not ready for the masses. But the vision, if it works, could radically change the existing power dynamics of the Web. The system aims to give users a platform by which they can control access to the data and content they generate on the Web. This way, users can choose how that data gets used rather than, say, Facebook and Google doing with it as they please. Solid’s code and technology is open to all—anyone with access to the Internet can come into its chat room and start coding.
  • It’s still the early days for Solid, but Berners-Lee is moving fast. Those who work closely with him say he has thrown himself into the project with the same vigor and determination he employed upon the Web’s inception. Popular sentiment also appears to facilitate his time frame. In India, a group of activists successfully blocked Facebook from implementing a new service that would have effectively controlled access to the Web for huge swaths of the country’s population. In Germany, one young coder built a decentralized version of Twitter called Mastodon. In France, another group created Peertube as a decentralized alternative to YouTube.
  • Berners-Lee is not the leader of this revolution—by definition, the decentralized Web shouldn’t have one—but he is a powerful weapon in the fight. And he fully recognizes that re-decentralizing the Web is going to be a lot harder than inventing it was in the first place.
  • But laws written now don’t anticipate future technologies. Nor do lawmakers—many badgered by corporate lobbyists—always choose to protect individual rights. In December, lobbyists for telecom companies pushed the Federal Communications Commission to roll back net-neutrality rules, which protect equal access to the Internet. In January, the U.S. Senate voted to advance a bill that would allow the National Security Agency to continue its mass online-surveillance program. Google’s lobbyists are now working to modify rules on how companies can gather and store biometric data, such as fingerprints, iris scans, and facial-recognition images.
  • It’s hard to believe that anyone—even Zuckerberg—wants the 1984 version. He didn’t found Facebook to manipulate elections; Jack Dorsey and the other Twitter founders didn’t intend to give Donald Trump a digital bullhorn. And this is what makes Berners-Lee believe that this battle over our digital future can be won.
  • When asked what ordinary people can do, Berners-Lee replied, “You don’t have to have any coding skills. You just have to have a heart to decide enough is enough. Get out your Magic Marker and your signboard and your broomstick. And go out on the streets.” In other words, it’s time to rise against the machines.
  •  
    "We demonstrated that the Web had failed instead of served humanity, as it was supposed to have done, and failed in many places," he told me. The increasing centralization of the Web, he says, has "ended up producing-with no deliberate action of the people who designed the platform-a large-scale emergent phenomenon which is anti-human." "While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people-and can be fixed by people." Tim Berners-Lee
  •  
    "Berners-Lee has seen his creation debased by everything from fake news to mass surveillance. But he's got a plan to fix it."
Aurialie Jublin

« Dans mon livre, j'ai imaginé un mélange entre l'ONU et Google » - 0 views

  • Another anachronism is the idea that the geographically larger a country is, the more powerful it is. Economic strength and the size of a territory are less and less intertwined. I find it more interesting to imagine a more flexible organization, one that gives each person the possibility of choosing the government they want.
  • The nation-state is the idea that we have an identity tied to the state rather than to our ethnicity, our religion or our language. That is progress, but it is only one step if we want to escape the determinism of identities tied to our genes, to our birth. This step still binds us by birth, since, unless you have a lot of money, it remains very difficult to change nationality. We must keep looking for further steps toward another, more global identity, as living beings. From an ecological point of view, we must also start thinking of ourselves as elements of an ecosystem.
  • Yes, there is a large central bureaucracy that facilitates all of this, simply called "Information". It is a kind of mix between the UN and Google, managing all of the world's information. It brings information to everyone while exercising global surveillance. It is a very powerful organization. The idea is to question the possibility, and the soundness, of such a global organization, which claims a neutrality that is in reality impossible.
  • Today, in France as in the United States and in most democracies, there is the same problem of a government that has become monolithic and offers citizens little choice and few alternatives. There is also the problem of the fragmentation of information and of the information "bubbles" fostered by social networks such as Twitter, which reinforce my opinions by suggesting that I follow only people who resemble me, and which no longer let me distinguish what is information from what is opinion.
  • The Internet and social networks have the potential to be very "democratizing". I imagined an organization that would avoid this fragmentation of information. We need to explore managing information as a public good. But the question of who controls such a body obviously raises difficulties…
  •  
    "Le rattachement à un gouvernement par la géographie est aujourd'hui un anachronisme. Ça ne fait plus sens. J'ai travaillé dans beaucoup de pays connaissant des mouvements de sécession régionale ; presque tous les pays du monde sont concernés. C'est un gros problème pour la démocratie. On a d'un coté des populations qui ne se sentent plus appartenir à l'État-nation, qui veulent en sortir, et de l'autre des migrants qui veulent y entrer. C'est une contradiction fondamentale qu'il faut repenser. Ça ne marche plus d'avoir des frontières fixes qui ne changent pas avec les générations, et qui cherchent à conserver un peuple ayant un sentiment cohérent d'appartenance. C'est ridicule. C'est une résistance à l'idée que le monde change, que la démographie change, ainsi que les idées des gens."
Aurialie Jublin

St. Louis Uber and Lyft Driver Secretly Live-Streamed Passengers, Report Says - The New... - 0 views

  • In it, Jason Gargac, 32, a driver for Uber and Lyft from Florissant, Mo., described an elaborate $3,000 rig of cameras that he used to record and live-stream passengers’ rides to the video platform Twitch. Sometimes passengers’ homes and names were revealed. Mr. Gargac told the newspaper that he sought out passengers who might make entertaining content, part of capturing and sharing the everyday reactions that earned him a small but growing following online. Mr. Gargac said he earned $3,500 from the streaming, through subscriptions, donations and tips. He said that at first he had informed passengers that he was recording them, but the videos felt “fake” and “produced.”
  • Mr. Gargac could not be reached for comment. Uber said in a statement Sunday that it had ended its partnership with Mr. Gargac and that “the troubling behavior in the videos is not in line with our Community Guidelines.” Lyft said in a statement that Mr. Gargac had been “deactivated.”
  • Ms. Rosenblat, who is writing a book called “Uberland: How Algorithms Are Rewriting the Rules of Work,” said she had studied the company for four years. There has been an upward trend in recording passengers, she said, driven by “good reasons” like ensuring drivers’ safety, or being able to vouch for the quality of their service.“What we’re seeing with this driver is just a totally different game,” she said. “This is, ‘How can I monetize passengers as content?’”
  • Mr. Gargac had placed a small sign on a passenger window that said the vehicle was equipped with recording devices and that “consent” was given by entering the car.
  •  
    "Sitting in the back of a cab can have a confessional allure: Sealed off to the world, you can take a private moment for yourself or have a conversation - casual or deeply intimate - with a driver you'll never see again. Now imagine finding out days later that those moments were being streamed live on the internet to thousands of people. What's more, some of those people paid to watch you, commenting on your appearance, sometimes explicitly, or musing about your livelihood. This was the reality for potentially hundreds of passengers of a ride-hailing service driver in St. Louis, according to a lengthy article published in The St. Louis Post-Dispatch this weekend."
Aurialie Jublin

Réseaux sociaux : « Les utilisateurs sont conscients de leur dépendance aux a... - 0 views

  • Facebook's audience has fallen since the revelations about its role in spreading "fake news" during the last American presidential election and since the Cambridge Analytica affair. The digital giants are now broadly engaged in an attempt at reconquest, asserting that they will act in a more responsible, more regulated way. But their objective is also to obtain new metrics so as to further refine the deliberate extraction of information about their users. Their model rests on these methods of exploitation: capture attention and build our data profile in order to then sell advertising.
  • How can things be changed, more broadly? H. G.: I think this question of attention must be taken onto the political level, with more regulation. Together with the Fondation Internet nouvelle génération (Fing), we are also calling for digital companies to appoint mediators in charge of attention-related issues, so that these issues are finally taken seriously.
  •  
    "Facebook a annoncé début août la mise en place en cours d'un outil permettant de mieux maîtriser le temps passé sur le réseau social. Google et Apple avaient déjà lancé ce genre d'option. Analyse d'Hubert Guillaud, spécialiste des questions d'attention sur les réseaux et responsable de la veille à la Fondation Internet nouvelle génération (Fing)."
Aurialie Jublin

Facebook is rating the trustworthiness of its users on a scale from zero to 1 - The Was... - 0 views

  • A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
  • It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have a score and in what ways the scores are used.
  • But how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming — a predicament that the firms increasingly find themselves in as they weigh calls for more transparency around their decision-making.
  • Lyons said she soon realized that many people were reporting posts as false simply because they did not agree with the content. Because Facebook forwards posts that are marked as false to third-party fact-checkers, she said it was important to build systems to assess whether the posts were likely to be false to make efficient use of fact-checkers’ time. That led her team to develop ways to assess whether the people who were flagging posts as false were themselves trustworthy.
  •  
    "Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1. The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors."
Aurialie Jublin

Amazon Investigates Employees Leaking Data for Bribes - WSJ - 0 views

  •  
    "Employees, through intermediaries, are offering internal data to help merchants increase their sales on the website"
Aurialie Jublin

An Apology for the Internet - From the People Who Built It - 1 views

  • There have always been outsiders who criticized the tech industry — even if their concerns have been drowned out by the oohs and aahs of consumers, investors, and journalists. But today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being “weaponized.” Even Sean Parker, Facebook’s first president, has blasted social media as a dangerous form of psychological manipulation. “God only knows what it’s doing to our children’s brains,” he lamented recently.
  • To keep the internet free — while becoming richer, faster, than anyone in history — the technological elite needed something to attract billions of users to the ads they were selling. And that something, it turns out, was outrage. As Jaron Lanier, a pioneer in virtual reality, points out, anger is the emotion most effective at driving “engagement” — which also makes it, in a market for attention, the most profitable one. By creating a self-perpetuating loop of shock and recrimination, social media further polarized what had already seemed, during the Obama years, an impossibly and irredeemably polarized country.
  • The Architects (In order of appearance.) Jaron Lanier, virtual-reality pioneer. Founded first company to sell VR goggles; worked at Atari and Microsoft. Antonio García Martínez, ad-tech entrepreneur. Helped create Facebook’s ad machine. Ellen Pao, former CEO of Reddit. Filed major gender-discrimination lawsuit against VC firm Kleiner Perkins. Can Duruk, programmer and tech writer. Served as project lead at Uber. Kate Losse, Facebook employee No. 51. Served as Mark Zuckerberg’s speechwriter. Tristan Harris, product designer. Wrote internal Google presentation about addictive and unethical design. Rich “Lowtax” Kyanka, entrepreneur who founded influential message board Something Awful. Ethan Zuckerman, MIT media scholar. Invented the pop-up ad. Dan McComas, former product chief at Reddit. Founded community-based platform Imzy. Sandy Parakilas, product manager at Uber. Ran privacy compliance for Facebook apps. Guillaume Chaslot, AI researcher. Helped develop YouTube’s algorithmic recommendation system. Roger McNamee, VC investor. Introduced Mark Zuckerberg to Sheryl Sandberg. Richard Stallman, MIT programmer. Created legendary software GNU and Emacs.
  • How It Went Wrong, in 15 Steps. Step 1 Start With Hippie Good Intentions …
  • I think two things are at the root of the present crisis. One was the idealistic view of the internet — the idea that this is the great place to share information and connect with like-minded people. The second part was the people who started these companies were very homogeneous. You had one set of experiences, one set of views, that drove all of the platforms on the internet. So the combination of this belief that the internet was a bright, positive place and the very similar people who all shared that view ended up creating platforms that were designed and oriented around free speech.
  • Step 2 … Then mix in capitalism on steroids. To transform the world, you first need to take it over. The planetary scale and power envisioned by Silicon Valley’s early hippies turned out to be as well suited for making money as they were for saving the world.
  • Step 3 The arrival of Wall Streeters didn’t help … Just as Facebook became the first overnight social-media success, the stock market crashed, sending money-minded investors westward toward the tech industry. Before long, a handful of companies had created a virtual monopoly on digital life.
  • Ethan Zuckerman: Over the last decade, the social-media platforms have been working to make the web almost irrelevant. Facebook would, in many ways, prefer that we didn’t have the internet. They’d prefer that we had Facebook.
  • Step 4 … And we paid a high price for keeping it free. To avoid charging for the internet — while becoming fabulously rich at the same time — Silicon Valley turned to digital advertising. But to sell ads that target individual users, you need to grow a big audience — and use advancing technology to gather reams of personal data that will enable you to reach them efficiently.
  • Harris: If you’re YouTube, you want people to register as many accounts as possible, uploading as many videos as possible, driving as many views to those videos as possible, so you can generate lots of activity that you can sell to advertisers. So whether or not the users are real human beings or Russian bots, whether or not the videos are real or conspiracy theories or disturbing content aimed at kids, you don’t really care. You’re just trying to drive engagement to the stuff and maximize all that activity. So everything stems from this engagement-based business model that incentivizes the most mindless things that harm the fabric of society.
  • Step 5 Everything was designed to be really, really addictive. The social-media giants became “attention merchants,” bent on hooking users no matter the consequences. “Engagement” was the euphemism for the metric, but in practice it evolved into an unprecedented machine for behavior modification.
  • Harris: That blue Facebook icon on your home screen is really good at creating unconscious habits that people have a hard time extinguishing. People don’t see the way that their minds are being manipulated by addiction. Facebook has become the largest civilization-scale mind-control machine that the world has ever seen.
  • Step 6 At first, it worked — almost too well. None of the companies hid their plans or lied about how their money was made. But as users became deeply enmeshed in the increasingly addictive web of surveillance, the leading digital platforms became wildly popular.
  • Pao: There’s this idea that, “Yes, they can use this information to manipulate other people, but I’m not gonna fall for that, so I’m protected from being manipulated.” Slowly, over time, you become addicted to the interactions, so it’s hard to opt out. And they just keep taking more and more of your time and pushing more and more fake news. It becomes easy just to go about your life and assume that things are being taken care of.
  • McNamee: If you go back to the early days of propaganda theory, Edward Bernays had a hypothesis that to implant an idea and make it universally acceptable, you needed to have the same message appearing in every medium all the time for a really long period of time. The notion was it could only be done by a government. Then Facebook came along, and it had this ability to personalize for every single user. Instead of being a broadcast model, it was now 2.2 billion individualized channels. It was the most effective product ever created to revolve around human emotions.
  • Step 7 No one from Silicon Valley was held accountable … No one in the government — or, for that matter, in the tech industry’s user base — seemed interested in bringing such a wealthy, dynamic sector to heel.
  • Step 8 … Even as social networks became dangerous and toxic. With companies scaling at unprecedented rates, user security took a backseat to growth and engagement. Resources went to selling ads, not protecting users from abuse.
  • Lanier: Every time there’s some movement like Black Lives Matter or #MeToo, you have this initial period where people feel like they’re on this magic-carpet ride. Social media is letting them reach people and organize faster than ever before. They’re thinking, Wow, Facebook and Twitter are these wonderful tools of democracy. But it turns out that the same data that creates a positive, constructive process like the Arab Spring can be used to irritate other groups. So every time you have a Black Lives Matter, social media responds by empowering neo-Nazis and racists in a way that hasn’t been seen in generations. The original good intention winds up empowering its opposite.
  • Chaslot: As an engineer at Google, I would see something weird and propose a solution to management. But just noticing the problem was hurting the business model. So they would say, “Okay, but is it really a problem?” They trust the structure. For instance, I saw this conspiracy theory that was spreading. It’s really large — I think the algorithm may have gone crazy. But I was told, “Don’t worry — we have the best people working on it. It should be fine.” Then they conclude that people are just stupid. They don’t want to believe that the problem might be due to the algorithm.
  • Parakilas: One time a developer who had access to Facebook’s data was accused of creating profiles of people without their consent, including children. But when we heard about it, we had no way of proving whether it had actually happened, because we had no visibility into the data once it left Facebook’s servers. So Facebook had policies against things like this, but it gave us no ability to see what developers were actually doing.
  • McComas: Ultimately the problem Reddit has is the same as Twitter: By focusing on growth and growth only, and ignoring the problems, they amassed a large set of cultural norms on their platforms that stem from harassment or abuse or bad behavior. They have worked themselves into a position where they’re completely defensive and they can just never catch up on the problem. I don’t see any way it’s going to improve. The best they can do is figure out how to hide the bad behavior from the average user.
  • Step 9 … And even as they invaded our privacy. The more features Facebook and other platforms added, the more data users willingly, if unwittingly, released to them and the data brokers who power digital advertising.
  • Richard Stallman: What is data privacy? That means that if a company collects data about you, it should somehow protect that data. But I don’t think that’s the issue. The problem is that these companies are collecting data about you, period. We shouldn’t let them do that. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical extreme likelihood, which is enough to make collection a problem.
  • Losse: I’m not surprised at what’s going on now with Cambridge Analytica and the scandal over the election. For long time, the accepted idea at Facebook was: Giving developers as much data as possible to make these products is good. But to think that, you also have to not think about the data implications for users. That’s just not your priority.
  • Step 10 Then came 2016. The election of Donald Trump and the triumph of Brexit, two campaigns powered in large part by social media, demonstrated to tech insiders that connecting the world — at least via an advertising-surveillance scheme — doesn’t necessarily lead to that hippie utopia.
  • Chaslot: I realized personally that things were going wrong in 2011, when I was working at Google. I was working on this YouTube recommendation algorithm, and I realized that the algorithm was always giving you the same type of content. For instance, if I give you a video of a cat and you watch it, the algorithm thinks, Oh, he must really like cats. That creates these filter bubbles where people just see one type of information. But when I notified my managers at Google and proposed a solution that would give a user more control so he could get out of the filter bubble, they realized that this type of algorithm would not be very beneficial for watch time. They didn’t want to push that, because the entire business model is based on watch time.
  • Step 11 Employees are starting to revolt. Tech-industry executives aren’t likely to bite the hand that feeds them. But maybe their employees — the ones who signed up for the mission as much as the money — can rise up and make a change.
  • Harris: There’s a massive demoralizing wave that is hitting Silicon Valley. It’s getting very hard for companies to attract and retain the best engineers and talent when they realize that the automated system they’ve built is causing havoc everywhere around the world. So if Facebook loses a big chunk of its workforce because people don’t want to be part of that perverse system anymore, that is a very powerful and very immediate lever to force them to change.
  • Duruk: I was at Uber when all the madness was happening there, and it did affect recruiting and hiring. I don’t think these companies are going to go down because they can’t attract the right talent. But there’s going to be a measurable impact. It has become less of a moral positive now — you go to Facebook to write some code and then you go home. They’re becoming just another company.
  • Step 12 To fix it, we’ll need a new business model … If the problem is in the way the Valley makes money, it’s going to have to make money a different way. Maybe by trying something radical and new — like charging users for goods and services.
  • Parakilas: They’re going to have to change their business model quite dramatically. They say they want to make time well spent the focus of their product, but they have no incentive to do that, nor have they created a metric by which they would measure that. But if Facebook charged a subscription instead of relying on advertising, then people would use it less and Facebook would still make money. It would be equally profitable and more beneficial to society. In fact, if you charged users a few dollars a month, you would equal the revenue Facebook gets from advertising. It’s not inconceivable that a large percentage of their user base would be willing to pay a few dollars a month.
  • Step 13 … And some tough regulation. While we’re at it, where has the government been in all this? 
  • Stallman: We need a law. Fuck them — there’s no reason we should let them exist if the price is knowing everything about us. Let them disappear. They’re not important — our human rights are important. No company is so important that its existence justifies setting up a police state. And a police state is what we’re heading toward.
  • Duruk: The biggest existential problem for them would be regulation. Because it’s clear that nothing else will stop these companies from using their size and their technology to just keep growing. Without regulation, we’ll basically just be complaining constantly, and not much will change.
  • McNamee: Three things. First, there needs to be a law against bots and trolls impersonating other people. I’m not saying no bots. I’m just saying bots have to be really clearly marked. Second, there have to be strict age limits to protect children. And third, there has to be genuine liability for platforms when their algorithms fail. If Google can’t block the obviously phony story that the kids in Parkland were actors, they need to be held accountable.
  • Stallman: We need a law that requires every system to be designed in a way that achieves its basic goal with the least possible collection of data. Let’s say you want to ride in a car and pay for the ride. That doesn’t fundamentally require knowing who you are. So services which do that must be required by law to give you the option of paying cash, or using some other anonymous-payment system, without being identified. They should also have ways you can call for a ride without identifying yourself, without having to use a cell phone. Companies that won’t go along with this — well, they’re welcome to go out of business. Good riddance.
  • Step 14 Maybe nothing will change. The scariest possibility is that nothing can be done — that the behemoths of the new internet are too rich, too powerful, and too addictive for anyone to fix.
  • García: Look, I mean, advertising sucks, sure. But as the ad tech guys say, “We’re the people who pay for the internet.” It’s hard to imagine a different business model other than advertising for any consumer internet app that depends on network effects.
  • Step 15 … Unless, at the very least, some new people are in charge. If Silicon Valley’s problems are a result of bad decision-making, it might be time to look for better decision-makers. One place to start would be outside the homogeneous group currently in power.
  • Pao: I’ve urged Facebook to bring in people who are not part of a homogeneous majority to their executive team, to every product team, to every strategy discussion. The people who are there now clearly don’t understand the impact of their platforms and the nature of the problem. You need people who are living the problem to clarify the extent of it and help solve it.
  • Things That Ruined the Internet
  • Cookies (1994) The original surveillance tool of the internet. Developed by programmer Lou Montulli to eliminate the need for repeated log-ins, cookies also enabled third parties like Google to track users across the web. The risk of abuse was low, Montulli thought, because only a “large, publicly visible company” would have the capacity to make use of such data. The result: digital ads that follow you wherever you go online.
  • The Farmville vulnerability (2007) When Facebook opened up its social network to third-party developers, enabling them to build apps that users could share with their friends, it inadvertently opened the door a bit too wide. By tapping into user accounts, developers could download a wealth of personal data — which is exactly what a political-consulting firm called Cambridge Analytica did to 87 million Facebook users.
  • Algorithmic sorting (2006) It’s how the internet serves up what it thinks you want — automated calculations based on dozens of hidden metrics. Facebook’s News Feed uses it every time you hit refresh, and so does YouTube. It’s highly addictive — and it keeps users walled off in their own personalized loops. “When social media is designed primarily for engagement,” tweets Guillaume Chaslot, the engineer who designed YouTube’s algorithm, “it is not surprising that it hurts democracy and free speech.” (A toy sketch of this kind of engagement-driven ranking appears after this entry.)
  • The “like” button (2009) Initially known as the “awesome” button, the icon was designed to unleash a wave of positivity online. But its addictive properties became so troubling that one of its creators, Leah Pearlman, has since renounced it. “Do you know that episode of Black Mirror where everyone is obsessed with likes?” she told Vice last year. “I suddenly felt terrified of becoming those people — as well as thinking I’d created that environment for everyone else.”
  • Pull-to-refresh (2009) Developed by software developer Loren Brichter for an iPhone app, the simple gesture — scrolling downward at the top of a feed to fetch more data — has become an endless, involuntary tic. “Pull-to-refresh is addictive,” Brichter told The Guardian last year. “I regret the downsides.”
  • Pop-up ads (1996) While working at an early blogging platform, Ethan Zuckerman came up with the now-ubiquitous tool for separating ads from content that advertisers might find objectionable. “I really did not mean to break the internet,” he told the podcast Reply All. “I really did not mean to bring this horrible thing into people’s lives. I really am extremely sorry about this.”
  • The Silicon Valley dream was born of the counterculture. A generation of computer programmers and designers flocked to the Bay Area’s tech scene in the 1970s and ’80s, embracing new technology as a tool to transform the world for good.
  •  
    The internet in 15 steps, from its construction to today: the views and regrets of those who built it... [...] "Things That Ruined the Internet": cookies (1994) / the Farmville vulnerability (2007) / algorithmic sorting (2006) / the "like" button (2009) / pull-to-refresh (2009) / pop-up ads (1996) [...]
Aurialie Jublin
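Chaslot's account of the recommendation engine and the "Algorithmic sorting" entry above describe the same mechanic: rank whatever maximizes a single engagement proxy, such as predicted watch time. The toy sketch below illustrates that loop; it is not YouTube's or Facebook's actual code, and the topic-matching score is a deliberately crude, hypothetical stand-in for a real prediction model.

```python
from collections import Counter

def recommend(history, candidates, k=5):
    """Rank candidate videos by a crude proxy for predicted watch time:
    how closely each candidate's topic matches what the user already watched.
    Illustrative toy only, not any platform's real system."""
    topic_counts = Counter(video["topic"] for video in history)
    total = sum(topic_counts.values()) or 1

    def predicted_watch_time(video):
        # The more a topic dominates the history, the higher the score,
        # so the feed narrows toward whatever the user clicked first.
        return video["duration"] * topic_counts[video["topic"]] / total

    return sorted(candidates, key=predicted_watch_time, reverse=True)[:k]

# One cat video in the history is enough to push cat content to the top;
# each recommended-and-watched cat video then reinforces the loop.
history = [{"topic": "cats", "duration": 300}]
candidates = [
    {"topic": "cats", "duration": 600},
    {"topic": "news", "duration": 600},
    {"topic": "cooking", "duration": 600},
]
print(recommend(history, candidates, k=2))
```

Because the score only rewards topics the user has already watched, the ranking converges on a single kind of content; the extra user control Chaslot says he proposed would amount to adding a diversity or exploration term to this ranking, at the cost of some predicted watch time.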

Appel aux ingénieurs face à l'urgence climatique - C'est maintenant. - 0 views

  • Like this remark from a colleague: "Can you believe it, this generation of brilliant engineers devoting their time to getting people to click on ads…" Why? Google, Apple, Facebook and Amazon shape our daily lives, outgrow nation states and are now forces shaking our societies: tax avoidance, control of personal data, upheaval of the news, consumption pushed by advertising and planned obsolescence. Why?
  • The Engineer's Code of Ethics (Charte d'Éthique de l'Ingénieur) reads, among other things: "The engineer is […] a driver of progress." "[He] raises awareness of the impact of technical achievements on the environment." "The engineer seeks to achieve the best result […] by integrating the […] economic, […] social and environmental dimensions." "[He] knows how to admit his mistakes […] and draw lessons from them for the future."
  • The internet is 30 years old. But according to its inventor, "his creation has escaped him." Today, data and artificial intelligence are heralded as the oil of the 21st century and its engine. Perhaps truer than we think: digital technology already generates more emissions than civil aviation. Let us think hard about these questions so that this new creation does not escape us as well.
  • ...4 more annotations...
  • Let us set aside time to keep ourselves informed. Let us remember to look up from the handlebars, ask questions, examine our fascinations, discuss, keep a finger on the pulse of the world. Let's not wait until we are 30. The time is now.
  • And, with ethics and awareness, let us act, collectively. There is a world to reinvent, if we learn from the past and have the courage to dare in the present. The time is now.
  • Project Drawdown, a panel of experts and scientists, has also assessed the most promising initiatives for massively reducing greenhouse-gas emissions. Its ranking of the 100 highest-impact initiatives is a gold mine of fairly specific ideas.
  • We can also imagine new models for a sustainable economy: developing the circular economy and the second-hand market, for example. Do you know the Blue Economy? Rethinking the services and infrastructure we use as shared platforms, serving the collective interest rather than private interests.
  •  
    "Il y a quelques semaines, j'ai été contactée par les alumni de mon école d'ingénieur - Télécom Paristech - pour venir témoigner de mon expérience lors de la cérémonie de remise des diplômes de la promotion 2018, et donner quelques conseils aux jeunes diplômés à l'aube de leur carrière. Alors moi-même en pleine transition professionnelle suite à ma prise de conscience accélérée de l'urgence climatique, je me suis dit que décidément oui, j'avais quelques messages qui me semblaient potentiellement utiles, et que j'aurais sûrement aimé entendre à la cérémonie de remise de diplôme que je n'ai pas vécue il y a 13 ans de cela."
Aurialie Jublin

Cambridge analytica: «Le scandale a montré que les politiciens sont aussi mal... - 0 views

  • Did the Cambridge Analytica scandal change the way we see democracy? The link with democracy is not direct. I don't believe that a few finely tuned posts could have heavily altered the outcome of the elections. People trying to change the outcome of elections is as old as the hills; they are called candidates. What Cambridge Analytica showed is something more troubling. To put it bluntly: there are several families of crooks who tamper with personal data to do unhealthy things. And I note that politicians are just as dishonest as the rest. I find it extremely troubling that the capitalist who wants to make money selling products and the politician who wants to get elected at any cost end up in the same category. What endangers democracy is not so much that some people do whatever they like with personal data, but that politicians find it normal to resort to these methods.
  • One year on, do we have more effective levers against these abuses? The levers exist, but we barely use them. The European regulation on personal data (the GDPR) forbids using people's personal data for just anything. People must give their consent to each use made of their data. For every use, permission must be asked. At present this is not respected, since your personal data is used to target advertising. I don't think any human being on earth has ever explicitly said: "Yes, I want you to analyse my private life so you can target me better." It is always more or less implicit, more or less buried in a contract. If European law were applied firmly, this would be prohibited. The European texts should say: "Targeted advertising is prohibited." They are written differently because of lobbying in the European Parliament, in the hope of gaining a few years, a few exceptions, a few favours, a few tolerated abuses… (A minimal sketch of what per-use consent means in practice appears after this entry.)
  • In your view, Cambridge Analytica did not change the course of Donald Trump's election. Why do you think that? Donald Trump got himself elected with electoral methods of spectacular dishonesty. At best, Cambridge Analytica may have convinced a few conservative Republicans to go and vote when they did not much feel like it. I don't believe Cambridge Analytica had more effect than a campaign rally. What is troubling is doing it dishonestly. The purpose of these platforms is to manipulate you; that is how they earn their living. Facebook's customer is the advertiser, and Facebook's goal is to satisfy its customers. The platforms could manipulate you to deliver you into the hands of politicians. France is working flat out on this. When we are told "the state is going to work with Facebook to hunt down fake news," that is extremely worrying. A government decides what is true and what is false, and agrees with the most powerful manipulators to make you accept that the government's truth is true and the government's falsehood is false. That is manipulation on a very large scale.
  • ...3 more annotations...
  • How would you describe the democracy of tomorrow? For some thirty years there has been no real political debate. Every government since 1983 has backed the liberal market economy and has explained that it is not doing politics but good management. Fundamentally, politics means arbitrating between diverging interests in society. You have to resolve the conflict between two groups that oppose each other on an issue. On one side there are people who would like to protect their privacy; on the other there is Facebook, which would rather they didn't. The two clash, and politicians have to decide. A conflict is not unhealthy, but as long as the terms of a conflict have not been laid out, we do not know who is trying to obtain what, and we get implicit arbitration instead of explicit arbitration.
  • In the future, will democracy rediscover political debate? I think it will rediscover it through the internet. Not necessarily through social networks as we know them today, though that is not ruled out. The internet as a communication tool that lets everyone speak at any time and shape political issues. These are not necessarily very complex issues; it is about identifying the problem and identifying one's adversary.
  • On the same roundabout you have people whose interests diverge. They avoid talking about immigration, taxes, liberalism, left and right… But at some point they will start discussing these things again. I think the democracy of the future will look like that: we will start talking again, and identifying whom we depend on and whom we need.
  •  
    "Benjamin Bayart, cofondateur de la Quadrature du Net, revient sur le scandale Cambridge analytica et ouvre les portes à la démocratie du futur"
Aurialie Jublin
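Bayart's point that permission must be asked for every use of personal data can be made concrete with a small sketch. The consent ledger below is a minimal illustration of per-purpose consent, with hypothetical purpose names; it is a sketch of the principle, not a statement of what the GDPR requires in implementation terms.

```python
# Per-purpose consent ledger: processing is allowed only for purposes the
# user has explicitly opted into. Names and structure are illustrative.
CONSENTS = {
    "alice": {"account_security": True, "ad_targeting": False},
}

def may_process(user_id, purpose):
    """Return True only if this user granted consent for this exact purpose."""
    return CONSENTS.get(user_id, {}).get(purpose, False)

def build_ad_profile(user_id, browsing_history):
    if not may_process(user_id, "ad_targeting"):
        raise PermissionError("no consent recorded for ad targeting")
    # Profiling would only ever run past this point.
    return {"user": user_id, "interests": sorted(set(browsing_history))}
```

Under such a rule, building an advertising profile from browsing history simply cannot start unless the person has opted into that specific purpose, which is exactly the situation Bayart argues does not exist today.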

Pour «liker», veuillez patienter - Libération - 0 views

  •  
    Machines do not know the concept of truth, and the speed that governs our reactions on social networks keeps us from really thinking about it. Why not impose a delay before we can click the thumbs-up? (A minimal sketch of such a delay gate appears after this entry.)
Aurialie Jublin
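The Libération piece asks why platforms could not impose a waiting period before a like is accepted. Below is a minimal sketch of such a delay gate, assuming a server-side check and an illustrative 30-second threshold; the function names are invented for the example.

```python
import time

OPEN_TIMES = {}      # (user_id, post_id) -> when the reader opened the post
REQUIRED_DELAY = 30  # seconds to wait before a like is accepted (illustrative)

def open_post(user_id, post_id):
    OPEN_TIMES[(user_id, post_id)] = time.monotonic()

def try_like(user_id, post_id):
    """Accept the like only if the reader has had the post open long enough."""
    opened = OPEN_TIMES.get((user_id, post_id))
    if opened is None:
        return False, "open the post before liking it"
    elapsed = time.monotonic() - opened
    if elapsed < REQUIRED_DELAY:
        return False, f"please wait {REQUIRED_DELAY - elapsed:.0f}s more"
    return True, "like recorded"
```

The exact threshold matters less than the design choice it represents: the speed of the reaction becomes a parameter the platform sets, rather than a given.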

Facebook while black: Users call it getting 'Zucked,' say talking about racism is censo... - 0 views

  • Black activists say hate speech policies and content moderation systems formulated by a company built by and dominated by white men fail the very people Facebook claims it's trying to protect. Not only are the voices of marginalized groups disproportionately stifled, Facebook rarely takes action on repeated reports of racial slurs, violent threats and harassment campaigns targeting black users, they say.
  • For Wysinger, an activist whose podcast The C-Dubb Show frequently explores anti-black racism, the troubling episode recalled the nation's dark history of lynching, when charges of sexual violence against a white woman were used to justify mob murders of black men. "White men are so fragile," she fired off, sharing William's post with her friends, "and the mere presence of a black person challenges every single thing in them." It took just 15 minutes for Facebook to delete her post for violating its community standards for hate speech. And she was warned if she posted it again, she'd be banned for 72 hours.
  • So to avoid being flagged, they use digital slang such as "wypipo," emojis or hashtags to elude Facebook's computer algorithms and content moderators. They operate under aliases and maintain back-up accounts to avoid losing content and access to their community. And they've developed a buddy system to alert friends and followers when a fellow black activist has been sent to Facebook jail, sharing the news of the suspension and the posts that put them there. (A sketch after this entry illustrates why context-blind keyword filters behave this way.)
  • ...4 more annotations...
  • They call it getting "Zucked" and black activists say these bans have serious repercussions, not just cutting people off from their friends and family for hours, days or weeks at a time, but often from the Facebook pages they operate for their small businesses and nonprofits.
  • A couple of weeks ago, Black Lives Matter organizer Tanya Faison had one of her posts removed as hate speech. "Dear white people," she wrote in the post, "it is not my job to educate you or to donate my emotional labor to make sure you are informed. If you take advantage of that time and labor, you will definitely get the elbow when I see you." After being alerted by USA TODAY, Facebook apologized to Faison and reversed its decision.
  • "Black people are punished on Facebook for speaking directly to the racism we have experienced," says Seattle black anti-racism consultant and conceptual artist Natasha Marin.
  • There are just too many sensitive decisions for Facebook to make them all on its own, Zuckerberg said last month. A string of violent attacks, including a mass shooting at two mosques in New Zealand, recently forced Facebook to reckon with the scourge of white nationalist content on its platform. "Lawmakers often tell me we have too much power over speech," Zuckerberg wrote, "and frankly I agree."  In late 2017 and early 2018, Facebook explored whether certain groups should be afforded more protection than others. For now, the company has decided to maintain its policy of protecting all racial and ethnic groups equally, even if they do not face oppression or marginalization, says Neil Potts, public policy director at Facebook. Applying more "nuanced" rules to the daily tidal wave of content rushing through Facebook and its other apps would be very challenging, he says.
  •  
    "STORY HIGHLIGHTS Black activists say hate speech policies and content moderation stifle marginalized groups. Mark Zuckerberg says lawmakers tell him Facebook has too much power over speech. "Frankly I agree." Civil rights groups say Facebook has not cut down on hate speech against African Americans. "
Aurialie Jublin
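The evasion tactics described above (slang such as "wypipo", emojis, aliases) work because context-blind matching looks at surface strings, not at who is speaking or why. The sketch below is a deliberately naive, hypothetical keyword filter, invented for illustration; it is not Facebook's moderation system.

```python
import re

# Hypothetical blocklist: a group term followed by a negative word, with no
# notion of speaker, target, or intent.
FLAGGED_PATTERNS = [
    re.compile(r"\bwhite (men|people)\b.*\bfragile\b", re.IGNORECASE),
]

def is_flagged(post):
    """Return True if any pattern matches, regardless of context."""
    return any(p.search(post) for p in FLAGGED_PATTERNS)

# A post describing lived racism is flagged within seconds...
print(is_flagged("White men are so fragile"))  # True
# ...while the same sentence with a slang substitution sails through.
print(is_flagged("Wypipo are so fragile"))     # False
```

This is one way to read the pattern users report: over-removal of posts that talk about racism directly, and under-removal of material phrased to dodge the strings the filter knows about.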

[Internet]MIT research scientist David Clark has a 17-point wish list for how to fix th... - 0 views

  • In Designing an Internet, he presents a 17-point wish list for a better internet compiled from policy papers, speeches, and manifestos. Catalog of Aspirations:
    1. The Internet should reach every person by some means. (Reach)
    2. The Internet should be available to us everywhere. (Ubiquity)
    3. The Internet should continue to evolve to match the pace and direction of the larger IT sector. (Evolution)
    4. The Internet should be used by more of the population. (Uptake)
    5. Cost should not be a barrier to the use of the Internet. (Affordable)
    6. The Internet should provide experiences that are sufficiently free of frustration, fears, and unpleasant experiences that people are not deterred from using it. (Trustworthy)
    7. The Internet should not be an effective space for lawbreakers. (Lawful)
    8. The Internet should not raise concerns about national security. (National Security)
    9. The Internet should be a platform for vigorous innovation and thus a driver of the economy. (Innovation)
    10. The Internet should support a wide range of services and applications. (Generality)
    11. Internet content should be accessible to all without blocking or censorship. (Unblocked)
    12. The consumer should have choices in their Internet experience. (Choice)
    13. The Internet should serve as a mechanism for the distribution of wealth among different sectors and countries. (Redistribution)
    14. The Internet (and Internet technology, whether in the public network or not) should become a unified technology platform for communication. (Unification)
    15. For any region of the globe, the behavior of the Internet should be consistent with and reflect its core cultural/political values. (Local values)
    16. The Internet should be a tool to promote social, cultural, and political values, especially universal ones. (Universal values)
    17. The Internet should be a means of communication between citizens of the world. (Global)
  • He categorizes the aspirations into three pragmatic buckets: utility, economics, and security. But the subtext of each aspiration is a longing for structures that would entice users to be better humans—an internet that is moral.
  • The notion of shutting down the internet to start anew is a fairytale. Any improvement will be incremental, hard-fought and deliberate. Clark notes that even the US Federal Communications Commission, which regulates the country’s telecommunications and broadcast media industries, isn’t interested in shaping the public character of the internet. Its solution is to encourage competition and hope for the best, as stated in its latest strategic plan. Clearly, the laissez-faire attitude hasn’t exactly worked out for the country.
  • ...2 more annotations...
  • The problem—or the “fundamental tussle,” as Clark puts it—lies in the fact that private companies, not governments control the internet today, and relying on the good conscience of profit-driven technocrats offers little assurance. “That’s a pretty inconsistent hope to lean on,” he says. “Remember, on Facebook, you’re not the customer. You’re the product served to advertisers.”
  • Ultimately, improving the internet hinges on seeing it as public service first, instead of a money-making venture. Barring one global entity with the will and the resources to enforce this, fixing it will be a kind of community project, and one of the most urgent kind.
  •  
    "This refrain echoes across all corners of the internet, and has become a general, all-purpose complaint for all of the bad things we encounter online. Trolling, fake news, dark patterns, identity theft, cyber bulling, nasty 4Chan threads are just some of the symptoms of this corruption. But the first step to fixing the internet requires an understanding of what it actually is."