QN2019 / Group items tagged "rating"


Aurialie Jublin

Facebook is rating the trustworthiness of its users on a scale from zero to 1 - The Was...

  • A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.
  • It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have a score and in what ways the scores are used.
  • But how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming — a predicament that the firms increasingly find themselves in as they weigh calls for more transparency around their decision-making.
  • Lyons said she soon realized that many people were reporting posts as false simply because they did not agree with the content. Because Facebook forwards posts that are marked as false to third-party fact-checkers, she said it was important to build systems to assess whether the posts were likely to be false to make efficient use of fact-checkers’ time. That led her team to develop ways to assess whether the people who were flagging posts as false were themselves trustworthy.
  • "Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1. The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors."
Aurialie Jublin

There is no easy fix for Facebook's reliability problem

  • "Rating news sources is a complicated task. But rating Facebook is especially hard, because of the way it's designed. Here's a look at several critical issues the company is facing."
Aurialie Jublin

14 years of Mark Zuckerberg saying sorry, not sorry about Facebook - Washington Post

  • "From the moment the Facebook founder entered the public eye in 2003 for creating a Harvard student hot-or-not rating site, he's been apologizing. So we collected this abbreviated history of his public mea culpas. It reads like a record on repeat. Zuckerberg, who made "move fast and break things" his slogan, says sorry for being naive, and then promises solutions such as privacy "controls," "transparency" and better policy "enforcement." And then he promises it again the next time. You can track his sorries in orange and promises in blue in the timeline below. All the while, Facebook's access to our personal data increases and little changes about the way Zuckerberg handles it. So as Zuckerberg prepares to apologize for the first time in front of Congress, the question that lingers is: What will be different this time?"
Aurialie Jublin

The Landlord Wants Facial Recognition in Its Rent-Stabilized Buildings. Why? - The New ...

  • The fact that the Atlantic complex already has 24-hour security in its lobbies as well as a clearly functioning camera system has only caused tenants to further question the necessity of facial recognition technology. The initiative is particularly dubious given the population of the buildings. Last year, a study out of M.I.T. and Stanford looked at the accuracy rates of some of the major facial-analysis programs on the market. It found that although the error rates for determining the gender of light-skinned men never surpassed 1 percent, the same programs failed to identify darker-skinned women up to one-third of the time.
  • The fear that marginalized groups will fall under increased surveillance as these technologies progress in the absence of laws to regulate them hardly seems like dystopian hysteria.
  • In November, the City of Detroit announced that it was introducing the use of real-time police cameras at two public-housing towers. The existing program is known as Project Greenlight, and it was designed to deter criminal behavior. But tower residents worried that relatives would be less likely to visit, given the constant stream of data collected by law enforcement.
  • "Last fall, tenants at the Atlantic Plaza Towers, a rent-stabilized apartment complex in Brooklyn, received an alarming letter in the mail. Their landlord was planning to do away with the key-fob system that allowed them entry into their buildings on the theory that lost fobs could wind up in the wrong hands and were now also relatively easy to duplicate. Instead, property managers planned to install facial recognition technology as a means of access. It would feature "an encrypted reference file" that is "only usable in conjunction with the proprietary algorithm software of the system," the letter explained, in a predictably failed effort to mitigate concerns about privacy. As it happened, not every tenant was aware of these particular Orwellian developments. New mailboxes in the buildings required new keys, and to obtain a new key you had to submit to being photographed; some residents had refused to do this and so were not getting their mail."
Aurialie Jublin

Exclusive: dramatic slowdown in global growth of internet access | Technology | The Gua...

  • The growth of internet access around the world has slowed dramatically, according to new data, suggesting the digital revolution will remain a distant dream for billions of the poorest and most isolated people on the planet.
  • In 2014 the UN predicted that half the world would be online by 2017, but the slowdown means that line will not be crossed until May 2019, only months before the UN sustainable development goal of affordable internet access for all by 2020. The UN defines being online as having used the internet from any device in any location at least once in the past three months.
  • Had growth rates held steady near the 11% average for 2005 to 2017, more than half a billion extra people would now be online. Of the 3.8 billion who remain unconnected, an alarming proportion are women. In poor urban areas, men can outnumber women on the internet as much as two to one.
  • Beyond missing out on economic opportunities, people who are unconnected are cut off from online public debates, education, social groups and the means to access digital government services such as filing taxes and applying for ID cards. “As our daily lives become increasingly digital, these offline populations will continue to be pushed farther to the margins of society,” the report states.
  • Many of those offline are in areas that are difficult, and therefore costly, to hook up to the internet. The expense puts telecoms providers off, because these communities are the least able to afford the high prices providers must charge to get a return on the investment. At the same time, the internet may have little appeal for people in the world’s most remote regions. Even if they can afford the mobile phone and data costs, they may lack the skills to go online, and even when they do, they may find little content of interest in a language they know.
  • The persistent wage gap between men and women plays a large part in the digital gender divide but is far from the only factor. “Women are more likely to be left out because of economic inequalities and to a great extent social norms,” said Nanjira Sambuli, who leads the Web Foundation’s efforts to promote equal access to the web. “In some communities the whole idea of women owning anything of their own, even a mobile phone, is frowned upon.”
  • She added: “It’s a stark reminder that technology is not a silver bullet that is going to solve inequalities that exist and have continued to exist because of real factors that need to be addressed. These are challenges that have been kicked down the road.”
  • "Report showing dramatic decline in internet access growth suggests digital revolution will remain a distant dream for billions of people"
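The article's "half a billion extra people would now be online" claim is a compound-growth comparison. As a back-of-envelope sketch of that kind of calculation (the starting figure and the slowed rate below are illustrative assumptions, not the report's data):

```python
# Illustrative compound-growth comparison; figures are assumptions,
# not the Web Foundation report's data.
online_2017 = 3.4e9    # assumed users online in 2017
growth_steady = 0.11   # the 2005-2017 average growth rate cited in the article
growth_slowed = 0.055  # hypothetical slowed rate for comparison

def project(users, rate, years):
    """Compound user growth over the given number of years."""
    for _ in range(years):
        users *= 1 + rate
    return users

# Two years of growth (2017 -> 2019) under each scenario.
steady = project(online_2017, growth_steady, 2)
slowed = project(online_2017, growth_slowed, 2)
print(f"Gap after two years: {(steady - slowed) / 1e9:.2f} billion users")
```

Because the growth compounds, even a few years at a halved rate opens a gap of hundreds of millions of users, which is the scale the article describes.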
Aurialie Jublin

Let's make private data into a public good - MIT Technology Review

  • Why is this a problem? Well, maybe because these giants are making huge profits from technologies originally created with taxpayer money. Google’s algorithm was developed with funding from the National Science Foundation, and the internet came from DARPA funding. The same is true for touch-screen displays, GPS, and Siri. From this the tech giants have created de facto monopolies while evading the type of regulation that would rein in monopolies in any other industry. And their business model is built on taking advantage of the habits and private information of the taxpayers who funded the technologies in the first place.
  • Apologists like to portray the internet giants as forces for good. They praise the sharing economy in which digital platforms empower people via free access to everything from social networking to GPS navigation to health monitoring. But Google doesn’t give us anything for free. It’s really the other way around—we’re handing over to Google exactly what it needs. When you use Google’s services it might feel as if you’re getting something for nothing, but you’re not even the customer—you’re the product. The bulk of Google’s profits come from selling advertising space and users’ data to firms. Facebook’s and Google’s business models are built on the commodification of personal data, transforming our friendships, interests, beliefs, and preferences into sellable propositions.
  • And because of network effects, the new gig economy doesn’t spread the wealth so much as concentrate it even more in the hands of a few firms (see Rein in the Data Barons). Like the internal-combustion engine or the QWERTY keyboard, a company that establishes itself as the leader in a market achieves a dominance that becomes self-perpetuating almost automatically.
  • The low tax rates that technology companies are typically paying on these large rewards are also perverse, given that their success was built on technologies funded and developed by high-risk public investments: if anything, companies that owe their fortunes to taxpayer-funded investment should be repaying the taxpayer, not seeking tax breaks.
  • We should ask how the value of these companies has been created, how that value has been measured, and who benefits from it. If we go by national accounts, the contribution of internet platforms to national income (as measured, for example, by GDP) is represented by the advertisement-related services they sell. But does that make sense? It’s not clear that ads really contribute to the national product, let alone to social well-being—which should be the aim of economic activity. Measuring the value of a company like Google or Facebook by the number of ads it sells is consistent with standard neoclassical economics, which interprets any market-based transaction as signaling the production of some kind of output—in other words, no matter what the thing is, as long as a price is received, it must be valuable. But in the case of these internet companies, that’s misleading: if online giants contribute to social well-being, they do it through the services they provide to users, not through the accompanying advertisements.
  • This way we have of ascribing value to what the internet giants produce is completely confusing, and it’s generating a paradoxical result: their advertising activities are counted as a net contribution to national income, while the more valuable services they provide to users are not.
  • Let’s not forget that a large part of the technology and necessary data was created by all of us, and should thus belong to all of us. The underlying infrastructure that all these companies rely on was created collectively (via the tax dollars that built the internet), and it also feeds off network effects that are produced collectively. There is indeed no reason why the public’s data should not be owned by a public repository that sells the data to the tech giants, rather than vice versa. But the key issue here is not just sending a portion of the profits from data back to citizens but also allowing them to shape the digital economy in a way that satisfies public needs. Using big data and AI to improve the services provided by the welfare state—from health care to social housing—is just one example.
  • Only by thinking about digital platforms as collective creations can we construct a new model that offers something of real value, driven by public purpose. We’re never far from a media story that stirs up a debate about the need to regulate tech companies, which creates a sense that there’s a war between their interests and those of national governments. We need to move beyond this narrative. The digital economy must be subject to the needs of all sides; it’s a partnership of equals where regulators should have the confidence to be market shapers and value creators. 
  • "The internet giants depend on our data. A new relationship between us and them could deliver real value to society."
Aurialie Jublin

Comunes collective

  • Ourproject.org is a web-based collaborative free content repository. It acts as a central location for offering web space and tools for projects of any topic, focusing on free knowledge. It aims to extend the ideas and methodology of free software to social areas and free culture in general. Thus, it provides multiple web services (hosting, mailing lists, wiki, ftp, forums…) to social/cultural/artistic projects as long as they share their contents with Creative Commons licenses (or other free/libre licenses). Active since 2002, nowadays it hosts 1,733 projects and its services receive around 1,000,000 monthly visits.
  • Kune is a platform for encouraging collaboration, content sharing & free culture. It aims to improve, modernize and extend the work that ourproject.org does, in an easier manner and with expanded features for community-building. It allows for the creation of online spaces of collaborative work, where organizations and individuals can build projects online, coordinate common agendas, set up virtual meetings and join people/orgs with similar interests. It combines the characteristics of online social networks with collaborative software, aimed at groups and at boosting the sharing of content among orgs/peers.
  • Move Commons (MC) is a simple web tool for initiatives, collectives and NGOs to declare and visualize the core principles they are committed to. The idea behind MC follows the same mechanics of Creative Commons tagging cultural works, providing a user-friendly, bottom-up, labeling system for each initiative with 4 meaningful icons and some keywords. It aims to boost the visibility and diffusion of such initiatives, building a network among related initiatives/collectives across the world and allowing mutual discovery.
  • Other projects: Alerta! is a community-driven alert system; Plantaré is a community currency for seed exchange; The World of Alternatives is a proof-of-concept initiative that aims to collectively classify and document in Wikipedia the alternatives of our “Another World is Possible”; Karma is a proof-of-concept gadget for a decentralized reputation rating system; Massmob is a proof-of-concept gadget for calling and organizing meetings and smart mobs; Troco is a proof-of-concept gadget for a peer-to-peer currency; Brick (temporary nickname) is a forthcoming initiative for guiding student assignments toward the solution of real problems and the sharing of their results so the solutions can be reused, replicated and adapted; and Ideas (temporary nickname) is a forthcoming initiative for brainstorming ideas of possible social projects related to the Commons.
  • "Comunes is a non-profit collective dedicated to facilitating the use of free/libre web tools and resources to collectives and activists alike, with the hopes of encouraging the Commons."
Aurialie Jublin

An Apology for the Internet - From the People Who Built It

  • There have always been outsiders who criticized the tech industry — even if their concerns have been drowned out by the oohs and aahs of consumers, investors, and journalists. But today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being “weaponized.” Even Sean Parker, Facebook’s first president, has blasted social media as a dangerous form of psychological manipulation. “God only knows what it’s doing to our children’s brains,” he lamented recently.
  • To keep the internet free — while becoming richer, faster, than anyone in history — the technological elite needed something to attract billions of users to the ads they were selling. And that something, it turns out, was outrage. As Jaron Lanier, a pioneer in virtual reality, points out, anger is the emotion most effective at driving “engagement” — which also makes it, in a market for attention, the most profitable one. By creating a self-perpetuating loop of shock and recrimination, social media further polarized what had already seemed, during the Obama years, an impossibly and irredeemably polarized country.
  • The Architects (In order of appearance.) Jaron Lanier, virtual-reality pioneer. Founded first company to sell VR goggles; worked at Atari and Microsoft. Antonio García Martínez, ad-tech entrepreneur. Helped create Facebook’s ad machine. Ellen Pao, former CEO of Reddit. Filed major gender-discrimination lawsuit against VC firm Kleiner Perkins. Can Duruk, programmer and tech writer. Served as project lead at Uber. Kate Losse, Facebook employee No. 51. Served as Mark Zuckerberg’s speechwriter. Tristan Harris, product designer. Wrote internal Google presentation about addictive and unethical design. Rich “Lowtax” Kyanka, entrepreneur who founded influential message board Something Awful. Ethan Zuckerman, MIT media scholar. Invented the pop-up ad. Dan McComas, former product chief at Reddit. Founded community-based platform Imzy. Sandy Parakilas, product manager at Uber. Ran privacy compliance for Facebook apps. Guillaume Chaslot, AI researcher. Helped develop YouTube’s algorithmic recommendation system. Roger McNamee, VC investor. Introduced Mark Zuckerberg to Sheryl Sandberg. Richard Stallman, MIT programmer. Created legendary software GNU and Emacs.
  • How It Went Wrong, in 15 Steps Step 1 Start With Hippie Good Intentions …
  • I think two things are at the root of the present crisis. One was the idealistic view of the internet — the idea that this is the great place to share information and connect with like-minded people. The second part was the people who started these companies were very homogeneous. You had one set of experiences, one set of views, that drove all of the platforms on the internet. So the combination of this belief that the internet was a bright, positive place and the very similar people who all shared that view ended up creating platforms that were designed and oriented around free speech.
  • Step 2 … Then mix in capitalism on steroids. To transform the world, you first need to take it over. The planetary scale and power envisioned by Silicon Valley’s early hippies turned out to be as well suited for making money as they were for saving the world.
  • Step 3 The arrival of Wall Streeters didn’t help … Just as Facebook became the first overnight social-media success, the stock market crashed, sending money-minded investors westward toward the tech industry. Before long, a handful of companies had created a virtual monopoly on digital life.
  • Ethan Zuckerman: Over the last decade, the social-media platforms have been working to make the web almost irrelevant. Facebook would, in many ways, prefer that we didn’t have the internet. They’d prefer that we had Facebook.
  • Step 4 … And we paid a high price for keeping it free. To avoid charging for the internet — while becoming fabulously rich at the same time — Silicon Valley turned to digital advertising. But to sell ads that target individual users, you need to grow a big audience — and use advancing technology to gather reams of personal data that will enable you to reach them efficiently.
  • Harris: If you’re YouTube, you want people to register as many accounts as possible, uploading as many videos as possible, driving as many views to those videos as possible, so you can generate lots of activity that you can sell to advertisers. So whether or not the users are real human beings or Russian bots, whether or not the videos are real or conspiracy theories or disturbing content aimed at kids, you don’t really care. You’re just trying to drive engagement to the stuff and maximize all that activity. So everything stems from this engagement-based business model that incentivizes the most mindless things that harm the fabric of society.
  • Step 5 Everything was designed to be really, really addictive. The social-media giants became “attention merchants,” bent on hooking users no matter the consequences. “Engagement” was the euphemism for the metric, but in practice it evolved into an unprecedented machine for behavior modification.
  • Harris: That blue Facebook icon on your home screen is really good at creating unconscious habits that people have a hard time extinguishing. People don’t see the way that their minds are being manipulated by addiction. Facebook has become the largest civilization-scale mind-control machine that the world has ever seen.
  • Step 6 At first, it worked — almost too well. None of the companies hid their plans or lied about how their money was made. But as users became deeply enmeshed in the increasingly addictive web of surveillance, the leading digital platforms became wildly popular.
  • Pao: There’s this idea that, “Yes, they can use this information to manipulate other people, but I’m not gonna fall for that, so I’m protected from being manipulated.” Slowly, over time, you become addicted to the interactions, so it’s hard to opt out. And they just keep taking more and more of your time and pushing more and more fake news. It becomes easy just to go about your life and assume that things are being taken care of.
  • McNamee: If you go back to the early days of propaganda theory, Edward Bernays had a hypothesis that to implant an idea and make it universally acceptable, you needed to have the same message appearing in every medium all the time for a really long period of time. The notion was it could only be done by a government. Then Facebook came along, and it had this ability to personalize for every single user. Instead of being a broadcast model, it was now 2.2 billion individualized channels. It was the most effective product ever created to revolve around human emotions.
  • Step 7 No one from Silicon Valley was held accountable … No one in the government — or, for that matter, in the tech industry’s user base — seemed interested in bringing such a wealthy, dynamic sector to heel.
  • Step 8 … Even as social networks became dangerous and toxic. With companies scaling at unprecedented rates, user security took a backseat to growth and engagement. Resources went to selling ads, not protecting users from abuse.
  • Lanier: Every time there’s some movement like Black Lives Matter or #MeToo, you have this initial period where people feel like they’re on this magic-carpet ride. Social media is letting them reach people and organize faster than ever before. They’re thinking, Wow, Facebook and Twitter are these wonderful tools of democracy. But it turns out that the same data that creates a positive, constructive process like the Arab Spring can be used to irritate other groups. So every time you have a Black Lives Matter, social media responds by empowering neo-Nazis and racists in a way that hasn’t been seen in generations. The original good intention winds up empowering its opposite.
  • Chaslot: As an engineer at Google, I would see something weird and propose a solution to management. But just noticing the problem was hurting the business model. So they would say, “Okay, but is it really a problem?” They trust the structure. For instance, I saw this conspiracy theory that was spreading. It’s really large — I think the algorithm may have gone crazy. But I was told, “Don’t worry — we have the best people working on it. It should be fine.” Then they conclude that people are just stupid. They don’t want to believe that the problem might be due to the algorithm.
  • Parakilas: One time a developer who had access to Facebook’s data was accused of creating profiles of people without their consent, including children. But when we heard about it, we had no way of proving whether it had actually happened, because we had no visibility into the data once it left Facebook’s servers. So Facebook had policies against things like this, but it gave us no ability to see what developers were actually doing.
  • McComas: Ultimately the problem Reddit has is the same as Twitter: By focusing on growth and growth only, and ignoring the problems, they amassed a large set of cultural norms on their platforms that stem from harassment or abuse or bad behavior. They have worked themselves into a position where they’re completely defensive and they can just never catch up on the problem. I don’t see any way it’s going to improve. The best they can do is figure out how to hide the bad behavior from the average user.
  • Step 9 … And even as they invaded our privacy. The more features Facebook and other platforms added, the more data users willingly, if unwittingly, released to them and the data brokers who power digital advertising.
  • Richard Stallman: What is data privacy? That means that if a company collects data about you, it should somehow protect that data. But I don’t think that’s the issue. The problem is that these companies are collecting data about you, period. We shouldn’t let them do that. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical extreme likelihood, which is enough to make collection a problem.
  • Losse: I’m not surprised at what’s going on now with Cambridge Analytica and the scandal over the election. For a long time, the accepted idea at Facebook was: Giving developers as much data as possible to make these products is good. But to think that, you also have to not think about the data implications for users. That’s just not your priority.
  • Step 10 Then came 2016. The election of Donald Trump and the triumph of Brexit, two campaigns powered in large part by social media, demonstrated to tech insiders that connecting the world — at least via an advertising-surveillance scheme — doesn’t necessarily lead to that hippie utopia.
  • Chaslot: I realized personally that things were going wrong in 2011, when I was working at Google. I was working on this YouTube recommendation algorithm, and I realized that the algorithm was always giving you the same type of content. For instance, if I give you a video of a cat and you watch it, the algorithm thinks, Oh, he must really like cats. That creates these filter bubbles where people just see one type of information. But when I notified my managers at Google and proposed a solution that would give a user more control so he could get out of the filter bubble, they realized that this type of algorithm would not be very beneficial for watch time. They didn’t want to push that, because the entire business model is based on watch time.
  • Step 11 Employees are starting to revolt. Tech-industry executives aren’t likely to bite the hand that feeds them. But maybe their employees — the ones who signed up for the mission as much as the money — can rise up and make a change.
  • Harris: There’s a massive demoralizing wave that is hitting Silicon Valley. It’s getting very hard for companies to attract and retain the best engineers and talent when they realize that the automated system they’ve built is causing havoc everywhere around the world. So if Facebook loses a big chunk of its workforce because people don’t want to be part of that perverse system anymore, that is a very powerful and very immediate lever to force them to change.
  • Duruk: I was at Uber when all the madness was happening there, and it did affect recruiting and hiring. I don’t think these companies are going to go down because they can’t attract the right talent. But there’s going to be a measurable impact. It has become less of a moral positive now — you go to Facebook to write some code and then you go home. They’re becoming just another company.
  • Step 12 To fix it, we’ll need a new business model … If the problem is in the way the Valley makes money, it’s going to have to make money a different way. Maybe by trying something radical and new — like charging users for goods and services.
  • Parakilas: They’re going to have to change their business model quite dramatically. They say they want to make time well spent the focus of their product, but they have no incentive to do that, nor have they created a metric by which they would measure that. But if Facebook charged a subscription instead of relying on advertising, then people would use it less and Facebook would still make money. It would be equally profitable and more beneficial to society. In fact, if you charged users a few dollars a month, you would equal the revenue Facebook gets from advertising. It’s not inconceivable that a large percentage of their user base would be willing to pay a few dollars a month.
  • Step 13 … And some tough regulation. Mark Zuckerberg testifying before Congress on April 10. Photo: Jim Watson/AFP/Getty Images While we’re at it, where has the government been in all this? 
  • Stallman: We need a law. Fuck them — there’s no reason we should let them exist if the price is knowing everything about us. Let them disappear. They’re not important — our human rights are important. No company is so important that its existence justifies setting up a police state. And a police state is what we’re heading toward.
  • Duruk: The biggest existential problem for them would be regulation. Because it’s clear that nothing else will stop these companies from using their size and their technology to just keep growing. Without regulation, we’ll basically just be complaining constantly, and not much will change.
  • McNamee: Three things. First, there needs to be a law against bots and trolls impersonating other people. I’m not saying no bots. I’m just saying bots have to be really clearly marked. Second, there have to be strict age limits to protect children. And third, there has to be genuine liability for platforms when their algorithms fail. If Google can’t block the obviously phony story that the kids in Parkland were actors, they need to be held accountable.
  • Stallman: We need a law that requires every system to be designed in a way that achieves its basic goal with the least possible collection of data. Let’s say you want to ride in a car and pay for the ride. That doesn’t fundamentally require knowing who you are. So services which do that must be required by law to give you the option of paying cash, or using some other anonymous-payment system, without being identified. They should also have ways you can call for a ride without identifying yourself, without having to use a cell phone. Companies that won’t go along with this — well, they’re welcome to go out of business. Good riddance.
  • Step 14 Maybe nothing will change. The scariest possibility is that nothing can be done — that the behemoths of the new internet are too rich, too powerful, and too addictive for anyone to fix.
  • García: Look, I mean, advertising sucks, sure. But as the ad tech guys say, “We’re the people who pay for the internet.” It’s hard to imagine a business model other than advertising for any consumer internet app that depends on network effects.
  • Step 15 … Unless, at the very least, some new people are in charge. If Silicon Valley’s problems are a result of bad decision-making, it might be time to look for better decision-makers. One place to start would be outside the homogeneous group currently in power.
  • Pao: I’ve urged Facebook to bring in people who are not part of a homogeneous majority to their executive team, to every product team, to every strategy discussion. The people who are there now clearly don’t understand the impact of their platforms and the nature of the problem. You need people who are living the problem to clarify the extent of it and help solve it.
  • Things That Ruined the Internet
  • Cookies (1994) The original surveillance tool of the internet. Developed by programmer Lou Montulli to eliminate the need for repeated log-ins, cookies also enabled third parties like Google to track users across the web. The risk of abuse was low, Montulli thought, because only a “large, publicly visible company” would have the capacity to make use of such data. The result: digital ads that follow you wherever you go online.
  • The Farmville vulnerability (2007) When Facebook opened up its social network to third-party developers, enabling them to build apps that users could share with their friends, it inadvertently opened the door a bit too wide. By tapping into user accounts, developers could download a wealth of personal data — which is exactly what a political-consulting firm called Cambridge Analytica did to 87 million Americans.
  • Algorithmic sorting (2006) It’s how the internet serves up what it thinks you want — automated calculations based on dozens of hidden metrics. Facebook’s News Feed uses it every time you hit refresh, and so does YouTube. It’s highly addictive — and it keeps users walled off in their own personalized loops. “When social media is designed primarily for engagement,” tweets Guillaume Chaslot, the engineer who designed YouTube’s algorithm, “it is not surprising that it hurts democracy and free speech.”
  • The “like” button (2009) Initially known as the “awesome” button, the icon was designed to unleash a wave of positivity online. But its addictive properties became so troubling that one of its creators, Leah Pearlman, has since renounced it. “Do you know that episode of Black Mirror where everyone is obsessed with likes?” she told Vice last year. “I suddenly felt terrified of becoming those people — as well as thinking I’d created that environment for everyone else.”
  • Pull-to-refresh (2009) Developed by software developer Loren Brichter for an iPhone app, the simple gesture — scrolling downward at the top of a feed to fetch more data — has become an endless, involuntary tic. “Pull-to-refresh is addictive,” Brichter told The Guardian last year. “I regret the downsides.”
  • Pop-up ads (1996) While working at an early blogging platform, Ethan Zuckerman came up with the now-ubiquitous tool for separating ads from content that advertisers might find objectionable. “I really did not mean to break the internet,” he told the podcast Reply All. “I really did not mean to bring this horrible thing into people’s lives. I really am extremely sorry about this.”
  • The Silicon Valley dream was born of the counterculture. A generation of computer programmers and designers flocked to the Bay Area’s tech scene in the 1970s and ’80s, embracing new technology as a tool to transform the world for good.
  •   The internet in 15 steps, from its construction to today: the views and regrets of those who built it... [...] "Things That Ruined the Internet": cookies (1994) / the Farmville vulnerability (2007) / algorithmic sorting (2006) / the "like" button (2009) / "pull to refresh" (2009) / pop-up ads (1996) [...]