TOK Friends: Group items tagged "algorithms"


pier-paolo

Computers Already Learn From Us. But Can They Teach Themselves? - The New York Times

  • We teach computers to see patterns, much as we teach children to read. But the future of A.I. depends on computer systems that learn on their own, without supervision, researchers say.
  • When a mother points to a dog and tells her baby, “Look at the doggy,” the child learns what to call the furry four-legged friends. That is supervised learning. But when that baby stands and stumbles, again and again, until she can walk, that is something else. Computers are the same.
  • Even if a supervised learning system read all the books in the world, he noted, it would still lack human-level intelligence because so much of our knowledge is never written down.
  • Supervised learning depends on annotated data: images, audio or text that is painstakingly labeled by hordes of workers. They circle people or outline bicycles on pictures of street traffic. The labeled data is fed to computer algorithms, teaching the algorithms what to look for. After ingesting millions of labeled images, the algorithms become expert at recognizing what they have been taught to see (see the sketch after this list).
  • There is also reinforcement learning, with very limited supervision, that does not rely on training data. Reinforcement learning in computer science is modeled after reward-driven learning in the brain: think of a rat learning to push a lever to receive a pellet of food. The strategy has been developed to teach computer systems to take actions.
  • “My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.
  • A more inclusive term for the future of A.I., he said, is “predictive learning,” meaning systems that not only recognize patterns but also predict outcomes and choose a course of action. “Everybody agrees we need predictive learning, but we disagree about how to get there,”
  • “A huge fraction of what we do in our day-to-day jobs is constantly refining our mental models of the world and then using those mental models to solve problems,” he said. “That encapsulates an awful lot of what we’d like A.I. to do.”
  • Currently, robots can operate only in well-defined environments with little variation.
  • “Our working assumption is that if we build sufficiently general algorithms, then all we really have to do, once that’s done, is to put them in robots that are out there in the real world doing real things,”
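The distinction drawn in the annotations above can be made concrete with a toy sketch. In the supervised half below, every example carries a human-provided label; in the self-supervised half, the "labels" are manufactured from the raw data itself (each word predicting the word that follows it). The data, features and tiny models are invented for illustration only and are not the systems the researchers describe.

# Toy contrast between supervised and self-supervised learning (illustrative only).
from collections import Counter, defaultdict

# --- Supervised: every example needs a human-provided label ---
labeled_images = [((220, 180, 40), "dog"), ((60, 200, 220), "bicycle"),
                  ((230, 170, 50), "dog"), ((70, 190, 210), "bicycle")]

def nearest_label(features, examples):
    """Classify by the closest labeled example (a crude supervised learner)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(features, ex[0]))[1]

print(nearest_label((225, 175, 45), labeled_images))  # -> dog

# --- Self-supervised: the data labels itself ---
text = "the baby stands and stumbles and stumbles and stands and walks".split()
next_word = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):   # each following word serves as the "label"
    next_word[prev][nxt] += 1

print(next_word["and"].most_common(1))  # -> [('stumbles', 2)]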
Javier E

The Economic Case for Regulating Social Media - The New York Times

  • Social media platforms like Facebook, YouTube and Twitter generate revenue by using detailed behavioral information to direct ads to individual users.
  • this bland description of their business model fails to convey even a hint of its profound threat to the nation’s political and social stability.
  • legislators in Congress to propose the breakup of some tech firms, along with other traditional antitrust measures. But the main hazard posed by these platforms is not aggressive pricing, abusive service or other ills often associated with monopoly.
  • Instead, it is their contribution to the spread of misinformation, hate speech and conspiracy theories.
  • digital platforms, since the marginal cost of serving additional consumers is essentially zero. Because the initial costs of producing a platform’s content are substantial, and because any company’s first goal is to remain solvent, it cannot just give stuff away. Even so, when price exceeds marginal cost, competition relentlessly pressures rival publishers to cut prices — eventually all the way to zero. This, in a nutshell, is the publisher’s dilemma in the digital age.
  • These firms make money not by charging for access to content but by displaying it with finely targeted ads based on the specific types of things people have already chosen to view. If the conscious intent were to undermine social and political stability, this business model could hardly be a more effective weapon.
  • The algorithms that choose individual-specific content are crafted to maximize the time people spend on a platform
  • As the developers concede, Facebook’s algorithms are addictive by design and exploit negative emotional triggers. Platform addiction drives earnings, and hate speech, lies and conspiracy theories reliably boost addiction.
  • the subscription model isn’t fully efficient: Any positive fee would inevitably exclude at least some who would value access but not enough to pay the fee
  • a conservative think tank, says, for example, that government has no business second-guessing people’s judgments about what to post or read on social media.
  • That position would be easier to defend in a world where individual choices had no adverse impact on others. But negative spillover effects are in fact quite common
  • individual and collective incentives about what to post or read on social media often diverge sharply.
  • There is simply no presumption that what spreads on these platforms best serves even the individual’s own narrow interests, much less those of society as a whole.
  • a simpler step may hold greater promise: Platforms could be required to abandon that model in favor of one relying on subscriptions, whereby members gain access to content in return for a modest recurring fee.
  • Major newspapers have done well under this model, which is also making inroads in book publishing. The subscription model greatly weakens the incentive to offer algorithmically driven addictive content provided by individuals, editorial boards or other sources.
  • Careful studies have shown that Facebook’s algorithms have increased political polarization significantly
  • More worrisome, those excluded would come disproportionately from low-income groups. Such objections might be addressed specifically — perhaps with a modest tax credit to offset subscription fees — or in a more general way, by making the social safety net more generous.
  • Adam Smith, the 18th-century Scottish philosopher widely considered the father of economics, is celebrated for his “invisible hand” theory, which describes conditions under which market incentives promote socially benign outcomes. Many of his most ardent admirers may view steps to constrain the behavior of social media platforms as regulatory overreach.
  • But Smith’s remarkable insight was actually more nuanced: Market forces often promote society’s welfare, but not always. Indeed, as he saw clearly, individual interests are often squarely at odds with collective aspirations, and in many such instances it is in society’s interest to intervene. The current information crisis is a case in point.
Javier E

Don't Be Surprised About Facebook and Teen Girls. That's What Facebook Is. | Talking Po...

  • First, set aside all morality. Let’s say we have a 16-year-old girl who’s been doing searches about average weights, whether boys care if a girl is overweight and maybe some diets. She’s also spent some time on a site called AmIFat.com. Now I set you this task. You’re on the other side of the Facebook screen and I want you to get her to click on as many things as possible and spend as much time clicking or reading as possible. Are you going to show her movie reviews? Funny cat videos? Homework tips? Of course not.
  • If you’re really trying to grab her attention you’re going to show her content about really thin girls, how their thinness has gotten them the attention of boys who turn out to really love them, and more diets
  • We both know what you’d do if you were operating within the goals and structure of the experiment.
  • This is what artificial intelligence and machine learning are. Facebook is a series of algorithms and goals aimed at maximizing engagement with Facebook. That’s why it’s worth hundreds of billions of dollars. It has a vast army of computer scientists and programmers whose job it is to make that machine more efficient.
  • the Facebook engine is designed to scope you out, take a psychographic profile of who you are and then use its data compiled from literally billions of humans to serve you content designed to maximize your engagement with Facebook.
  • Put in those terms, you barely have a chance.
  • Of course, Facebook can come in and say, this is damaging so we’re going to add some code that says don’t show this dieting/fat-shaming content to girls 18 and under. But the algorithms will find other vulnerabilities
  • So what to do? The decision of all the companies, if not all individuals, was just to lie. What else are you going to do? Say we’re closing down our multi-billion dollar company because our product shouldn’t exist?
  • why exactly are you creating a separate group of subroutines that yanks Facebook back when it does what it’s supposed to do particularly well? This, indeed, was how the internal dialog at Facebook developed, as described in the article I read. Basically, other executives said: Our business is engagement, why are we suggesting people log off for a while when they get particularly engaged?
  • what it makes me think about more is the conversations at Tobacco companies 40 or 50 years ago. At a certain point you realize: our product is bad. If used as intended it causes lung cancer, heart disease and various other ailments in a high proportion of the people who use the product. And our business model is based on the fact that the product is chemically addictive. Our product is getting people addicted to tobacco so that they no longer really have a choice over whether to buy it. And then a high proportion of them will die because we’ve succeeded.
  • The algorithms can be taught to find and address an infinite number of behaviors. But really you’re asking the researchers and programmers to create an alternative set of instructions where Instagram (or Facebook, same difference) jumps in and does exactly the opposite of its core mission, which is to drive engagement
  • You can add filters and claim you’re not marketing to kids. But really you’re only ramping back the vast social harm marginally at best. That’s the product. It is what it is.
  • there is definitely an analogy inasmuch as what you’re talking about here aren’t some glitches in the Facebook system. These aren’t some weird unintended consequences that can be ironed out of the product. It’s also in most cases not bad actors within Facebook. It’s what the product is. The product is getting attention and engagement against which advertising is sold
  • How good is the machine learning? Well, trial and error with between 3 and 4 billion humans makes you pretty damn good. That’s the product. It is inherently destructive, though of course the bad outcomes aren’t distributed evenly throughout the human population.
  • The business model is to refine this engagement engine, getting more attention and engagement and selling ads against the engagement. Facebook gets that revenue and the digital roadkill created by the product gets absorbed by the society at large
  • Facebook is like a spectacularly profitable nuclear energy company which is so profitable because it doesn’t build any of the big safety domes and dumps all the radioactive waste into the local river.
  • in the various articles describing internal conversations at Facebook, the shrewder executives and researchers seem to get this. For the company if not every individual they seem to be following the tobacco companies’ lead.
  • Ed. Note: TPM Reader AS wrote in to say I was conflating Facebook and Instagram and sometimes referring to one or the other in a confusing way. This is a fair point.
  • I spoke of them as the same intentionally. In part I’m talking about Facebook’s corporate ownership. Both sites are owned and run by the same parent corporation and as we saw during yesterday’s outage they are deeply hardwired into each other.
  • the main reason I spoke of them in one breath is that they are fundamentally the same. AS points out that the issues with Instagram are distinct because Facebook has a much older demographic and Instagram is a predominantly visual medium. (Indeed, that’s why Facebook corporate is under such pressure to use Instagram to drive teen and young adult engagement.) But they are fundamentally the same: AI and machine learning to drive engagement. Same same. Just different permutations of the same dynamic.
Javier E

Opinion | You Are the Object of Facebook's Secret Extraction Operation - The New York T...

  • Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data
  • Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.
  • These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law.
  • The result has been a hidden revolution in how information is produced, circulated and acted upon
  • The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit.
  • The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.
  • These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.
  • There is no way to escape the machine systems that surveil us
  • All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.
  • Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”?
  • Will we confront the fundamental but long ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?
  • Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.
  • By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.”
  • Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
  • Prediction was the first imperative that determined the second imperative: extraction.
  • Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors
  • User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.
  • When asked “What is Google?” the co-founder Larry Page laid it out in 2001,
  • “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”
  • Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data
  • Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.
  • As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.
  • This is the economic context in which disinformation wins
  • In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.
  • A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained
  • The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”
  • Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.
  • Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.
  • Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models.
  • In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.
  • As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information
  • Today all apps and software, no matter how benign they appear, are designed to maximize data collection.
  • Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic
  • The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.
  • Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.
  • With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.
  • The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub, ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.
  • Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement
  • Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.
  • Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance.
  • when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.
  • All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations.
  • We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications
  • The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.
  • Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth.
  • like global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.
  • Where does that leave us?
  • Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution.
  • instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.
  • We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes
  • This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces
  • Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.
  • Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.”
  • No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests
  • the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions:
  • How should we structure and govern information, connection and communication in a democratic digital century?
  • What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society?
  • What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?
  • The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.
criscimagnael

Living better with algorithms | MIT News | Massachusetts Institute of Technology

  • At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.
  • Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?
  • To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?
  • a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.
  • Auditors have to inspect the algorithm without accessing sensitive user data.
  • Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.
  • To meet these challenges, Cen and Shah developed an auditing procedure that does not need more than black-box access to the social media algorithm (which respects trade secrets), does not remove content (which avoids issues of censorship), and does not require access to users (which preserves users’ privacy). A rough sketch of such an audit appears after this list.
  • which is known to help reduce the spread of misinformation
  • In labor markets, for example, workers learn their preferences about what kinds of jobs they want, and employers learn their preferences about the qualifications they seek from workers.
  • But learning can be disrupted by competition
  • it is indeed possible to get to a stable outcome (workers aren’t incentivized to leave the matching market), with low regret (workers are happy with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.
  • For instance, when Covid-19 cases surged in the pandemic, many cities had to decide what restrictions to adopt, such as mask mandates, business closures, or stay-home orders. They had to act fast and balance public health with community and business needs, public spending, and a host of other considerations.
  • But of course, no county exists in a vacuum.
  • These complex interactions matter,
  • “Accountability, legitimacy, trust — these principles play crucial roles in society and, ultimately, will determine which systems endure with time.” 
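The kind of black-box audit described above can be sketched in a few lines: probe the recommender with matched user profiles that differ only in political leaning and compare how often each is shown a given class of public-health content. Everything here (the recommend function, the user attributes, the tolerance) is a hypothetical stand-in for illustration, not the procedure from Cen and Shah's work.

# Hypothetical black-box audit: we can query the platform's recommender,
# but we never see its internals or any real user data.
import random

def recommend(user, n=50):
    """Stand-in for the platform's black-box API (invented behavior)."""
    bias = 0.35 if user["leaning"] == "left" else 0.20
    return ["vaccine-info" if random.random() < bias else "other" for _ in range(n)]

def audit(recommend_fn, trials=200, tolerance=0.05):
    """Compare exposure to health content for matched left/right profiles."""
    exposure = {}
    for leaning in ("left", "right"):
        user = {"age": 30, "region": "midwest", "leaning": leaning}  # matched except leaning
        feeds = [recommend_fn(user) for _ in range(trials)]
        exposure[leaning] = sum(f.count("vaccine-info") for f in feeds) / (trials * 50)
    gap = abs(exposure["left"] - exposure["right"])
    return exposure, gap, gap <= tolerance

random.seed(0)
exposure, gap, compliant = audit(recommend)
print(exposure, round(gap, 3), "compliant" if compliant else "non-compliant")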
Javier E

'The Power of One,' Facebook whistleblower Frances Haugen's memoir - The Washington Post

  • When an internal group proposed the conditions under which Facebook should step in and take down speech from political actors, Zuckerberg discarded its work. He said he’d address the issue himself over a weekend. His “solution”? Facebook would not touch speech by any politician, under any circumstances — a fraught decision under the simplistic surface, as Haugen points out. After all, who gets to count as a politician? The municipal dogcatcher?
  • It was also Zuckerberg, she says, who refused to make a small change that would have made the content in people’s feeds less incendiary — possibly because doing so would have caused a key metric to decline.
  • When the Wall Street Journal’s Jeff Horwitz began to break the stories that Haugen helped him document, the most damning one concerned Facebook’s horrifyingly disingenuous response to a congressional inquiry asking if the company had any research showing that its products were dangerous to teens. Facebook said it wasn’t aware of any consensus indicating how much screen time was too much. What Facebook did have was a pile of research showing that kids were being harmed by its products. Allow a clever company a convenient deflection, and you get something awfully close to a lie.
  • after the military in Myanmar used Facebook to stoke the murder of the Rohingya people, Haugen began to worry that this was a playbook that could be infinitely repeated — and only because Facebook chose not to invest in safety measures, such as detecting hate speech in poorer, more vulnerable places. “The scale of the problems was so vast,” she writes. “I believed people were going to die (in certain countries, at least) and for no reason other than higher profit margins.”
  • After a trip to Cambodia, where neighbors killed neighbors in the 1970s because of a “story that labeled people who had lived next to each other for generations as existential threats,” she’d started to wonder about what caused people to turn on one another to such a horrifying degree. “How quickly could a story become the truth people perceived?”
  • She points out the false choice posited by most social media companies: free speech vs. censorship. She argues that lack of transparency is what contributed most to the problems at Facebook. No one on the outside can see inside the algorithms. Even many of those on the inside can’t. “You can’t take a single academic course, anywhere in the world, on the tradeoffs and choices that go into building a social media algorithm or, more importantly, the consequences of those choices,” she writes.
  • In that lack of accountability, social media is a very different ecosystem than the one that helped Ralph Nader take on the auto industry back in the 1960s. Then, there was a network of insurers and plaintiff’s lawyers who also wanted change — and the images of mangled bodies were a lot more visible than what happens inside the mind of a teenage girl. But what if the government forced companies to share their inner workings in the same way it mandates that food companies disclose the nutrition in what they make? What if the government forced social media companies to allow academics and other researchers access to the algorithms they use?
Javier E

Quantum Computing Advance Begins New Era, IBM Says - The New York Times

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously.
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation. “We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.” In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation (see the sketch after this list).
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate,”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that could match or surpass the performance of the quantum calculations.
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
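The error-mitigation idea quoted above (amplify the noise deliberately, rerun, then extrapolate back to zero noise) can be illustrated with a toy model. The "true" value, the exponential noise model, and the simple fit below are all invented for illustration; IBM's actual procedure works on measurements from the 127-qubit processor, not a simulation like this.

# Toy zero-noise extrapolation: measure a noisy quantity at several deliberately
# amplified noise levels, then extrapolate the trend back to zero noise.
import numpy as np

rng = np.random.default_rng(1)
TRUE_VALUE = 0.82                      # the answer we pretend the ideal circuit gives

def noisy_measurement(noise_scale, shots=4000):
    """Stand-in for running the circuit: noise biases the result toward zero."""
    biased = TRUE_VALUE * np.exp(-0.3 * noise_scale)      # decay caused by noise
    return biased + rng.normal(0, 1 / np.sqrt(shots))     # plus statistical shot noise

scales = np.array([1.0, 1.5, 2.0, 3.0])                   # amplified noise levels
results = np.array([noisy_measurement(s) for s in scales])

# Fit log(result) against noise scale, then evaluate the fit at scale = 0
slope, intercept = np.polyfit(scales, np.log(results), 1)
mitigated = np.exp(intercept)

print(f"raw (scale=1): {results[0]:.3f}")
print(f"mitigated:     {mitigated:.3f}   (true value {TRUE_VALUE})")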
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Elon Musk May Kill Us Even If Donald Trump Doesn't

  • In his extraordinary 2021 book, The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch, a scholar at Brookings, writes that modern societies have developed an implicit “epistemic” compact–an agreement about how we determine truth–that rests on a broad public acceptance of science and reason, and a respect and forbearance towards institutions charged with advancing knowledge.
  • Today, Rauch writes, those institutions have given way to digital “platforms” that traffic in “information” rather than knowledge and disseminate that information not according to its accuracy but its popularity. And what is popular is sensation, shock, outrage. The old elite consensus has given way to an algorithm. Donald Trump, an entrepreneur of outrage, capitalized on the new technology to lead what Rauch calls “an epistemic secession.”
  • Rauch foresees the arrival of “Internet 3.0,” in which the big companies accept that content regulation is in their interest and erect suitable “guardrails.” In conversation with me, Rauch said that social media companies now recognize that their algorithms are “toxic,” and spoke hopefully of alternative models like Mastodon, which eschews algorithms and allows users to curate their own feeds
  • In an Atlantic essay, “Why The Past Ten Years of American Life have Been Uniquely Stupid,” and in a follow-up piece, Haidt argued that the Age of Gutenberg–of books and the deep understanding that comes with them–ended somewhere around 2014 with the rise of “Share,” “Like” and “Retweet” buttons that opened the way for trolls, hucksters and Trumpists
  • The new age of “hyper-virality,” he writes, has given us both January 6 and cancel culture–ugly polarization in both directions. On the subject of stupidification, we should add the fact that high school students now get virtually their entire stock of knowledge about the world from digital platforms.
  • Haidt proposed several reforms, including modifying Facebook’s “Share” function and requiring “user verification” to get rid of trolls. But he doesn’t really believe in his own medicine
  • Haidt said that the era of “shared understanding” is over–forever. When I asked if he could envision changes that would help protect democracy, Haidt quoted Goldfinger: “Do you expect me to talk?” “No, Mr. Bond, I expect you to die!”
  • Social media is a public health hazard–the cognitive equivalent of tobacco and sugary drinks. Adopting a public health model, we could, for example, ban the use of algorithms to reduce virality, or even require social media platforms to adopt a subscription rather than advertising revenue model and thus remove their incentive to amass ever more eyeballs.
  • We could, but we won’t, because unlike other public health hazards, digital platforms are forms of speech. Fox News is probably responsible for more polarization than all social media put together, but the federal government could not compel it–and all other media firms–to change its revenue model.
  • If Mark Zuckerberg or Elon Musk won’t do so out of concern for the public good–a pretty safe bet–they could be compelled to do so only by public or competitive pressure. 
  • Taiwan has proved resilient because its society is resilient; people reject China’s lies. We, here, don’t lack for fact-checkers, but rather for people willing to believe them. The problem is not the technology, but ourselves.
  • you have to wonder if people really are repelled by our poisonous discourse, or by the hailstorm of disinformation, or if they just want to live comfortably inside their own bubble, and not somebody else’s
  • If Jonathan Haidt is right, it’s not because we’ve created a self-replicating machine that is destined to annihilate reason; it’s because we are the self-replicating machine.
Javier E

The Age of 'Infopolitics' - NYTimes.com

  • we need a new way of thinking about our informational milieu. What we need is a concept of infopolitics that would help us understand the increasingly dense ties between politics and information
  • Infopolitics encompasses not only traditional state surveillance and data surveillance, but also “data analytics” (the techniques that enable marketers at companies like Target to detect, for instance, if you are pregnant), digital rights movements (promoted by organizations like the Electronic Frontier Foundation), online-only crypto-currencies (like Bitcoin or Litecoin), algorithmic finance (like automated micro-trading) and digital property disputes (from peer-to-peer file sharing to property claims in the virtual world of Second Life)
  • Surveying this iceberg is crucial because atop it sits a new kind of person: the informational person. Politically and culturally, we are increasingly defined through an array of information architectures: highly designed environments of data, like our social media profiles, into which we often have to squeeze ourselves
  • We have become what the privacy theorist Daniel Solove calls “digital persons.” As such we are subject to infopolitics (or what the philosopher Grégoire Chamayou calls “datapower,” the political theorist Davide Panagia “datapolitik” and the pioneering thinker Donna Haraway “informatics of domination”).
  • Once fingerprints, biometrics, birth certificates and standardized names were operational, it became possible to implement an international passport system, a social security number and all other manner of paperwork that tells us who someone is. When all that paper ultimately went digital, the reams of data about us became radically more assessable and subject to manipulation,
  • We like to think of ourselves as somehow apart from all this information. We are real — the information is merely about us.
  • But what is it that is real? What would be left of you if someone took away all your numbers, cards, accounts, dossiers and other informational prostheses? Information is not just about you — it also constitutes who you are.
  • We understandably do not want to see ourselves as bits and bytes. But unless we begin conceptualizing ourselves in this way, we leave it to others to do it for us
  • agencies and corporations will continue producing new visions of you and me, and they will do so without our input if we remain stubbornly attached to antiquated conceptions of selfhood that keep us from admitting how informational we already are.
  • What should we do about our Internet and phone patterns’ being fastidiously harvested and stored away in remote databanks where they await inspection by future algorithms developed at the National Security Agency, Facebook, credit reporting firms like Experian and other new institutions of information and control that will come into existence in future decades?
  • What bits of the informational you will fall under scrutiny? The political you? The sexual you? What next-generation McCarthyisms await your informational self? And will those excesses of oversight be found in some Senate subcommittee against which we democratic citizens might hope to rise up in revolt — or will they lurk among algorithmic automatons that silently seal our fates in digital filing systems?
  • Despite their decidedly different political sensibilities, what links together the likes of Senator Wyden and the international hacker network known as Anonymous is that they respect the severity of what is at stake in our information.
  • information is a site for the call of justice today, alongside more quintessential battlefields like liberty of thought and equality of opportunity.
  • we lack the intellectual framework to grasp the new kinds of political injustices characteristic of today’s information society.
  • though nearly all of us have a vague sense that something is wrong with the new regimes of data surveillance, it is difficult for us to specify exactly what is happening and why it raises serious concern
Javier E

Google Alters Search to Handle More Complex Queries - NYTimes.com

  • Google on Thursday announced one of the biggest changes to its search engine, a rewriting of its algorithm to handle more complex queries that affects 90 percent of all searches.
  • The company made the changes, executives said, because Google users are asking increasingly long and complex questions and are searching Google more often on mobile phones with voice search.
  • “They said, ‘Let’s go back and basically replace the engine of a 1950s car,’ ” said Danny Sullivan, founding editor of Search Engine Land, an industry blog. “It’s fair to say the general public seemed not to have noticed that Google ripped out its engine while driving down the road and replaced it with something else.”
  • Google originally matched keywords in a search query to the same words on Web pages. Hummingbird is the culmination of a shift to understanding the meaning of phrases in a query and displaying Web pages that more accurately match that meaning
  • The algorithm also builds on work Google has done to understand conversational language, like interpreting what pronouns in a search query refer to. Hummingbird extends that to all Web searches, not just results related to entities included in the Knowledge Graph. It tries to connect phrases and understand concepts in a long query.
  • The outcome is not a change in how Google searches the Web, but in the results that it shows. Unlike some of its other algorithm changes, including one that pushed down so-called content farms in search results, Hummingbird is unlikely to noticeably affect certain categories of Web businesses, Mr. Sullivan said. Instead, Google says it believes that users will see more precise results
Javier E

Mark Zuckerberg, Let Me Pay for Facebook - NYTimes.com

  • 93 percent of the public believes that “being in control of who can get information about them is important,” and yet the amount of information we generate online has exploded and we seldom know where it all goes.
  • the pop-up and the ad-financed business model. The former is annoying but it’s the latter that is helping destroy the fabric of a rich, pluralistic Internet.
  • Facebook makes about 20 cents per user per month in profit. This is a pitiful sum, especially since the average user spends an impressive 20 hours on Facebook every month, according to the company. This paltry profit margin drives the business model: Internet ads are basically worthless unless they are hyper-targeted based on tracking and extensive profiling of users. This is a bad bargain, especially since two-thirds of American adults don’t want ads that target them based on that tracking and analysis of personal behavior.
  • This way of doing business rewards huge Internet platforms, since ads that are worth so little can support only companies with hundreds of millions of users.
  • Ad-based businesses distort our online interactions. People flock to Internet platforms because they help us connect with one another or the world’s bounty of information — a crucial, valuable function. Yet ad-based financing means that the companies have an interest in manipulating our attention on behalf of advertisers, instead of letting us connect as we wish.
  • Many users think their feed shows everything that their friends post. It doesn’t. Facebook runs its billion-plus users’ newsfeed by a proprietary, ever-changing algorithm that decides what we see. If Facebook didn’t have to control the feed to keep us on the site longer and to inject ads into our stream, it could instead offer us control over this algorithm.
  • Many nonprofits and civic groups that were initially thrilled about their success in using Facebook to reach people are now despondent as their entries are less and less likely to reach people who “liked” their posts unless they pay Facebook to help boost their updates.
  • What to do? It’s simple: Internet sites should allow their users to be the customers. I would, as I bet many others would, happily pay more than 20 cents per month for a Facebook or a Google that did not track me, upgraded its encryption and treated me as a customer whose preferences and privacy matter.
  • Many people say that no significant number of users will ever pay directly for Internet services. But that is because we are misled by the mantra that these services are free. With growing awareness of the privacy cost of ads, this may well change. Millions of people pay for Netflix despite the fact that pirated copies of many movies are available free. We eventually pay for ads, anyway, as that cost is baked into products we purchase
  • A seamless, secure micropayment system that spreads a few pennies at a time as we browse a social network, up to a preset monthly limit, would alter the whole landscape for the better.
  • we’re not starting from scratch. Micropayment systems that would allow users to spend a few cents here and there, not be so easily tracked by all the Big Brothers, and even allow personalization were developed in the early days of the Internet. Big banks and large Internet platforms didn’t show much interest in this micropayment path, which would limit their surveillance abilities. We can revive it.
  • If even a quarter of Facebook’s 1.5 billion users were willing to pay $1 per month in return for not being tracked or targeted based on their data, that would yield more than $4 billion per year — surely a number worth considering (see the quick check after this list).
  • Mr. Zuckerberg has reportedly spent more than $30 million to buy the homes around his in Palo Alto, Calif., and more than $100 million for a secluded parcel of land in Hawaii. He knows privacy is worth paying for. So he should let us pay a few dollars to protect ours.
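As a quick sanity check of the revenue figure quoted above, the arithmetic works out using only the numbers given in the piece:

# Back-of-the-envelope check of the subscription estimate quoted above.
users = 1.5e9          # "Facebook's 1.5 billion users"
paying_share = 0.25    # "even a quarter"
monthly_fee = 1.00     # "$1 per month"

annual_revenue = users * paying_share * monthly_fee * 12
print(f"${annual_revenue / 1e9:.1f} billion per year")  # -> $4.5 billion per year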
Javier E

Dealing With an Identity Hijacked on the Online Highway - NYTimes.com

  • his predicament stands as a chilling example of what it means to be at the mercy of the Google algorithm.
  • The question is best directed at the search engines. And Google’s defense — that the behavior of its ever-improving algorithm should be considered independent of the results it produces in a particular controversial case — has a particularly patronizing air, especially when it comes to hurting living, breathing people.
  • it was the algorithm that took the hit, and washed away accountability.
  • “When a company is filled with engineers, it turns to engineering to solve problems,” he wrote candidly. “Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data.”
Javier E

How Game Theory Helped Improve New York City's High School Application Process - NYTime...

  • “It was an allocation problem,” explained Neil Dorosin, the director of high-school admissions at the time of the redesign. The city had a scarce resource — in this case, good schools — and had to work out an equitable way to distribute it. “But unlike a scarce resource like Rolling Stones tickets, where whoever’s willing to pay the most gets the tickets, here we can’t use price,”
  • In the early 1960s, the economists David Gale and Lloyd Shapley proved that it was theoretically possible to pair an unlimited number of men and women in stable marriages according to their preferences. In game theory, “stable” means that every player’s preferences are optimized; in this case, no man and no woman matched with another partner would both prefer to be with each other.
  • a “deferred acceptance algorithm.”
  • Here is how it works: Each suitor proposes to his first-choice mate; each woman has her own list of favorites. (The economists worked from the now-quaint premise that men only married women, and did the proposing.) She rejects all proposals except her favorite — but does not give him a firm answer. Each suitor rejected by his most beloved then proposes to his second choice, and each woman being wooed in this round again rejects all but her favorite.
  • The courting continues until everyone is betrothed. But because each woman has waited to give her final answer (the “deferred acceptance”), she has the opportunity to accept a proposal later from a suitor whom she prefers to someone she had tentatively considered earlier. The later match is preferable for her, and therefore more stable. (A minimal code sketch of this procedure appears after this list.)
  • The deferred acceptance algorithm, Professor Pathak said, is “one of the great ideas in economics.” It quickly became the basis for a standard lesson in graduate-level economics courses.
  • In the case of rejection, the algorithm looks to make a match with a student’s second-choice school, and so on. Like the brides and grooms of Professors Gale and Shapley, students and schools connect only tentatively until the very end of the process.
  • Professor Abdulkadiroglu said he had fielded calls from anguished parents seeking advice on how their children could snare the best match. His advice: “Rank them in true preference order.”
  • It seems that most students prefer to go to school close to home, and if nearby schools are underperforming, students will choose them nevertheless. Researching other options is labor intensive, and poor and immigrant children in particular may not get the help they need to do it.
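The proposal-and-rejection rounds described above translate almost line for line into code. The sketch below is a minimal one-to-one ("marriage") version of deferred acceptance with invented toy preference lists; the school-choice variant used by the city extends the same idea to schools with many seats, so this is an illustration of the algorithm, not the city's actual matching software.

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance (one-to-one version).

    proposer_prefs: dict mapping each proposer to an ordered list of reviewers.
    reviewer_prefs: dict mapping each reviewer to an ordered list of proposers.
    Returns a stable matching as a dict {reviewer: proposer}.
    """
    # Rank tables let each reviewer compare two suitors in constant time.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                   # proposers with no tentative match
    next_choice = {p: 0 for p in proposer_prefs}  # index of the next reviewer to try
    held = {}                                     # reviewer -> tentatively held proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in held:
            held[r] = p                    # held, but not a firm answer yet
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])           # a preferred proposer displaces the earlier one
            held[r] = p
        else:
            free.append(p)                 # rejected; p proposes to his next choice later
    return held

# Toy example with hypothetical students proposing to single-seat schools:
students = {"ana": ["north", "south", "west"],
            "bo":  ["north", "west", "south"],
            "cy":  ["south", "north", "west"]}
schools  = {"north": ["bo", "ana", "cy"],
            "south": ["ana", "cy", "bo"],
            "west":  ["cy", "bo", "ana"]}
print(deferred_acceptance(students, schools))
# {'south': 'ana', 'north': 'bo', 'west': 'cy'}
```

Because every rejection is only tentative, the run above ends with each student at the best school that will still have them, which is the stability property the article describes.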
Javier E

Why It Matters That Google's Android Is Coming to All the Screens - Alexis C. Madrigal ... - 1 views

  • If Android is to be the core of your own personal swarm of screens and robots, it means that all of the data Google knows about you will come to the interactions you have with the stuff around you. The profile you build up at your computer and on your tablet will now apply to your television watching and your commuting, seamlessly
  • Sitting underneath all the interactions you have across these devices, Google's algorithms will be churning. The way Google thinks—its habits of data collection, analysis, and optimization—will become part of these experiences that have previously remained outside the company's reach.
  • Google's announcement that they will be determining "the most important" notifications to show you. Like Facebook's News Feed or Gmail's Priority Inbox, you'll see a selection of possible notifications, rather than all of them, when you glance at the phone. (A tap lets you see all of them.) That is to say, Google is asserting algorithmic control over the new interface frontier. And the software that determines what you see will be opaque to you. (A toy illustration of this kind of top-N filtering follows this list.)
  • let's be clear: Google is almost always willing to trade user control for small increases in efficiency. 
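The selection step described above (show a few "most important" notifications, keep the rest a tap away) boils down to scoring and ranking. The sketch below is purely hypothetical: the fields, weights and cutoff are invented assumptions, not Google's actual logic, which the passage notes is opaque.

```python
from dataclasses import dataclass

# Purely illustrative ranking of notifications; the weights, fields and
# cutoff are invented assumptions, not any platform's real scoring logic.

@dataclass
class Notification:
    app: str
    sender_is_contact: bool
    minutes_old: float

def score(n: Notification) -> float:
    s = 0.0
    s += 2.0 if n.sender_is_contact else 0.0   # assumed: people you know rank higher
    s += 1.0 if n.app in {"messages", "calendar"} else 0.0
    s -= 0.01 * n.minutes_old                  # assumed: older items slowly lose priority
    return s

def top_notifications(all_notifications, k=3):
    """Show only the k highest-scoring notifications; the rest stay a tap away."""
    ranked = sorted(all_notifications, key=score, reverse=True)
    return ranked[:k], ranked[k:]

shown, hidden = top_notifications([
    Notification("messages", True, 2),
    Notification("game", False, 30),
    Notification("calendar", False, 5),
    Notification("shopping", False, 60),
])
print([n.app for n in shown])   # ['messages', 'calendar', 'game']
```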
Javier E

An Algorithm Isn't Always the Answer - The New York Times - 1 views

  • in just about every aspect of my life I seek order and safety.
  • Picture me on Tinder circa 2014.
  • Here are my search criteria: I’m looking for men in my area (no farther than three miles away, because traveling is such a hassle and I take too many cabs as it is) who are anywhere from two years younger than me up to 10 years older (going on the assumption that women mature more quickly than men). And for goodness’ sake, my friends would tell me, find a man who isn’t a writer — they’re way too emotionally unstable. Certainly if I could check most of those items off the checklist, I’d find love or some good enough approximation of it.
  • How did it go? I was absolutely miserable dating appropriate-age marketing associates who lived near me. I always wanted to be at home reading instead
  • Then one night I held a reading with some authors I admire at a bookstore, and I threw an after-party at my favorite dive bar. In walked a friend of a friend who I sort of knew from the internet but who I’d never met in real life. He is six years younger than I am (way too young for me) and he lived in Harlem (that’s a $40 cab fare from my home in Brooklyn) and he’s a writer/comedian (warning flags coming at me from every direction). But we talked and he charmed me. He was online dating, too, but I never would’ve found him on an app. He wasn’t on my metaphorical vision board, but he fit into my real life in ways I never could’ve imagined. He’s my husband now. (He likes David Foster Wallace.)
  • The internet is supposed to make it easier for us to find people and places and perfect gifts, and more profitable for companies that offer those services. And yet here I am, with my too-old dog and my too-young husband and my ever-growing book collection, happier than I could have predicted.
  • It’s risky not to have data, to be without numbers you can plug in when you’re looking for something or someone to love. We think we know exactly what we want. But I hope that our guts remain true to our hearts, and in this world measured by clicks and stars and highest customer reviews, we remember that some rules are made to be broken in the most delightful of ways.
Javier E

Facebook and Twitter Dodge a 2016 Repeat, and Ignite a 2020 Firestorm - The New York Times - 1 views

  • It’s true that banning links to a story published by a 200-year-old American newspaper — albeit one that is now a Rupert Murdoch-owned tabloid — is a more dramatic step than cutting off WikiLeaks or some lesser-known misinformation purveyor. Still, it’s clear that what Facebook and Twitter were actually trying to prevent was not free expression, but a bad actor using their services as a conduit for a damaging cyberattack or misinformation.
  • These decisions get made quickly, in the heat of the moment, and it’s possible that more contemplation and debate would produce more satisfying choices. But time is a luxury these platforms don’t always have. In the past, they have been slow to label or remove dangerous misinformation about Covid-19, mail-in voting and more, and have only taken action after the bad posts have gone viral, defeating the purpose.
  • That left the companies with three options, none of them great. Option A: They could treat the Post’s article as part of a hack-and-leak operation, and risk a backlash if it turned out to be more innocent. Option B: They could limit the article’s reach, allowing it to stay up but choosing not to amplify it until more facts emerged. Or, Option C: They could do nothing, and risk getting played again by a foreign actor seeking to disrupt an American election.
  • On Wednesday, several prominent Republicans, including Mr. Trump, repeated their calls for Congress to repeal Section 230 of the Communications Decency Act, a law that shields tech platforms from many lawsuits over user-generated content.
  • That leaves the companies in a precarious spot. They are criticized when they allow misinformation to spread. They are also criticized when they try to prevent it.
  • Perhaps the strangest idea to emerge in the past couple of days, though, is that these services are only now beginning to exert control over what we see. Representative Doug Collins, Republican of Georgia, made this point in a letter to Mark Zuckerberg, the chief executive of Facebook, in which he derided the social network for using “its monopoly to control what news Americans have access to.”
  • The truth, of course, is that tech platforms have been controlling our information diets for years, whether we realized it or not. Their decisions were often buried in obscure “community standards” updates, or hidden in tweaks to the black-box algorithms that govern which posts users see.
  • Their leaders have always been editors masquerading as engineers.
  • What’s happening now is simply that, as these companies move to rid their platforms of bad behavior, their influence is being made more visible.
  • Rather than letting their algorithms run amok (which is an editorial choice in itself), they’re making high-stakes decisions about flammable political misinformation in full public view, with human decision makers who can be debated and held accountable for their choices.
  • After years of inaction, Facebook and Twitter are finally starting to clean up their messes. And in the process, they’re enraging the powerful people who have thrived under the old system.
Javier E

Guillaume Chaslot came to speak to InnoCherche's artificial intelligence think tank... - 0 views

  • Appalled by the results, he proposed that Google, his employer, develop a "step back" (or "zoom out") option for users that would let them climb out of the bubble they were sinking into and change the subject. Google refused, because that could have worked against its goal of increasing revenue, and therefore the number of hours users spend on the platform.
  • Today the platform does not even offer likes as a way to learn your tastes and steer you in a personal direction.
  • Guillaume Chaslot tirelessly denounces the excesses of video recommendation systems and calls for a "collective awakening" on the transparency of algorithms. Today, Guillaume confirms that it is entirely possible to turn these artificial intelligence algorithms around and make user recommendations based on criteria of open-mindedness rather than addiction…
huffem4

Algorithms reveal changes in stereotypes | Stanford News - 2 views

  • The researchers used word embeddings – an algorithmic technique that can map relationships and associations between words –  to measure changes in gender and ethnic stereotypes over the past century in the United States.
  • Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases.”
  • Take the word “honorable.” Using the embedding tool, previous research found that the adjective has a closer relationship to the word “man” than the word “woman.” (A toy sketch of this kind of comparison appears after this list.)
  • One of the key findings to emerge was how biases toward women changed for the better – in some ways – over time.
  • For example, adjectives such as “intelligent,” “logical” and “thoughtful” were associated more with men in the first half of the 20th century. But since the 1960s, the same words have increasingly been associated with women with every following decade, correlating with the women’s movement in the 1960s, although a gap still remains.
  • For example, in the 1910s, words like “barbaric,” “monstrous” and “cruel” were the adjectives most associated with Asian last names. By the 1990s, those adjectives were replaced by words like “inhibited,” “passive” and “sensitive.” This linguistic change correlates with a sharp increase in Asian immigration to the United States in the 1960s and 1980s and a change in cultural stereotypes, the researchers said.
  • “It underscores the importance of humanists and computer scientists working together. There is a power to these new machine-learning methods in humanities research that is just being understood,”
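The "closer relationship" described above is usually measured with something like cosine similarity between word vectors. The sketch below uses tiny made-up vectors purely to show the mechanics of that comparison; the numbers are illustrative assumptions, not data or results from the Stanford study.

```python
import numpy as np

# Tiny made-up vectors, purely to show the mechanics of comparing how close
# an adjective sits to two group words; the values are invented, not study data.
vectors = {
    "honorable": np.array([0.9, 0.1, 0.3]),
    "man":       np.array([0.8, 0.2, 0.1]),
    "woman":     np.array([0.2, 0.9, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_gap(adjective, group_a, group_b, emb):
    """Positive values mean the adjective sits closer to group_a than to group_b."""
    return cosine(emb[adjective], emb[group_a]) - cosine(emb[adjective], emb[group_b])

gap = association_gap("honorable", "man", "woman", vectors)
print(f"association gap (man vs. woman): {gap:+.3f}")
# Repeating this comparison for many adjectives, decade by decade of text, is the
# kind of quantitative lens the researchers describe.
```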