TOK Friends: Group items tagged "internet"

Resist the Internet - The New York Times - 0 views

  • Definitely if you’re young, increasingly if you’re old, your day-to-day, minute-to-minute existence is dominated by a compulsion to check email and Twitter and Facebook and Instagram with a frequency that bears no relationship to any communicative need.
  • it requires you to focus intensely, furiously, and constantly on the ephemera that fills a tiny little screen, and experience the traditional graces of existence — your spouse and friends and children, the natural world, good food and great art — in a state of perpetual distraction.
  • Used within reasonable limits, of course, these devices also offer us new graces. But we are not using them within reasonable limits.
  • They are the masters; we are not. They are built to addict us, as the social psychologist Adam Alter’s new book “Irresistible” points out — and to madden us, distract us, arouse us and deceive us.
  • We primp and perform for them as for a lover; we surrender our privacy to their demands; we wait on tenterhooks for every “like.” The smartphone is in the saddle, and it rides mankind.
  • the internet, like alcohol, may be an example of a technology that should be sensibly restricted in custom and in law.
  • It certainly delivers some social benefits, some intellectual advantages, and contributes an important share to recent economic growth.
  • there are also excellent reasons to think that online life breeds narcissism, alienation and depression, that it’s an opiate for the lower classes and an insanity-inducing influence on the politically-engaged, and that it takes more than it gives from creativity and deep thought. Meanwhile the age of the internet has been, thus far, an era of bubbles, stagnation and democratic decay — hardly a golden age whose customs must be left inviolate.
  • So a digital temperance movement would start by resisting the wiring of everything, and seek to create more spaces in which internet use is illegal, discouraged or taboo. Toughen laws against cellphone use in cars, keep computers out of college lecture halls, put special “phone boxes” in restaurants where patrons would be expected to deposit their devices, confiscate smartphones being used in museums and libraries and cathedrals, create corporate norms that strongly discourage checking email in a meeting.
  • Then there are the starker steps. Get computers — all of them — out of elementary schools, where there is no good evidence that they improve learning. Let kids learn from books for years before they’re asked to go online for research; let them play in the real before they’re enveloped by the virtual
  • The age of consent should be 16, not 13, for Facebook accounts. Kids under 16 shouldn’t be allowed on gaming networks. High school students shouldn’t bring smartphones to school. Kids under 13 shouldn’t have them at all.
  • I suspect that versions of these ideas will be embraced within my lifetime by a segment of the upper class and a certain kind of religious family. But the masses will still be addicted, and the technology itself will have evolved to hook and immerse — and alienate and sedate — more completely and efficiently.

The Choose-Your-Own-News Adventure - The New York Times - 0 views

  • some new twist on the modern media sphere’s rush to give you exactly what you want when you want it.
  • No matter how far the experiment goes, Netflix is again in step with the national zeitgeist. After all, there are algorithms for streaming music services like Spotify, for Facebook’s news feed and for Netflix’s own program menu, working to deliver just what you like while filtering out whatever might turn you off and send you away — the sorts of data-driven honey traps that are all the talk at the South by Southwest Interactive Festival going on here through this week.
  • “You used to be a consumer of reality, and now you’re a designer of reality.”
  • It started with President Trump’s Twitter posts accusing former President Barack Obama of having wiretapped his phones at Trump Tower.
  • The proof, you would have heard him say, was already out there in the mainstream media — what with a report on the website Heat Street saying that the Federal Bureau of Investigation had secured a warrant to investigate ties between people in Mr. Trump’s campaign and Russia, and articles in The New York Times, in The Washington Post and elsewhere about intelligence linking people in Mr. Trump’s campaign to Russia, some of it from wiretaps.
  • You could throw on the goggles, become a bird and fly around. If virtual reality can allow a human to become a bird, why couldn’t it allow you to live more fully in your own political reality — don the goggles and go live full time in the adventure of your choosing: A, B or C.
  • Just watch out for that wall you’re about to walk into IRL (in real life). Or, hey, don’t — knock yourself out.
  •  
    This new design reminds me of how the internet confines us to our comfort zones. Although, in theory, there is an almost infinite amount of information on the internet, we can only take in a very small proportion of it, and people tend to read the information that supports their ideas or fits their interests. So news services have started to design systems that show readers only what they want or like to see. That does nothing to diversify people's thinking, which is what the internet should be doing. In the quote, Dan Wagner said: "you're a designer of reality", but I interpret this as meaning that we are each the designer of our own reality. That will only isolate people from one another: without living in the same reality, people won't have real communication, so I think this new design does have cons. --Sissi (3/14/2017)

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”

What Facebook Owes to Journalism - The New York Times - 0 views

  • declared that “a strong news industry is also critical to building an informed community.”
  • Unfortunately, his memo ignored two major points — the role that Facebook and other technology platforms are playing in inadvertently damaging local news media, and the one way they could actually save journalism: with a massive philanthropic commitment.
  • As advertising spending shifted from print, TV and radio to the internet, the money didn’t mostly go to digital news organizations. Increasingly, it goes to Facebook and Google.
  • But just because the result is unintentional doesn’t mean it is fantasy: Newsrooms have been decimated, with basic accountability reporting slashed as a result.
  • I’m not saying that the good stuff — the mobile revolution, blocking intrusive ads, better marketing options for small businesses — doesn’t outweigh the bad. And local news organizations absolutely contributed to the problem with their sluggish and often uncreative reaction to the digital revolution.
  •  
    This article discusses the impact of the internet on local news organizations. I agree with the author that the internet takes much of the ad money, leaving local news organizations with less funding. Although there are donations, they are still very small compared with what local news organizations used to have, which may be part of the reason they struggle to deliver great information. But as time moves forward, Facebook and Google should take on some of the responsibility, since they receive more of the funding and resources. This article is very persuasive because it supports its claims with data and evidence, and I really like that the author acknowledges the counterargument, which makes it more reliable. --Sissi (2/22/2017)

After the Fact - The New Yorker - 1 views

  • newish is the rhetoric of unreality, the insistence, chiefly by Democrats, that some politicians are incapable of perceiving the truth because they have an epistemological deficit: they no longer believe in evidence, or even in objective reality.
  • the past of proof is strange and, on its uncertain future, much in public life turns. In the end, it comes down to this: the history of truth is cockamamie, and lately it’s been getting cockamamier.
  • Michael P. Lynch is a philosopher of truth. His fascinating new book, “The Internet of Us: Knowing More and Understanding Less in the Age of Big Data,” begins with a thought experiment: “Imagine a society where smartphones are miniaturized and hooked directly into a person’s brain.” As thought experiments go, this one isn’t much of a stretch. (“Eventually, you’ll have an implant,” Google’s Larry Page has promised, “where if you think about a fact it will just tell you the answer.”) Now imagine that, after living with these implants for generations, people grow to rely on them, to know what they know and forget how people used to learn—by observation, inquiry, and reason. Then picture this: overnight, an environmental disaster destroys so much of the planet’s electronic-communications grid that everyone’s implant crashes. It would be, Lynch says, as if the whole world had suddenly gone blind. There would be no immediate basis on which to establish the truth of a fact. No one would really know anything anymore, because no one would know how to know. I Google, therefore I am not.
  • In England, the abolition of trial by ordeal led to the adoption of trial by jury for criminal cases. This required a new doctrine of evidence and a new method of inquiry, and led to what the historian Barbara Shapiro has called “the culture of fact”: the idea that an observed or witnessed act or thing—the substance, the matter, of fact—is the basis of truth and the only kind of evidence that’s admissible not only in court but also in other realms where truth is arbitrated. Between the thirteenth century and the nineteenth, the fact spread from law outward to science, history, and journalism.
  • Lynch isn’t terribly interested in how we got here. He begins at the arrival gate. But altering the flight plan would seem to require going back to the gate of departure.
  • Lynch thinks we are frighteningly close to this point: blind to proof, no longer able to know. After all, we’re already no longer able to agree about how to know. (See: climate change, above.)
  • We now only rarely discover facts, Lynch observes; instead, we download them.
  • For the length of the eighteenth century and much of the nineteenth, truth seemed more knowable, but after that it got murkier. Somewhere in the middle of the twentieth century, fundamentalism and postmodernism, the religious right and the academic left, met up: either the only truth is the truth of the divine or there is no truth; for both, empiricism is an error.
  • That epistemological havoc has never ended: much of contemporary discourse and pretty much all of American politics is a dispute over evidence. An American Presidential debate has a lot more in common with trial by combat than with trial by jury,
  • came the Internet. The era of the fact is coming to an end: the place once held by “facts” is being taken over by “data.” This is making for more epistemological mayhem, not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines
  • “Most knowing now is Google-knowing—knowledge acquired online,”
  • Empiricists believed they had deduced a method by which they could discover a universe of truth: impartial, verifiable knowledge. But the movement of judgment from God to man wreaked epistemological havoc.
  • “The Internet didn’t create this problem, but it is exaggerating it,”
  • nothing could be less well settled in the twenty-first century than whether people know what they know from faith or from facts, or whether anything, in the end, can really be said to be fully proved.
  • In his 2012 book, “In Praise of Reason,” Lynch identified three sources of skepticism about reason: the suspicion that all reasoning is rationalization, the idea that science is just another faith, and the notion that objectivity is an illusion. These ideas have a specific intellectual history, and none of them are on the wane.
  • Their consequences, he believes, are dire: “Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • When we Google-know, Lynch argues, we no longer take responsibility for our own beliefs, and we lack the capacity to see how bits of facts fit into a larger whole
  • Essentially, we forfeit our reason and, in a republic, our citizenship. You can see how this works every time you try to get to the bottom of a story by reading the news on your smartphone.
  • what you see when you Google “Polish workers” is a function of, among other things, your language, your location, and your personal Web history. Reason can’t defend itself. Neither can Google.
  • Trump doesn’t reason. He’s a lot like that kid who stole my bat. He wants combat. Cruz’s appeal is to the judgment of God. “Father God, please . . . awaken the body of Christ, that we might pull back from the abyss,” he preached on the campaign trail. Rubio’s appeal is to Google.
  • Is there another appeal? People who care about civil society have two choices: find some epistemic principles other than empiricism on which everyone can agree or else find some method other than reason with which to defend empiricism
  • Lynch suspects that doing the first of these things is not possible, but that the second might be. He thinks the best defense of reason is a common practical and ethical commitment.
  • That, anyway, is what Alexander Hamilton meant in the Federalist Papers, when he explained that the United States is an act of empirical inquiry: “It seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.”

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol... - 0 views

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform after three years struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day.
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”

How Alignment Charts Went From Dungeons & Dragons to a Meme - The Atlantic - 0 views

  • Bartle recommends against using an alignment chart in a virtual space or online game because, on the internet, “much of what is good or evil, lawful or chaotic, is intangible.” The internet creates so many unpredictable conflicts and confusing scenarios for human interaction that judgment becomes impossible.
  • At the same time, judgment comes down constantly online. Social-media platforms frequently enforce binary responses: either award something a heart because you love it, or reply with something quick and crude when you hate it. The internet is a space of permutations and addled context, yet, as the Motherboard writer Roisin Kiberd argued in a 2019 essay collection about meme culture, “the internet is full of reductive moral judgment.”

How Joe Biden's Digital Team Tamed the MAGA Internet - The New York Times - 1 views

  • it’s worth looking under the hood of the Biden digital strategy to see what future campaigns might learn from it.
  • while the internet alone didn’t get Mr. Biden elected, a few key decisions helped his chances.
  • 1. Lean On Influencers and Validators
  • In the early days of his campaign, Mr. Biden’s team envisioned setting up its own digital media empire. It posted videos to his official YouTube channel, conducted virtual forums and even set up a podcast hosted by Mr. Biden, “Here’s the Deal.”
  • those efforts were marred by technical glitches and lukewarm receptions, and they never came close to rivaling the reach of Mr. Trump’s social media machine.
  • So the campaign pivoted to a different strategy, which involved expanding Mr. Biden’s reach by working with social media influencers and “validators,
  • Perhaps the campaign’s most unlikely validator was Fox News. Headlines from the outlet that reflected well on Mr. Biden were relatively rare, but the campaign’s tests showed that they were more persuasive to on-the-fence voters than headlines from other outlets
  • the “Rebel Alliance,” a jokey nod to Mr. Parscale’s “Death Star,” and it eventually grew to include the proprietors of pages like Occupy Democrats, Call to Activism, The Other 98 Percent and Being Liberal.
  • 2. Tune Out Twitter, and Focus on ‘Facebook Moms’
  • “The whole Biden campaign ethos was ‘Twitter isn’t real life,’” Mr. Flaherty said. “There are risks of running a campaign that is too hyper-aware of your own ideological corner.”
  • As it focused on Facebook, the Biden campaign paid extra attention to “Facebook moms” — women who spend a lot of time sharing cute and uplifting content
  • “Our goal was really to meet people where they were,”
  • 3. Build a Facebook Brain Trust
  • “When people saw a Fox News headline endorsing Joe Biden, it made them stop scrolling and think.”
  • Ultimately, he said, the campaign’s entire digital strategy — the Malarkey Factory, the TikTok creators and Facebook moms, the Fortnite signs and small-batch creators — was about trying to reach a kinder, gentler version of the internet that it still believed existed.
  • “I had the freedom to go for the jugular,” said Rafael Rivero, a co-founder of Occupy Democrats and Ridin’ With Biden, another big pro-Biden Facebook page.
  • “It was sort of a big, distributed message test,” Mr. Flaherty said of the Rebel Alliance. “If it was popping through Occupy or any of our other partners, we knew there was heat there.”
  • These left-wing pages gave the campaign a bigger Facebook audience than it could have reached on its own. But they also allowed Mr. Biden to keep most of his messaging positive, while still tapping into the anger and outrage many Democratic voters felt.
  • 4. Promote ‘Small-Batch Creators,’ Not Just Slick Commercials
  • the Biden campaign found that traditional political ads — professionally produced, slick-looking 30-second spots — were far less effective than impromptu, behind-the-scenes footage and ads that featured regular voters talking directly into their smartphones or webcams about why they were voting for Mr. Biden.
  • “The things that were realer, more grainy and cheaper to produce were more credible.”
  • In addition to hiring traditional Democratic ad firms, the campaign also teamed up with what it called “small-batch creators” — lesser-known producers and digital creators, some of whom had little experience making political ads
  • 5. Fight Misinformation, but Pick Your Battles
  • The campaign formed an in-house effort to combat these rumors, known as the “Malarkey Factory.” But it picked its battles carefully, using data from voter testing to guide its responses.
  • “The Hunter Biden conversation was many times larger than the Hillary Clinton email conversation, but it really didn’t stick, because people think Joe Biden’s a good guy,”
  • the campaign’s focus on empathy had informed how it treated misinformation: not as a cynical Trump ploy that was swallowed by credulous dupes, but as something that required listening to voters to understand their concerns and worries before fighting back
  • On the messaging app Signal, the page owners formed a group text that became a kind of rapid-response brain trust for the campaign.
  • “We made a decision early that we were going to be authentically Joe Biden online, even when people were saying that was a trap.”

Trump's Twitter ban renews calls for tech law changes by many who don't get tech or the... - 1 views

  • There is no way Wednesday's events could have happened without the convenience and ease afforded to white supremacists — and almost everyone else — by the openness of the modern consumer internet.
  • It's ironic, then, that the insurrection unfolded on the heels of President Donald Trump's continual efforts to repeal Section 230 of the Communications Decency Act, which makes it difficult to sue online platforms over the content they host (or don't) — or how they moderate it (or don't).
  • Section 230 is, of course, the rare law that is disliked by Republicans and Democrats. Biden hates it, having said: "I think social media should be more socially conscious in terms of what is important in terms of our democracy. ... Everything should not be about whether they can make a buck."
  • ...8 more annotations...
  • It's one of the most consequential laws governing the internet, and it provided a crucial liability shield for technology companies for content they didn't themselves create, like comment threads.
  • and it has never even been updated to take into account any of the technological changes that have happened since.
  • What Section 230 isn't (though it's often portrayed that way) is a bedrock for free speech protections: It's simply a rule that permits internet companies to moderate what other people put on their platforms — or not — without being on the hook legally for everything that happens to be there
  • There is an opportunity to use technology to protect people's ability to safely participate in democracy and enable a different America — the America we witnessed in Georgia on Tuesday — and a different world.
  • After Republicans lost the White House, the House and then the Senate, technology companies no longer feel pressure to cozy up to conservatives to keep their prerogatives.
  • But don't mistake the technology industry's lobbying points about free speech as being related to any real care for American democracy.
  • The major technology platforms enabling hate speech all have one thing in common with our 45th president: self-interest.
  • Freedom of speech is truly a value to cherish, but we cherish it through facilitating the expression of truth, not the unfettered right to spew lies and incite violence without consequence.
20More

How the web distorts reality and impairs our judgement skills | Media Network | The Gua... - 0 views

  • IBM estimates that 90% of the world's online data has been created just in the past two years. What's more, it has made information more accessible than ever before.
  • However, rather than enhancing knowledge, the internet has produced an information glut or "infoxication".
  • Furthermore, since online content is often curated to fit our preferences, interests and personality, the internet can even enhance our existing biases and undermine our motivation to learn new things.
    • ilanaprincilus06
       
      When we see our preferences constantly being displayed, we are more likely to go back to that source or website and use it more often.
  • ...14 more annotations...
  • these filters will isolate people in information bubbles only partly of their own choosing, and the inaccurate beliefs they form as a result may be difficult to correct."
  • the proliferation of search engines, news aggregators and feed-ranking algorithms is more likely to perpetuate ignorance than knowledge.
  • It would seem that excessive social media use may intensify not only feelings of loneliness, but also ideological isolation.
    • ilanaprincilus06
       
      Would social media networks need to stop exploiting these preferences in order for us to limit ideological isolation?
  • "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact."
  • Recent studies show that although most people consume information that matches their opinions, being exposed to conflicting views tends to reduce prejudice and enhance creative thinking.
  • the desire to prove ourselves right and maintain our current beliefs trumps any attempt to be creative or more open-minded.
  • "our objects of inquiry are not 'truth' or 'meaning' but rather configurations of consciousness. These are figures or patterns of knowledge, cognitive and practical attitudes, which emerge within a definite historical and cultural context."
  • the internet is best understood as a cultural lens through which we construct – or distort – reality.
  • we can only deal with this overwhelming range of choices by ignoring most of them.
  • trolling is so effective for enticing readers' comments, but so ineffective for changing their viewpoints.
  • Will accumulating facts help you understand the world?
    • ilanaprincilus06
       
      We must take an extra step beyond just reading and learning facts, and develop second-order thinking about those claims to truly gain a better sense of what is going on.
  • we have developed a dependency on technology, which has eclipsed our reliance on logic, critical thinking and common sense: if you can find the answer online, why bother thinking?
  • it is conceivable that individuals' capacity to evaluate and produce original knowledge will matter more than the actual acquisition of knowledge.
  • Good judgment and decision-making will be more in demand than sheer expertise or domain-specific knowledge.
12More

Quinn Norton: The New York Times Fired My Doppelgänger - The Atlantic - 0 views

  • Quinn Norton
  • The day before Valentine’s Day, social media created a bizarro-world version of me. I have seen strange ideas about me online before, but this doppelgänger was so far from resembling me that I told friends and loved ones I didn’t want to even try to rebut it. It was a leading question turned into a human form. The net created a person with my name and face, but with so little relationship to me, she could have been an invader from an alternate universe.
  • It started when The New York Times hired me for its editorial board. In January, the Times sought me out because, editorial leaders told me, the Times as an institution is struggling with understanding how technology is shifting society and politics. We talked for a while. I discussed my work, my beliefs, and my background.
  • ...9 more annotations...
  • I was hesitant with the Times. They were far out of my comfort zone, but I felt that the people I was talking to had a sincerity greater than their confusion. Nothing that has happened since then has dissuaded me from that impression.
  • If you’re reading this, especially on the internet, you are the teacher for those institutions at a local, national, and global level. I understand that you didn’t ask for this position. Neither did I. History doesn’t ask you if you want to be born in a time of upheaval, it just tells you when you are. When the backlash began, I got the call from the person who had sought me out and recruited me. The fear I heard in that shaky voice coming through my mobile phone was unmistakable. It was the fear of a mob, of the unknown, and of the idea that maybe they had gotten it wrong and done something terrible. I have felt all of those things. Many of us have. It’s not a place of strength, even when it seems to be coming from someone standing in a place of power. The Times didn’t know what the internet was doing—tearing down a new hire, exposing a fraud, threatening them—everything seemed to be in the mix.
  • I had even written about context collapse myself, but that hadn’t saved me from falling into it, and then hurting other people I didn’t mean to hurt. This particular collapse didn’t create much of a doppelgänger, but it did find me spending a morning as a defensive jerk. I’m very sorry for that dumb mistake. It helped me learn a lesson: Be damn sure when you make angry statements. Check them out long enough that, even if the statements themselves are still angry, you are not angry by the time you make them. Again and again, I have learned this: Don’t internet angry. If you’re angry, internet later.
  • I think if I’d gotten to write for the Times as part of their editorial board, this might have been different. I might have been in a position to show how our media doppelgängers get invented, and how we can unwind them. It takes time and patience. It doesn’t come from denying the doppelgänger—there’s nothing there to deny. I was accused of homophobia because of the in-group language I used with anons when I worked with them. (“Anons” refers to people who identify as part of the activist collective Anonymous.) I was accused of racism for use of taboo language, mainly in a nine-year-old retweet in support of Obama. Intentions aside, it wasn’t a great tweet, and I was probably overemotional when I retweeted it.
  • In late 2015 I woke up a little before 6 a.m., jet-lagged in New York, and started looking at Twitter. There was a hashtag, I don’t remember if it was trending or just in my timeline, called #whitegirlsaremagic. I clicked on it, and found it was racist and sexist dross. It was being promulgated in opposition to another hashtag, #blackgirlsaremagic. I clicked on that, and found a few model shots and borderline soft-core porn of black women. Armed with this impression, I set off to tweet in righteous anger about how much I disliked women being reduced to sex objects regardless of race. I was not just wrong in this moment, I was incoherently wrong. I had made my little mental model of what #blackgirlsaremagic was, and I had no clue that I had no clue what I was talking about. My 60-second impression of #whitegirlsaremagic was dead-on, but #blackgirlsaremagic didn’t fit in the last few tweets my browser had loaded.
  • I had been a victim of something the sociologists Alice Marwick and danah boyd call context collapse, where people create online culture meant for one in-group, but exposed to any number of out-groups without its original context by social-media platforms, where it can be recontextualized easily and accidentally.
  • Not everyone believes loving engagement is the best way to fight evil beliefs, but it has a good track record. Not everyone is in a position to engage safely with racists, sexists, anti-Semites, and homophobes, but for those who are, it’s a powerful tool. Engagement is not the one true answer to the societal problems destabilizing America today, but there is no one true answer. The way forward is as multifarious and diverse as America is, and a method of nonviolent confrontation and accountability, arising from my pacifism, is what I can bring to helping my society.
  • Here is your task, person on the internet, reader of journalism, speaker to the world on social media: You make the world now, in a way that you never did before. Your beliefs have a power they’ve never had in human history. You must learn to investigate with a scientific and loving mind not only what is true, but what is effective in the world. Right now we are a world of geniuses who constantly love to call each other idiots. But humanity is the most complicated thing we’ve found in the universe, and so far as we know, we’re the only thing even looking. We are miracles by the billions with powers and luxuries beyond the dreams of kings of old.
  • We are powerful creatures, but power must come with gentleness and responsibility. No one prepared us for this, no one trained us, no one came before us with an understanding of our world. There were hints, and wise people, and I lean on and cherish them. But their philosophies and imaginations can only take us so far. We have to build our own philosophies and imagine great futures for our world in order to have any futures at all. Let mercy guide us forward in these troubled times. Let yourself imagine, because imagination is the wellspring of hope. Here, in the beginning of the 21st century, hope is our duty to the future.
7More

U.S. and Iran Are Trolling Each Other - in China - The New York Times - 0 views

  • As tensions between the United States and Iran persist after the American killing of a top Iranian general this month, the two countries are waging a heated battle in an unlikely forum: the Chinese internet. The embassies of the United States and Iran in Beijing have published a series of barbed posts in recent days on Weibo, a popular Chinese social media site, attacking each other in Chinese and in plain view of the country’s hundreds of millions of internet users.
  • The battle has captivated people in China, where diplomatic rows rarely break into public view and the government often censors posts about politics.
  • Iran, for its part, has for years sought to hinder the flow of information from the West more broadly, blocking Facebook, Twitter and other social networks.
  • ...4 more annotations...
  • The Chinese authorities operate one of the world’s most aggressive censorship systems, routinely scrubbing reports, comments and posts on the internet that are deemed politically sensitive or subversive. Posts by foreign diplomats are known to have been censored, especially on topics such as North Korea or human rights.
  • China and Iran have sought closer relations in recent years, especially as American sanctions have increased economic pressure on Tehran.
  • In its Weibo posts, the Iranian Embassy made a point of appealing to Chinese internet users, thanking them for their support and even suggesting that they visit Iran for the upcoming Lunar New Year holiday (“safety is not an issue,” the embassy wrote).
  • “China has provided Iran with very important economic and political lifelines in recent years when U.S. sanctions have choked that country,”
27More

The Sad Trombone Debate: The RNC Throws in the Towel and Gets Ready to Roll Over for Tr... - 0 views

  • Death to the Internet
  • Yesterday Ben Thompson published a remarkable essay in which he more or less makes the case that the internet is a socially deleterious invention, that it will necessarily get more toxic, and that the best we can hope for is that it gets so bad, so fast, that everyone is shocked into turning away from it.
  • Ben writes the best and most insightful newsletter about technology and he has been, in all the years I’ve read him, a techno-optimist.
  • ...24 more annotations...
  • this is like if Russell Moore came out and said that, on the whole, Christianity turns out to be a bad thing. It’s that big of a deal.
  • Thompson’s case centers around constraints and supply, particularly as they apply to content creation.
  • In the pre-internet days, creating and distributing content was relatively expensive, which placed content publishers—be they newspapers, or TV stations, or movie studios—high on the value chain.
  • The internet reduced distribution costs to zero and this shifted value away from publishers and over to aggregators: Suddenly it was more important to aggregate an audience—a la Google and Facebook—than to be a content creator.
  • Audiences were valuable; content was commoditized.
  • What has alarmed Thompson is that AI has now reduced the cost of creating content to zero.
  • what does the world look like when the cost of both creating and distributing content is zero?
  • Hellscape
  • We’re headed to a place where content is artificially created and distributed in such a way as to be tailored to a given user’s preferences. Which will be the equivalent of living in a hall of mirrors.
  • What does that mean for news? Nothing good.
  • It doesn’t really make sense to talk about “news media” because there are fundamental differences between publication models that are driven by scale.
  • So the challenges the New York Times face will be different than the challenges that NPR or your local paper face.
  • Two big takeaways:
  • (1) Ad-supported publications will not survive
  • Zero-cost for content creation combined with zero-cost distribution means an infinite supply of content. The more content you have, the more ad space exists—the lower ad prices go.
  • Actually, some ad-supported publications will survive. They just won’t be news. What will survive will be content mills that exist to serve ads specifically matched to targeted audiences.
  • (2) Size is determinative.
  • The New York Times has a moat by dint of its size. It will see the utility of its soft “news” sections decline in value, because AI is going to be better at creating cooking and style content than breaking hard news. But still, the NYT will be okay because it has pivoted hard into being a subscription-based service over the last decade.
  • At the other end of the spectrum, independent journalists should be okay: a lone reporter running a focused Substack needs only four digits’ worth of subscribers to sustain the work.
  • But everything in between? That’s a crapshoot.
  • Technology writers sometimes talk about the contrast between “builders” and “conservers” — roughly speaking, between those who are most animated by what we stand to gain from technology and those animated by what we stand to lose.
  • in our moment the builder and conserver types are proving quite mercurial. On issues ranging from Big Tech to medicine, human enhancement to technologies of governance, the politics of technology are in upheaval.
  • Dispositions are supposed to be basically fixed. So who would have thought that deep blue cities that yesterday were hotbeds of vaccine skepticism would today become pioneers of vaccine passports? Or that outlets that yesterday reported on science and tech developments in reverent tones would today make it their mission to unmask “tech bros”?
  • One way to understand this churn is that the builder and the conserver types each speak to real, contrasting features within human nature. Another way is that these types each pick out real, contrasting features of technology. Focusing strictly on one set of features or the other eventually becomes unstable, forcing the other back into view.
17More

'Meta-Content' Is Taking Over the Internet - The Atlantic - 0 views

  • Jenn, however, has complicated things by adding an unexpected topic to her repertoire: the dangers of social media. She recently spoke about disengaging from it for her well-being; she also posted an Instagram Story about the risks of ChatGPT
  • and, in none other than a YouTube video, recommended Neil Postman’s Amusing Ourselves to Death, a seminal piece of media critique from 1985 that denounces television’s reduction of life to entertainment.
  • (Her other book recommendations included Stolen Focus, by Johann Hari, and Recapture the Rapture, by Jamie Wheal.)
  • ...14 more annotations...
  • Social-media platforms are “preying on your insecurities; they’re preying on your temptations,” Jenn explained to me in an interview that shifted our parasocial connection, at least for an hour, to a mere relationship. “And, you know, I do play a role in this.” Jenn makes money through aspirational advertising, after all—a familiar part of any influencer’s job.
  • She’s pro–parasocial relationships, she explains to the camera, but only if we remain aware that we’re in one. “This relationship does not replace existing friendships, existing relationships,” she emphasizes. “This is all supplementary. Like, it should be in addition to your life, not a replacement.” I sat there watching her talk about parasocial relationships while absorbing the irony of being in one with her.
  • The open acknowledgment of social media’s inner workings, with content creators exposing the foundations of their content within the content itself, is what Alice Marwick, an associate communications professor at the University of North Carolina at Chapel Hill, described to me as “meta-content.”
  • Meta-content can be overt, such as the vlogger Casey Neistat wondering, in a vlog, if vlogging your life prevents you from being fully present in it;
  • But meta-content can also be subtle: a vlogger walking across the frame before running back to get the camera. Or influencers vlogging themselves editing the very video you’re watching, in a moment of space-time distortion.
  • Viewers don’t seem to care. We keep watching, fully accepting the performance. Perhaps that’s because the rise of meta-content promises a way to grasp authenticity by acknowledging artifice; especially in a moment when artifice is easier to create than ever before, audiences want to know what’s “real” and what isn’t.
  • “The idea of a space where you can trust no sources, there’s no place to sort of land, everything is put into question, is a very unsettling, unsatisfying way to live.
  • So we continue to search for, as Murray observes, the “agreed-upon things, our basic understandings of what’s real, what’s true.” But when the content we watch becomes self-aware and even self-critical, it raises the question of whether we can truly escape the machinations of social media. Maybe when we stare directly into the abyss, we begin to enjoy its company.
  • “The difference between BeReal and the social-media giants isn’t the former’s relationship to truth but the size and scale of its deceptions.” BeReal users still angle their camera and wait to take their daily photo at an aesthetic time of day. The snapshots merely remind us how impossible it is to stop performing online.
  • Jenn’s concern over the future of the internet stems, in part, from motherhood. She recently had a son, Lennon (whose first birthday party I watched on YouTube), and worries about the digital world he’s going to inherit.
  • Back in the age of MySpace, she had her own internet friends and would sneak out to parking lots at 1 a.m. to meet them in real life: “I think this was when technology was really used as a tool to connect us.” Now, she explained, it’s beginning to ensnare us. Posting content online is no longer a means to an end so much as the end itself.
  • We used to view influencers’ lives as aspirational, a reality that we could reach toward. Now both sides acknowledge that they’re part of a perfect product that the viewer understands is unattainable and the influencer acknowledges is not fully real.
  • “I forgot to say this to her in the interview, but I truly think that my videos are less about me and more of a reflection of where you are currently … You are kind of reflecting on your own life and seeing what resonates [with] you, and you’re discarding what doesn’t. And I think that’s what’s beautiful about it.”
  • meta-content is fundamentally a compromise. Recognizing the delusion of the internet doesn’t alter our course within it so much as remind us how trapped we truly are—and how we wouldn’t have it any other way.
10More

Erasing History in the Internet Era - NYTimes.com - 1 views

  • Lorraine Martin, a nurse in Greenwich, was arrested in 2010 with her two grown sons when police raided her home and found a small stash of marijuana, scales and plastic bags. The case against her was tossed out when she agreed to take some drug classes, and the official record was automatically purged. It was, the law seemed to assure her, as if it had never happened.
  • Defamation is the publication of information that is both damaging and false. The arrest story was obviously true when it was first published. But Connecticut’s erasure law has already established that truth can be fungible. Martin, her suit says, was “deemed never to have been arrested.” And therefore the news story had metamorphosed into a falsehood.
  • They debate the difference between “historical fact” and “legal fact.” They dispute whether something that was true when it happened can become not just private but actually untrue, so untrue you can swear an oath that it never happened and, in the eyes of the law, you’ll be telling the truth.
  • ...7 more annotations...
  • The Connecticut case is just one manifestation of an anxious backlash against the invasive power of the Internet, a world of Big Data and ever more powerful search engines, in which it seems almost everything is permanently recorded and accessible to almost anyone — potential employers, landlords, dates, predators
  • In Europe, where press freedoms are less sacred and the right to privacy is more ensconced, the idea has taken hold that individuals have a “right to be forgotten,” and those who want their online particulars expunged tend to have the government on their side. In Germany or Spain, Lorraine Martin might have a winning case.
  • Google’s latest transparency report shows a sharp rise in requests from governments and courts to take down potentially damaging material.
  • The Times’s policy is not to censor history, because it’s history. The paper will update an arrest story if presented with evidence of an acquittal or dismissal, completing the story but not deleting the story.
  • Owen Tripp, a co-founder of Reputation.com, which has made a business out of helping clients manage their digital profile, advocated a “right to be forgotten” in a YouTube video. Tripp said everyone is entitled to a bit of space to grow up, to experiment, to make mistakes.
  • “This is not just a privacy problem,” said Viktor Mayer-Schönberger, a professor at the Oxford Internet Institute, and author of “Delete: The Virtue of Forgetting in the Digital Age.” “If we are continually reminded about people’s mistakes, we are not able to judge them for who they are in the present. We need some way to put a speed-brake on the omnipresence of the past.”
  • would like to see search engine companies — the parties that benefit the most financially from amassing our information — offer the kind of reputation-protecting tools that are now available only to those who can afford paid services like those of Reputation.com. Google, he points out, already takes down five million items a week because of claims that they violate copyrights. Why shouldn’t we expect Google to give users an option — and a simple process — to have news stories about them down-ranked or omitted from future search results? Good question. What’s so sacred about a search algorithm, anyway?
16More

How Facebook Warps Our Worlds - The New York Times - 0 views

  • THOSE who’ve been raising alarms about Facebook are right: Almost every minute that we spend on our smartphones and tablets and laptops, thumbing through favorite websites and scrolling through personalized feeds, we’re pointed toward foregone conclusions. We’re pressured to conform
  • We’re the real culprits. When it comes to elevating one perspective above all others and herding people into culturally and ideologically inflexible tribes, nothing that Facebook does to us comes close to what we do to ourselves.
  • I’m talking about how we use social media in particular and the Internet in general — and how we let them use us. They’re not so much agents as accomplices, new tools for ancient impulses, part of “a long sequence of technological innovations that enable us to do what we want
  • ...13 more annotations...
  • “And one of the things we want is to spend more time with people who think like us and less with people who are different,” Haidt added. “The Facebook effect isn’t trivial. But it’s catalyzing or amplifying a tendency that was already there.”
  • prevalent for many users are the posts we see from friends and from other people and groups we follow on the network, and this information is utterly contingent on choices we ourselves make
  • The Internet isn’t rigged to give us right or left, conservative or liberal — at least not until we rig it that way. It’s designed to give us more of the same, whatever that same is
  • So it goes with the fiction we read, the movies we watch, the music we listen to and, scarily, the ideas we subscribe to. They’re not challenged. They’re validated and reinforced.
  • this colors our days, or rather bleeds them of color, reducing them to a single hue.
  • Facebook, along with other social media, definitely conspires in this. Haidt noted that it often discourages dissent within a cluster of friends by accelerating shaming. He pointed to the enforced political correctness among students at many colleges.
  • Carnival barkers, conspiracy theories, willful bias and nasty partisanship aren’t anything new, and they haven’t reached unprecedented heights today. But what’s remarkable and sort of heartbreaking is the way they’re fed by what should be strides in our ability to educate ourselves.
  • The proliferation of cable television networks and growth of the Internet promised to expand our worlds, not shrink them. Instead they’ve enhanced the speed and thoroughness with which we retreat into enclaves of the like-minded.
  • there’s no argument that in an era that teems with choice, brims with niche marketing and exalts individualism to the extent that ours does, we’re sorting ourselves with a chillingly ruthless efficiency. We’ve surrendered universal points of reference. We’ve lost common ground.
  • Marc Dunkelman, adding that it also makes it easier for us to avoid “face-to-face interactions with diverse ideas.” He touched on this in an incisive 2014 book, “The Vanishing Neighbor,” which belongs with Haidt’s work and with “Bowling Alone,” “Coming Apart” and “The Fractured Republic” in the literature of modern American fragmentation, a booming genre all its own.
  • We’re less committed to, and trustful of, large institutions than we were at times in the past. We question their wisdom and substitute it with the groupthink of micro-communities, many of which we’ve formed online, and their sensibilities can be more peculiar and unforgiving.
  • We construct precisely contoured echo chambers of affirmation that turn conviction into zeal, passion into fury, disagreements with the other side into the demonization of it
  • It’s not about some sorcerer’s algorithm. It’s about a tribalism that has existed for as long as humankind has and is now rooted in the fertile soil of the Internet, which is coaxing it toward a full and insidious flower
11More

6 degrees of separation is too much - Facebook says we're all 3.5 degrees apart - Vox - 0 views

  • A well-known theory holds that most people, at least in the US and perhaps in the world, are six degrees of separation away from each other. Pick a random stranger anywhere in the country, the theory goes, and chances are you can build a chain of acquaintances between the two of you in no more than six hops.
  • The idea of "six degrees of separation" rests on a scientific foundation that's dubious at best. But Facebook, because its users give it access to possibly the richest data set ever on how 1.6 billion people know and interact with each other, set out to prove it with a statistical algorithm. (A toy sketch of what this average distance measures appears after this item's highlights.)
  • The average Facebook user is three and a half degrees of separation away from every other user, and the social network's post tells you your own distance from everyone else on the site.
  • ...8 more annotations...
  • Mark Zuckerberg is 3.17 degrees of separation from all Facebook users.
  • The typical Facebook user has 155 friends, but only describes 50 of them as friends in real life, according to a 2014 study from the Pew Research Center. Thirty-five percent of people have Facebook friends they've never met in person.
  • The original "six degrees of separation" experiment required people to know each other fairly well: They had to be on a first-name basis, at a time when society was slightly more formal, in order for the connection to count.
  • About one-third of the documents eventually reached the stockbroker, after a chain of, on average, six people — the six degrees of separation. It's a small world after all, Milgram concluded.
  • She found instead that Milgram's conclusions rested on a shaky foundation, and that class and race divided Americans more than his original paper admitted. The majority of Milgram's letters didn't make it to the Boston stockbroker. And further experiments that factored in class and race suggested that making connections across those barriers was even more challenging
  • While middle- and high-income people were able to find their targets regardless of their household income, low-income people could only connect with other low-income families, Kleinfeld wrote in a 2002 article in the journal Society.
  • The correct interpretation of Milgram, she argued, was not the optimistic conclusion that we're all only a few degrees of separation away. Instead, it's that there are still barriers that are insurmountable.
  • But the real divide is between people who are on Facebook and those who aren't on the internet at all. Facebook users make up about 62 percent of American adults but 72 percent of all internet users. Americans without the internet are disproportionately older, rural, and less educated. As Facebook users get closer, they might be becoming ever more isolated.
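A note on the numbers in the item above: the Vox piece says Facebook arrived at its 3.5-degree figure with a statistical algorithm, but the excerpt does not show what "average degrees of separation" actually measures. The Python sketch below is only a toy illustration of the underlying quantity, the average length of the shortest friend-of-a-friend chain over every pair of people, computed with exact breadth-first search on a made-up six-person graph. All names and connections are invented, and Facebook's real computation approximates these distances statistically rather than searching them exhaustively, which is what makes the calculation feasible for 1.6 billion users.

    from collections import deque
    from itertools import combinations

    # Toy friendship graph; every name and connection here is invented.
    friends = {
        "ana":  ["ben", "caro"],
        "ben":  ["ana", "dev"],
        "caro": ["ana", "dev", "eli"],
        "dev":  ["ben", "caro", "fay"],
        "eli":  ["caro"],
        "fay":  ["dev"],
    }

    def degrees_of_separation(graph, start, goal):
        """Length of the shortest chain of acquaintances between two people (BFS)."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            person, hops = queue.popleft()
            if person == goal:
                return hops
            for friend in graph[person]:
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
        return None  # the two people are not connected at all

    # Average the shortest-path length over every pair of people in the graph.
    pairs = list(combinations(friends, 2))
    average = sum(degrees_of_separation(friends, a, b) for a, b in pairs) / len(pairs)
    print(f"average degrees of separation: {average:.2f}")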
10More

How to Be Liked by Everyone Online - NYTimes.com - 1 views

  • The Internet — once again — has upended social and psychological norms. Linguistically speaking, what was formerly undesirable or just unpleasant is now highly sought after
  • To be “linked,” in a previous life, suggested something illicit — an affair or a possible crime associating His Name with Yours. But in Internet World, linking is a professional asset.
  • applying the word “disrupt” to any behavior in people under the age of 18 is bound to involve bodily damage, psychic distress or — later on, perhaps — the buying and selling of hard drugs.
  • ...7 more annotations...
  • “reversification” to describe the phenomenon. “I mean by it a process in which words come, through a process of evolution and innovation, to have a meaning that is opposite to, or at least very different from, their initial sense,”
  • the word “enable” had a dubious cast in the common parlance of therapy and gossip: an enabler was someone who handed the broody tippler a fresh cocktail; to enable was to unleash the codependent. Now it’s a technological upgrade
  • To have something liked online is not as great as having something actually liked. It doesn’t even necessarily mean someone enjoyed it — it might simply mean, “Got it,” or more wanly, “This provoked some kind of feeling, however minor.”
  • To tag someone online is a far nastier enterprise. Anyone can resurface disparaging photographic evidence of youthful folly and post it on a social network, “tagging” it with the unsuspecting’s name.
  • Most people think long and hard about their favorite movie, novel, people and even color. Online, favorites are not so special. To “favorite” (now a verb) something on Twitter is to say, in effect, “I saw this thing and liked it O.K., but not enough to retweet it.” Or a tepid “I see you wrote something about me and I will acknowledge that by favoriting. But expect nothing more.”
  • Even for adults, sharing has historically been considered a commendable activity, no matter the tangled motivations. Sharing in Internet parlance? Pure egotism. Check out my 6-year-old on the viola. Don’t you wish you were this attractive at 41?
  • Being a star in real life signifies tremendous professional success or, at the very least, celebrity; to “star” something on Gmail means you need to write back.
6More

Protesters Gather to Support the Press, From Fox News to The New York Times - The New Y... - 0 views

  • was in response to President Trump’s decision on Friday to bar several news organizations from a White House briefing, including The Times.
  • “When The New York Times is under attack, what do we do?” Michael Zorek, a stay-at-home father from the Upper West Side of Manhattan screamed to the crowd. “Stand up! Fight back!” The group boomed back, responding with the mantra as Mr. Zorek shouted again, listing names of news organizations from BuzzFeed to the Public Broadcasting Service.
  • “When you look at history, the first thing dictators do is shut down the press.”
  • ...2 more annotations...
  • The president has used Twitter to declare the press “the enemy of the American people.” Mr. Baquet disagreed, saying, “I don’t look at us as the enemy of the White House. I look at us as people who are aggressively covering the White House.”
  • It read: “Truth. It’s more important now than ever.”
  •  
    Freedom of the press is one of the main principles of democracy. Although fake news and alternative facts fill social media, it would be irrational and inefficient to take out the press completely. Getting rid of the press does not solve the problem; it only escapes it. I think the press is indeed flawed today, but what it really needs is regulation and a certain level of government involvement. I also find it very ironic that the president is using Twitter to declare the press "the enemy of the American people," because the internet is filled with more flawed information than the press. People have to think before they publish something in the press, but on the internet, people don't usually consider their responsibility for their words. --Sissi (3/1/2017)
4More

Instagram introduces two-factor authentication | Technology | The Guardian - 0 views

  • Instagram has become the latest social network to enable two-factor authentication, a valuable security feature that protects accounts from being compromised due to password reuse or phishing.
  • Instagram joins Facebook, Twitter, Google and many others in offering some form of two-factor verification.
  • Confusingly for users, all the methods are slightly different: Twitter requires logging in to be approved by opening the app on a trusted device, and Google uses an open standard to link up with its authenticator app, which generates new six-digit codes every 30 seconds. (A minimal sketch of such a time-based code generator follows this item.)
  •  
    Internet security has been a big problem since the development of internet technology, and much of the worry concerns the safety of accounts. People put more and more things online, so security risk becomes a real issue. For example, many online payment apps let you pay without handling actual money, simply charging your bank account automatically. Although it is very convenient to have everything online, it is also unstable and risky. --Sissi (3/25/2017)
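The "open standard" in the Guardian excerpt above is presumably TOTP, the time-based one-time password scheme standardized in RFC 6238, in which the app and the service share a secret and each independently derives the same six-digit code from the current 30-second time window. The Python sketch below is a minimal illustration of that scheme under that assumption, not Instagram's or Google's actual implementation; the base32 secret is an example value, and a real setup would import its secret from the QR code shown when enabling two-factor authentication.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Time-based one-time password (RFC 6238): phone and server compute the same code."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period               # which 30-second window we are in
        message = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(key, message, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    # Example secret for illustration only; both sides share this value and therefore
    # produce matching six-digit codes for the same time window.
    print(totp("JBSWY3DPEHPK3PXP"))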