
Home/ TOK Friends/ Group items tagged google


Javier E

Clear Your Google Web History - Wired How-To Wiki - 0 views

  • On March 1st, 2012, Google will implement a new, unified privacy policy. The new policy is retroactive, meaning it will affect any data Google has collected on you prior to that date, as well as any data it gathers afterward.
  • Basically, under the new policy, your Google Web History (all of your searches and the sites you clicked through to) can be combined with other data Google has gathered about you from other services — Gmail, Google+, etc.
  • If you'd like to keep your personal data a good distance away from Google, you'll need to delete your existing search history and prevent Google from using that history in the future.
  • in this case, however, a little time spent changing your settings can provide invaluable peace of mind knowing that Google can't exploit your personal tendencies for its own purposes.
  • First sign into your Google account and head to the history page. Click the button labeled Remove all Web History. Then click Okay to confirm. Note that this also pauses your web history going forward, and Google won't start listening to your history again unless you let it.
  • This will not stop Google from gathering data when you search. To do that you would need to block Google cookies completely. However, while it will still gather the data, Google will not use it to serve targeted ads or do anything other than use it for internal purposes. Also, with Web History disabled, your data is at least partially anonymized after 18 months (if you leave Web History on, Google will keep your search records indefinitely).
  • On the negative side, bear in mind that while this won't prevent Google from making search suggestions, it will prevent you from getting personalized suggestions based on your previous searches.
Javier E

Google's Relationship With Facts Is Getting Wobblier - The Atlantic - 0 views

  • Misinformation or even disinformation in search results was already a problem before generative AI. Back in 2017, The Outline noted that a snippet once confidently asserted that Barack Obama was the king of America.
  • This is what experts have worried about since ChatGPT first launched: false information confidently presented as fact, without any indication that it could be totally wrong. The problem is “the way things are presented to the user, which is Here’s the answer,” Chirag Shah, a professor of information and computer science at the University of Washington, told me. “You don’t need to follow the sources. We’re just going to give you the snippet that would answer your question. But what if that snippet is taken out of context?”
  • Responding to the notion that Google is incentivized to prevent users from navigating away, he added that “we have no desire to keep people on Google.”
  • Pandu Nayak, a vice president for search who leads the company’s search-quality teams, told me that snippets are designed to be helpful to the user, to surface relevant and high-caliber results. He argued that they are “usually an invitation to learn more” about a subject
  • “It’s a strange world where these massive companies think they’re just going to slap this generative slop at the top of search results and expect that they’re going to maintain quality of the experience,” Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern University, told me. “I’ve caught myself starting to read the generative results, and then I stop myself halfway through. I’m like, Wait, Nick. You can’t trust this.”
  • Nayak said the team focuses on the bigger underlying problem, and whether its algorithm can be trained to address it.
  • If Nayak is right, and people do still follow links even when presented with a snippet, anyone who wants to gain clicks or money through search has an incentive to capitalize on that—perhaps even by flooding the zone with AI-written content.
  • Nayak told me that Google plans to fight AI-generated spam as aggressively as it fights regular spam, and claimed that the company keeps about 99 percent of spam out of search results.
  • The result is a world that feels more confused, not less, as a result of new technology.
  • The Kenya result still pops up on Google, despite viral posts about it. This is a strategic choice, not an error. If a snippet violates Google policy (for example, if it includes hate speech) the company manually intervenes and suppresses it, Nayak said. However, if the snippet is untrue but doesn’t violate any policy or cause harm, the company will not intervene.
  • experts I spoke with had several ideas for how tech companies might mitigate the potential harms of relying on AI in search
  • For starters, tech companies could become more transparent about generative AI. Diakopoulos suggested that they could publish information about the quality of facts provided when people ask questions about important topics
  • They can use a coding technique known as “retrieval-augmented generation,” or RAG, which instructs the bot to cross-check its answer with what is published elsewhere, essentially helping it self-fact-check (a minimal sketch of this pattern appears after this list). (A spokesperson for Google said the company uses similar techniques to improve its output.) They could open up their tools to researchers to stress-test them. Or they could add more human oversight to their outputs, maybe investing in fact-checking efforts.
  • Fact-checking, however, is a fraught proposition. In January, Google’s parent company, Alphabet, laid off roughly 6 percent of its workers, and last month, the company cut at least 40 jobs in its Google News division. This is the team that, in the past, has worked with professional fact-checking organizations to add fact-checks into search results
  • Alex Heath, at The Verge, reported that top leaders were among those laid off, and Google declined to give me more information. It certainly suggests that Google is not investing more in its fact-checking partnerships as it builds its generative-AI tool.
  • Nayak acknowledged how daunting a task human-based fact-checking is for a platform of Google’s extraordinary scale. Fifteen percent of daily searches are ones the search engine hasn’t seen before, Nayak told me. “With this kind of scale and this kind of novelty, there’s no sense in which we can manually curate results.”
  • Creating an infinite, largely automated, and still accurate encyclopedia seems impossible. And yet that seems to be the strategic direction Google is taking.
  • A representative for Google told me that this was an example of a “false premise” search, a type that is known to trip up the algorithm. If she were trying to date me, she argued, she wouldn’t just stop at the AI-generated response given by the search engine, but would click the link to fact-check it.
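
A minimal, self-contained Python sketch of the retrieval-augmented generation pattern described in the list above. The tiny corpus, the word-overlap retriever, and the support check are invented stand-ins for illustration; this is not Google's system or any vendor's API.

```python
import re

# Toy corpus standing in for "what is published elsewhere"; invented for illustration.
CORPUS = [
    "Barack Obama served as the 44th president of the United States.",
    "The United States has no king; it is a federal republic.",
    "Kenya held a presidential election in 2022.",
]

def words(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank passages by crude word overlap with the query (stand-in for a real retriever)."""
    q = words(query)
    return sorted(corpus, key=lambda p: len(q & words(p)), reverse=True)[:k]

def supported(claim, passages):
    """Rough support check: most of the claim's content words appear in a retrieved passage."""
    content = {w for w in words(claim) if len(w) > 3}
    return any(len(content & words(p)) >= 0.75 * len(content) for p in passages)

def cross_check(query, draft_claims):
    """RAG-style step: keep only draft claims backed by retrieved passages."""
    passages = retrieve(query, CORPUS)
    return [c for c in draft_claims if supported(c, passages)]

if __name__ == "__main__":
    draft = [
        "Barack Obama was the king of America.",  # unsupported by the corpus, gets dropped
        "Barack Obama served as president of the United States.",
    ]
    print(cross_check("who was Barack Obama", draft))
```

Running the sketch keeps only the claim the corpus supports and drops the “king of America” claim, which is the self-fact-checking behavior RAG is meant to provide.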
Javier E

Google Devising Radical Search Changes to Beat Back AI Rivals - The New York Times - 0 views

  • Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.
  • Google’s reaction to the Samsung threat was “panic,” according to internal messages reviewed by The New York Times. An estimated $3 billion in annual revenue was at stake with the Samsung contract. An additional $20 billion is tied to a similar Apple contract that will be up for renewal this year.
  • A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.
  • The Samsung threat represented the first potential crack in Google’s seemingly impregnable search business, which was worth $162 billion last year.
  • Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.
  • Google has been worried about A.I.-powered competitors since OpenAI, a San Francisco start-up that is working with Microsoft, demonstrated a chatbot called ChatGPT in November. About two weeks later, Google created a task force in its search division to start building A.I. products,
  • Google has been doing A.I. research for years. Its DeepMind lab in London is considered one of the best A.I. research centers in the world, and the company has been a pioneer with A.I. projects, such as self-driving cars and the so-called large language models that are used in the development of chatbots. In recent years, Google has used large language models to improve the quality of its search results, but held off on fully adopting A.I. because it has been prone to generating false and biased statements.
  • Now the priority is winning control of the industry’s next big thing. Last month, Google released its own chatbot, Bard, but the technology received mixed reviews.
  • The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information. It would also be more conversational — a bit like chatting with a helpful person.
  • Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction, such as buying shoes or booking a flight, for example, would still feature ads on their results pages.
  • Last week, Google invited some employees to test Magi’s features, and it has encouraged them to ask the search engine follow-up questions to judge its ability to hold a conversation. Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.
  • The company plans to initially release the features to a maximum of one million people. That number should progressively increase to 30 million by the end of the year. The features will be available exclusively in the United States.
  • Google has also explored efforts to let people use Google Earth’s mapping technology with help from A.I. and search for music through a conversation with a chatbot
  • A tool called GIFI would use A.I. to generate images in Google Image results.
  • Tivoli Tutor, would teach users a new language through open-ended A.I. text conversations.
  • Yet another product, Searchalong, would let users ask a chatbot questions while surfing the web through Google’s Chrome browser. People might ask the chatbot for activities near an Airbnb rental, for example, and the A.I. would scan the page and the rest of the internet for a response.
  • “If we are the leading search engine and this is a new attribute, a new feature, a new characteristic of search engines, we want to make sure that we’re in this race as well,”
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income (a toy sketch of this pay-per-click incentive appears after this list). And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the court
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
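
As a rough illustration of the pay-per-click incentive flagged earlier in this list, the Python sketch below ranks hypothetical ads by expected revenue per impression, i.e. bid times predicted click-through rate. The advertisers, bids, and click-through rates are invented, and real ad auctions (Google's included) also involve quality scores and second-price-style pricing that this toy omits.

```python
# Toy sketch of the pay-per-click incentive: expected revenue per impression is
# bid * predicted click-through rate, so small gains in CTR prediction raise income.
# All numbers are invented for illustration; this is not Google's actual auction.

ads = [
    {"advertiser": "shoes-r-us",     "bid": 1.20, "predicted_ctr": 0.030},
    {"advertiser": "budget-flights", "bid": 2.00, "predicted_ctr": 0.012},
    {"advertiser": "local-cafe",     "bid": 0.50, "predicted_ctr": 0.080},
]

def expected_revenue(ad):
    """Expected payment per impression: the advertiser pays the bid only if the ad is clicked."""
    return ad["bid"] * ad["predicted_ctr"]

ranked = sorted(ads, key=expected_revenue, reverse=True)
for ad in ranked:
    print(f'{ad["advertiser"]:>14}  expected revenue per impression = ${expected_revenue(ad):.4f}')
```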
Javier E

Welcome to Google Island | Gadget Lab | Wired.com - 0 views

  • As soon as you hit Google’s territorial waters, you came under our jurisdiction, our terms of service. Our laws–or lack thereof–apply here. By boarding our self-driving boat you granted us the right to all feedback you provide during your journey. This includes the chemical composition of your sweat.
  • Unified logins let us get to know our audience in ways we never could before. They gave us their locations so that we might better tell them if it was raining outside. They told us where they lived and where they wanted to go so that we could deliver a more immersive map that better anticipated what they wanted to do–it let us very literally tell people what they should do today. As people began to see how very useful Google Now was, they began to give us even more information. They told us to dig through their e-mail for their boarding passes–Imagine if you had to find it on your own!–they finally gave us permission to track and store their search and web history so that we could give them better and better Cards. And then there is the imaging. They gave us tens of thousands of pictures of themselves so that we could pick the best ones–yes we appealed to their vanity to do this: We’ll make you look better and assure you present a smiling, wrinkle-free face to the world–but it allowed us to also stitch together three-dimensional representations. Hangout chats let us know who everybody’s friends were, and what they had to say to them. Verbal searches gave us our users’ voices. These were intermediary steps. But it let us know where people were at all times, what they thought, what they said, and of course how they looked. Sure, Google Now could tell you what to do.
  • “We learned so much about regulation with Google Health. It turns out, the government has rules about health records, and that people care about these rules for some reason. So we began looking around for ways to avoid regulation. For example, government regulation meant it was much easier to experiment with white space in Kenya than in the United States. So we started thinking: What if the entire world looked more like Kenya? Or, even better, Somalia? Places where there are no laws. We haven’t adapted mechanisms to deal with some of our old institutions like the law. We aren’t keeping up with the rate of change we caused through technology. If you look at the laws we have, they’re very old. A law can’t be right if it’s 50 years old. Like, it’s before the Internet
  • “I don’t want this,” I stammered, removing the glasses. “Sure you do, you just aren’t aware of that yet. For many years now, we’ve looked at everything you’ve looked at online. Everything. We know what you want, and when you want it, down to the time of day. Why wait for you to request it? And in fact, why wait for you to discover that you even want to request it? We can just serve it to you.”
  • “These are Google Spiders. They’ve crawled the entire island, and now we’re ready to release them globally. We’re sending them everywhere, so that we can make a 3D representation of the entire planet, and everyone on it. We aren’t just going to recreate the planet, though–we’re going to make it better.” “Governments are too focused on democracy and rule of law. On Google Island, we’ve found those things to be distractions. If democracy worked so well, if a majority public opinion made something right, we would still have Jim Crow laws and Google Reader. We believe we can fix the world’s problems with better math. We can tear down the old and rebuild it with the new. Imagine Minecraft. Now imagine it photorealistic, and now imagine yourself living there, or at least, your Google Being living there. We already have the information. All we need is an invitation. This is the inevitable and logical end point of Google Island: a new Google Earth.”
Javier E

Google Alters Search to Handle More Complex Queries - NYTimes.com - 0 views

  • Google on Thursday announced one of the biggest changes to its search engine, a rewriting of its algorithm to handle more complex queries that affects 90 percent of all searches.
  • Google originally matched keywords in a search query to the same words on Web pages. Hummingbird is the culmination of a shift to understanding the meaning of phrases in a query and displaying Web pages that more accurately match that meaning (a toy contrast between the two approaches appears after this list)
  • “They said, ‘Let’s go back and basically replace the engine of a 1950s car,’ ” said Danny Sullivan, founding editor of Search Engine Land, an industry blog. “It’s fair to say the general public seemed not to have noticed that Google ripped out its engine while driving down the road and replaced it with something else.
  • The company made the changes, executives said, because Google users are asking increasingly long and complex questions and are searching Google more often on mobile phones with voice search.
  • The algorithm also builds on work Google has done to understand conversational language, like interpreting what pronouns in a search query refer to. Hummingbird extends that to all Web searches, not just results related to entities included in the Knowledge Graph. It tries to connect phrases and understand concepts in a long query.
  • The outcome is not a change in how Google searches the Web, but in the results that it shows. Unlike some of its other algorithm changes, including one that pushed down so-called content farms in search results, Hummingbird is unlikely to noticeably affect certain categories of Web businesses, Mr. Sullivan said. Instead, Google says it believes that users will see more precise results
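
To make the keyword-versus-meaning distinction above concrete, here is a toy Python contrast that assumes a small hand-written synonym table. Literal keyword matching scores both pages the same, while the crude "meaning" expansion prefers the pharmacy page; this illustrates the general idea only and is not the Hummingbird algorithm.

```python
# Toy contrast between literal keyword matching and a crude "meaning" match.
# The synonym table and pages are invented; this is not Google's Hummingbird algorithm.

SYNONYMS = {
    "buy": {"purchase"},
    "medicine": {"pharmacy", "drugstore"},
    "place": {"store", "shop"},
}

PAGES = {
    "pharmacy-page": "Find a pharmacy or drugstore near you and purchase prescriptions",
    "history-page": "A history of medicine in the nineteenth century",
}

def keyword_score(query, page):
    """Old-style matching: count query words that literally appear in the page."""
    page_words = set(page.lower().split())
    return sum(1 for w in query.lower().split() if w in page_words)

def meaning_score(query, page):
    """Crude 'meaning' matching: also credit pages containing a synonym of a query word."""
    page_words = set(page.lower().split())
    return sum(
        1
        for w in query.lower().split()
        if w in page_words or (SYNONYMS.get(w, set()) & page_words)
    )

query = "place to buy medicine near home"
for name, text in PAGES.items():
    print(f"{name}: keyword={keyword_score(query, text)}  meaning={meaning_score(query, text)}")
```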
Javier E

How Google Dominates Us by James Gleick | The New York Review of Books - 1 views

  • Most of the time Google does not actually have the answers. When people say, “I looked it up on Google,” they are committing a solecism. When they try to erase their embarrassing personal histories “on Google,” they are barking up the wrong tree. It is seldom right to say that anything is true “according to Google.” Google is the oracle of redirection. Go there for “hamadryad,” and it points you to Wikipedia. Or the Free Online Dictionary. Or the Official Hamadryad Web Site (it’s a rock band, too, wouldn’t you know). Google defines its mission as “to organize the world’s information,” not to possess it or accumulate it.
  • Then again, a substantial portion of the world’s printed books have now been copied onto the company’s servers, where they share space with millions of hours of video and detailed multilevel imagery of the entire globe, from satellites and from its squadrons of roving street-level cameras. Not to mention the great and growing trove of information Google possesses regarding the interests and behavior of, approximately, everyone.
  • When I say Google “possesses” all this information, that’s not the same as owning it. What it means to own information is very much in flux.
  • The essence of the Web is the linking of individual “pages” on websites, one to another. Every link represents a recommendation—a vote of interest, if not quality. So the algorithm assigns every page a rank, depending on how many other pages link to it. Furthermore, all links are not valued equally. A recommendation is worth more when it comes from a page that has a high rank itself. The math isn’t trivial—PageRank is a probability distribution, and the calculation is recursive, each page’s rank depending on the ranks of pages that depend…and so on (a minimal power-iteration sketch appears after this list). Page and Brin patented PageRank and published the details even before starting the company they called Google.
  • As they saw it from the first, their mission encompassed not just the Internet but all the world’s books and images, too.
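
The recursive computation described in the PageRank annotation above can be sketched in Python with power iteration. The four-page link graph is invented for illustration, and the damping factor of 0.85 is the conventional textbook value rather than anything specific to Google's production system.

```python
# Minimal power-iteration sketch of PageRank: each page's rank is a probability,
# and it depends recursively on the ranks of the pages that link to it.
# The link graph below is invented purely for illustration.

links = {          # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                   # start from a uniform distribution
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}    # "teleport" share
        for p, outgoing in links.items():
            if not outgoing:                             # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                share = damping * rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += share
        rank = new
    return rank

ranks = pagerank(links)
print(ranks)                      # the ranks sum to ~1.0
print(max(ranks, key=ranks.get))  # "C" ranks highest: most pages link to it
```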
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I technologies pose “profound risks to society and humanity.
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

How To Look Smart, Ctd - The Daily Dish | By Andrew Sullivan - 0 views

  • these questions tend to overlook the way IQ tests are designed. As a neuropsychologist who has administered hundreds of these measures, I can tell you that their structures reflect a deeply embedded bias toward intelligence as a function of reading skills
kushnerha

The Rise of Hate Search - The New York Times - 0 views

  • after the media first reported that at least one of the shooters had a Muslim-sounding name, a disturbing number of Californians had decided what they wanted to do with Muslims: kill them.
  • the rest of America searched for the phrase “kill Muslims” with about the same frequency that they searched for “martini recipe,” “migraine symptoms” and “Cowboys roster.”
  • People often have vicious thoughts. Sometimes they share them on Google. Do these thoughts matter? Yes. Using weekly data from 2004 to 2013, we found a direct correlation between anti-Muslim searches and anti-Muslim hate crimes (a toy illustration of this kind of weekly correlation appears after this list).
  • In 2014, according to the F.B.I., anti-Muslim hate crimes represented 16.3 percent of the total of 1,092 reported offenses. Anti-Semitism still led the way as a motive for hate crimes, at 58.2 percent.
  • Hate crimes may seem chaotic and unpredictable, a consequence of random neurons that happen to fire in the brains of a few angry young men. But we can explain some of the rise and fall of anti-Muslim hate crimes just based on what people are Googling about Muslims.
  • If our model is right, Islamophobia and thus anti-Muslim hate crimes are currently higher than at any time since the immediate aftermath of the Sept. 11 attacks.
  • How can these Google searches track Islamophobia so well? Who searches for “I hate Muslims” anyway? We often think of Google as a source from which we seek information directly, on topics like the weather, who won last night’s game or how to make apple pie. But sometimes we type our uncensored thoughts into Google, without much hope that Google will be able to help us. The search window can serve as a kind of confessional.
  • It is not just that hatred against Muslims is extremely high today. It’s that it’s exceptional compared with prejudice against every other group in the United States.
  • “If someone is willing to say ‘I hate them’ or ‘they disgust me,’ we know that those emotions are as good a predictor of behavior as actual intent,” said Susan Fiske, a social psychologist at Princeton
  • Google searches seem to suffer from selection bias: Instead of asking a random sample of Americans how they feel, you just get information from those who are motivated to search. But this restriction may actually help search data predict hate crimes.
  • “Google searches answer a different question: What do people excited enough by an issue to comment on it think and believe about it? The answer to this question, just because it is unrepresentative of the public as a whole, may be a better bet to predict hate crimes.”
  • While the vast majority of Muslim Americans won’t be victims of hate crimes, few escape the “constant sense of fear and paranoia” that they or their loved ones might be next, said Rana Ibrahem
  • What about the other side of the coin — compassion and understanding? Do they stand a chance against hate? Searches for information about Islam and Muslims did rise after the attacks in Paris and San Bernardino. Yet they rose far less than searches for hate did. “Who is Muhammad?” “what do Muslims believe?” and “what does the Quran say?” for instance, were no match for intolerance.
  • Google searches expressing moods, rather than looking for information, represent a tiny sample of everyone who is actually thinking those thoughts.
  • The search data also tells us that changes in Americans’ policy concerns have been dramatic. They happened, quite literally, within minutes of the terror attacks. Before the Paris attacks, 60 percent of Americans’ searches that took an obvious view of Syrian refugees saw them positively, asking how they could “help,” “volunteer” or “aid.” The other 40 percent were negative and mostly expressed skepticism about security. After Paris, however, the share of people opposed to refugees rose to 80 percent.
  • One idea might be to increase cultural integration. This is based on the “contact hypothesis”: If more Americans have Muslim neighbors, they will learn not to harbor irrational hate. We did not find support for this in the data — in fact, we found evidence for the opposite.
  • That’s evidence for the dominance of the “racial threat” hypothesis, which predicts that proximity breeds tension, not trust.
  • Another solution might be for leaders to talk about the importance of tolerance and the irrationality of hatred, as President Obama did in his Oval Office speech last Sunday night. He asked Americans to reject discrimination and religious tests for immigration. The reactions to his speech offer an excellent opportunity to see what works and what doesn’t work.
  • There was one line, however, that did trigger the type of response Mr. Obama might have wanted. He said, “Muslim Americans are our friends and our neighbors, our co-workers, our sports heroes and yes, they are our men and women in uniform, who are willing to die in defense of our country.”After this line, for the first time in more than a year, the top Googled noun after “Muslim” was not “terrorists,” “extremists” or “refugees.” It was “athletes,” followed by “soldiers.” And, in fact, “athletes” kept the top spot for a full day afterward.
  • On the whole, though, the response to the president’s speech shows that appealing to the better angels of an angry mob will most likely just backfire.
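
A toy version of the weekly correlation described at the top of this list might look like the following Python sketch. The weekly figures are invented placeholders, not the authors' data, and a correlation coefficient like this would not by itself establish causation.

```python
from math import sqrt

# Hypothetical weekly series, invented for illustration only (not the study's data).
weekly_search_index = [12, 15, 14, 40, 38, 22, 18, 55, 60, 30]  # e.g. hateful-phrase search volume
weekly_hate_crimes  = [ 2,  3,  2,  7,  6,  4,  3,  9, 11,  5]  # reported anti-Muslim incidents

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Pearson r = {pearson(weekly_search_index, weekly_hate_crimes):.2f}")
```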
Javier E

Opinion | You Are the Object of Facebook's Secret Extraction Operation - The New York T... - 0 views

  • Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data
  • Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.
  • These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law.
  • The result has been a hidden revolution in how information is produced, circulated and acted upon
  • The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit.
  • The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.
  • These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.
  • There is no way to escape the machine systems that surveil us
  • All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.
  • Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”?
  • Will we confront the fundamental but long ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?
  • Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.
  • By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.”
  • Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
  • Prediction was the first imperative that determined the second imperative: extraction.
  • Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors
  • User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.
  • When asked “What is Google?” the co-founder Larry Page laid it out in 2001,
  • “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”
  • Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data
  • Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.
  • As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.
  • This is the economic context in which disinformation wins
  • In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.
  • A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained
  • The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”
  • Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.
  • Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.
  • Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models.
  • In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.
  • As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information
  • Today all apps and software, no matter how benign they appear, are designed to maximize data collection.
  • Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic
  • The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.
  • Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.
  • With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.
  • The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub as ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.
  • Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement
  • Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.
  • Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance.
  • when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.
  • All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations.
  • We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications
  • The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.
  • Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth.
  • like global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.
  • Where does that leave us?
  • Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution.
  • instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.
  • We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes
  • This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces
  • Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.
  • Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.”
  • No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests
  • the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions:
  • How should we structure and govern information, connection and communication in a democratic digital century?
  • What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society?
  • What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?
  • The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.
Javier E

Wine-tasting: it's junk science | Life and style | The Observer - 0 views

  • Hodgson approached the organisers of the California State Fair wine competition, the oldest contest of its kind in North America, and proposed an experiment for their annual June tasting sessions. Each panel of four judges would be presented with their usual "flight" of samples to sniff, sip and slurp. But some wines would be presented to the panel three times, poured from the same bottle each time. The results would be compiled and analysed to see whether wine tasting really is scientific.
  • Results from the first four years of the experiment, published in the Journal of Wine Economics, showed a typical judge's scores varied by plus or minus four points over the three blind tastings. A wine deemed to be a good 90 would be rated as an acceptable 86 by the same judge minutes later and then an excellent 94. (A rough recreation of this repeat-tasting check is sketched after these annotations.)
  • ...9 more annotations...
  • Hodgson's findings have stunned the wine industry. Over the years he has shown again and again that even trained, professional palates are terrible at judging wine. "The results are disturbing," says Hodgson from the Fieldbrook Winery in Humboldt County, described by its owner as a rural paradise. "Only about 10% of judges are consistent and those judges who were consistent one year were ordinary the next year." "Chance has a great deal to do with the awards that wines win."
  • French academic Frédéric Brochet tested the effect of labels in 2001. He presented the same Bordeaux superior wine to 57 volunteers a week apart and in two different bottles – one for a table wine, the other for a grand cru. The tasters were fooled. When tasting a supposedly superior wine, their language was more positive – describing it as complex, balanced, long and woody. When the same wine was presented as plonk, the critics were more likely to use negatives such as weak, light and flat.
  • In 2011 Professor Richard Wiseman, a psychologist (and former professional magician) at Hertfordshire University, invited 578 people to comment on a range of red and white wines, varying from £3.49 for a claret to £30 for champagne, and tasted blind. People could tell the difference between wines under £5 and those above £10 only 53% of the time for whites and only 47% of the time for reds. Overall they would have been just as successful flipping a coin to guess.
  • why are ordinary drinkers and the experts so poor at tasting blind? Part of the answer lies in the sheer complexity of wine. For a drink made by fermenting fruit juice, wine is a remarkably sophisticated chemical cocktail. Dr Bryce Rankine, an Australian wine scientist, identified 27 distinct organic acids in wine, 23 varieties of alcohol in addition to the common ethanol, more than 80 esters and aldehydes, 16 sugars, plus a long list of assorted vitamins and minerals that wouldn't look out of place on the ingredients list of a cereal pack. There are even harmless traces of lead and arsenic that come from the soil.
  • "People underestimate how clever the olfactory system is at detecting aromas and our brain is at interpreting them," says Hutchinson. "The olfactory system has the complexity in terms of its protein receptors to detect all the different aromas, but the brain response isn't always up to it. But I'm a believer that everyone has the same equipment and it comes down to learning how to interpret it." Within eight tastings, most people can learn to detect and name a reasonable range of aromas in wine
  • People struggle with assessing wine because the brain's interpretation of aroma and bouquet is based on far more than the chemicals found in the drink. Temperature plays a big part. Volatiles in wine are more active when wine is warmer. Serve a New World chardonnay too cold and you'll only taste the overpowering oak. Serve a red too warm and the heady boozy qualities will be overpowering.
  • Colour affects our perceptions too. In 2001 Frédéric Brochet of the University of Bordeaux asked 54 wine experts to test two glasses of wine – one red, one white. Using the typical language of tasters, the panel described the red as "jammy" and commented on its crushed red fruit. The critics failed to spot that both wines were from the same bottle. The only difference was that one had been coloured red with a flavourless dye
  • Other environmental factors play a role. A judge's palate is affected by what she or he had earlier, the time of day, their tiredness, their health – even the weather.
  • Robert Hodgson is determined to improve the quality of judging. He has developed a test that will determine whether a judge's assessment of a blind-tasted glass in a medal competition is better than chance. The research will be presented at a conference in Cape Town this year. But the early findings are not promising."So far I've yet to find someone who passes," he says.
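The repeat-tasting analysis described in these annotations lends itself to a quick illustration. This is a minimal sketch, with invented scores on the 100-point scale (none of the numbers come from the Journal of Wine Economics paper): each judge rates the same wine three times blind, and we check how far the repeat scores spread.

```python
# Toy recreation of the repeat-tasting consistency check described above.
# All scores are invented for illustration; each judge rates the same
# (blind) wine three times on the 100-point scale.
scores = {
    "judge_a": [90, 86, 94],   # the +/-4-point swing described in the article
    "judge_b": [88, 87, 89],   # one of the rare consistent judges
    "judge_c": [84, 92, 79],
}

for judge, ratings in scores.items():
    spread = max(ratings) - min(ratings)
    mean = sum(ratings) / len(ratings)
    # Arbitrary cut-off chosen for this toy example, not Hodgson's criterion.
    verdict = "consistent" if spread <= 4 else "inconsistent"
    print(f"{judge}: mean {mean:.1f}, spread {spread} points -> {verdict}")
```

Run over a full competition's worth of triplicate pours, a spread statistic along these lines is roughly the kind of measure that lets a researcher flag how few judges score the same bottle the same way twice.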
Javier E

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol... - 0 views

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • ...43 more annotations...
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. "The idea is to remember that we are not powerless," he said. "We are in control."
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform after three years struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. "It was that kind of individual, existential realisation: what's going on?" he says. "Isn't technology supposed to be doing the complete opposite of this?"
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not "on the front page of every newspaper every day."
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • "The attention economy incentivises the design of technologies that grab our attention," he says. "In so doing, it privileges our impulses over our intentions."
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • ...2 more annotations...
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Much of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of an evolution rather than revolution for Google and other large tech giants.
Javier E

As Researchers Turn to Google, Libraries Navigate the Messy World of Discovery Tools - ... - 0 views

  • a major change is under way in how libraries organize information. Instead of bewildering users with a bevy of specialized databases—books here, articles there—many libraries are bulldozing their digital silos. They now offer one-stop search boxes that comb entire collections, Google style.
  • one fear is that firms could favor their own content in results.
  • Another is that discovery software, by sluicing content together, could deluge users with less-appropriate resources
  • ...8 more annotations...
  • the rollout of one-stop search tools is "really intentionally trying to change the way people do research,"
  • "That’s bound to change what people find."
  • Vendors of discovery tools will make deals with providers that sell content to libraries, he says, so that content can be represented in the discovery tools’ indexes and made available for search. (Beyond products from Ebsco and ProQuest, other major tools in this genre, known as "web scale" or "index based" discovery, include Primo, from Ex Libris, and WorldCat Discovery Services, from OCLC.)
  • Mr. Asher's experiment discovered that default settings of the tools had a major effect on what resources students chose. Working with Google Scholar, which is integrated with Google Books, students used more books. With Summon, they used a lot of shorter newspaper and magazine articles. With Ebsco Discovery Service, they used more journals, which meant they scored highest under the study's rating rubric.
  • Mr. Asher believes that "it’s a logical impossibility to create a querying tool that doesn’t have any form of bias." He speculates that discovery vendors may have better information about their own content, boosting certain articles higher in results.
  • the competition for student and faculty attention has only intensified since 2004, when Google’s "simple way to broadly search for scholarly literature" made its debut. That free service, called Google Scholar, has many fans in academe.
  • Mr. Asher is familiar with the criticisms of Google Scholar. After all, his own study listed them: "limited advanced search functionality, incomplete or inaccurate metadata, inflated citation counts, lack of usage statistics, and inconsistent coverage across disciplines."
  • "I kind of hate to say it, since I am a librarian," he says. "We pay a lot of money for discovery tools. And then I go off and just use Google Scholar."
jlessner

Europe Challenges Google, Seeing Violations of Its Antitrust Law - NYTimes.com - 0 views

  • The European Union’s antitrust chief on Wednesday formally accused Google of abusing its dominance in web searches, bringing charges that could limit the giant American tech company’s moneymaking prowess.
  • It will almost certainly increase pressure on Google to address complaints that the company favors its own products in search results over its rivals’ services.
  • “If the investigation confirmed our concerns, Google would have to face the legal consequences and change the way it does business in Europe,” said Ms. Vestager, the European Union’s competition commissioner, referring to Google’s search practices.
  • ...1 more annotation...
  • Google holds a roughly 90 percent share in the region’s search market, and the company contends that in both web searches and Android software it plays fair.
Javier E

Google Has Picked an Answer for You-Too Bad It's Often Wrong - WSJ - 1 views

  • Google became the world’s go-to source of information by ranking billions of links from millions of sources. Now, for many queries, the internet giant is presenting itself as the authority on truth by promoting a single search result as the answer.
  • Google, a unit of Alphabet Inc., handles almost all internet searches. Featured snippets appear on about 40% of results for searches formed as questions
  • They give Google’s secret algorithms even greater power to shape public opinion, given that surveys show people consider search engines their most-trusted source of information, over traditional media or social media.
  • ...7 more annotations...
  • Google’s featured answers are feeding a raging global debate about the ability of Silicon Valley companies to influence society. Google and other internet giants are under intensifying scrutiny over the power of their products and their vulnerability to bias or manipulation.
  • Featured snippets are “generated algorithmically and [are] a reflection of what people are searching for and what’s available on the web,” the company said in an April blog post. “This can sometimes lead to results that are unexpected, inaccurate or offensive.”
  • The promoted answers, called featured snippets, are outlined in boxes above other results and presented in larger type, often with images. Google’s voice assistant sometimes reads them aloud
  • An algorithm chooses featured snippets from websites in part by how closely they appear to satisfy a user's question, factoring in Google's measure of a source's authority and its ranking in the search results. (A purely hypothetical sketch of this kind of scoring follows these annotations.)
  • By answering questions directly, Google aims to make the search engine more appealing to users and the advertisers that chase them. The answers’ real estate is so attractive that there is a budding marketing industry around tailoring content so it becomes a featured snippet.
  • as Google expanded the use of featured snippets, it has relied more often on less authoritative sources, such as purveyors of top-10 lists and gossipy clickbait.
  • “For them to wield their algorithm like this is very worrisome,” she said. “This is how people learn about the world.”
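The snippet-selection annotation above describes the inputs only in broad strokes; Google's actual signals and weights are not public. The following is a purely hypothetical sketch of that shape of scoring — question overlap weighted against a source-authority figure and the result's rank — with all weights, field names and the toy matching invented for illustration.

```python
# Hypothetical featured-snippet picker: NOT Google's algorithm, only the
# general shape of "how well a passage answers the question, weighted by
# source authority and search ranking". All weights are made up.
def pick_featured_snippet(question, candidates):
    q_terms = set(question.lower().split())
    best = None
    for c in candidates:  # each candidate: {"text", "authority", "rank"}
        overlap = len(q_terms & set(c["text"].lower().split())) / len(q_terms)
        rank_bonus = 1.0 / c["rank"]          # higher-ranked results get a boost
        score = 0.6 * overlap + 0.3 * c["authority"] + 0.1 * rank_bonus
        if best is None or score > best[0]:
            best = (score, c)
    return best[1]["text"] if best else None

print(pick_featured_snippet(
    "how tall is the eiffel tower",
    [
        {"text": "The Eiffel Tower is 330 m tall", "authority": 0.9, "rank": 1},
        {"text": "Top 10 facts: the tower is very tall", "authority": 0.3, "rank": 4},
    ],
))
```

Even in a toy like this, the failure mode the article documents is visible: a confidently worded but wrong passage that overlaps the question closely, from a page the system rates as authoritative, wins the box.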
Javier E

The Chatbots Are Here, and the Internet Industry Is in a Tizzy - The New York Times - 0 views

  • He cleared his calendar and asked employees to figure out how the technology, which instantly provides comprehensive answers to complex questions, could benefit Box, a cloud computing company that sells services that help businesses manage their online data.
  • Mr. Levie’s reaction to ChatGPT was typical of the anxiety — and excitement — over Silicon Valley’s new new thing. Chatbots have ignited a scramble to determine whether their technology could upend the economics of the internet, turn today’s powerhouses into has-beens or create the industry’s next giants.
  • Cloud computing companies are rushing to deliver chatbot tools, even as they worry that the technology will gut other parts of their businesses. E-commerce outfits are dreaming of new ways to sell things. Social media platforms are being flooded with posts written by bots. And publishing companies are fretting that even more dollars will be squeezed out of digital advertising.
  • ...22 more annotations...
  • The volatility of chatbots has made it impossible to predict their impact. In one second, the systems impress by fielding a complex request for a five-day itinerary, making Google’s search engine look archaic. A moment later, they disturb by taking conversations in dark directions and launching verbal assaults.
  • The result is an industry gripped with the question: What do we do now?
  • The A.I. systems could disrupt $100 billion in cloud spending, $500 billion in digital advertising and $5.4 trillion in e-commerce sales,
  • As Microsoft figures out a chatbot business model, it is forging ahead with plans to sell the technology to others. It charges $10 a month for a cloud service, built in conjunction with the OpenAI lab, that provides developers with coding suggestions, among other things.
  • Smaller companies like Box need help building chatbot tools, so they are turning to the giants that process, store and manage information across the web. Those companies — Google, Microsoft and Amazon — are in a race to provide businesses with the software and substantial computing power behind their A.I. chatbots.
  • “The cloud computing providers have gone all in on A.I. over the last few months,
  • “They are realizing that in a few years, most of the spending will be on A.I., so it is important for them to make big bets.”
  • Yusuf Mehdi, the head of Bing, said the company was wrestling with how the new version would make money. Advertising will be a major driver, he said, but the company expects fewer ads than traditional search allows.
  • Google, perhaps more than any other company, has reason to both love and hate the chatbots. It has declared a “code red” because their abilities could be a blow to its $162 billion business showing ads on searches.
  • “The discourse on A.I. is rather narrow and focused on text and the chat experience,” Mr. Taylor said. “Our vision for search is about understanding information and all its forms: language, images, video, navigating the real world.”
  • Sridhar Ramaswamy, who led Google’s advertising division from 2013 to 2018, said Microsoft and Google recognized that their current search business might not survive. “The wall of ads and sea of blue links is a thing of the past,” said Mr. Ramaswamy, who now runs Neeva, a subscription-based search engine.
  • As that underlying tech, known as generative A.I., becomes more widely available, it could fuel new ideas in e-commerce. Late last year, Manish Chandra, the chief executive of Poshmark, a popular online secondhand store, found himself daydreaming during a long flight from India about chatbots building profiles of people’s tastes, then recommending and buying clothes or electronics. He imagined grocers instantly fulfilling orders for a recipe.
  • “It becomes your mini-Amazon,” said Mr. Chandra, who has made integrating generative A.I. into Poshmark one of the company’s top priorities over the next three years. “That layer is going to be very powerful and disruptive and start almost a new layer of retail.”
  • In early December, users of Stack Overflow, a popular social network for computer programmers, began posting substandard coding advice written by ChatGPT. Moderators quickly banned A.I.-generated text
  • people could post this questionable content far faster than they could write posts on their own, said Dennis Soemers, a moderator for the site. "Content generated by ChatGPT looks trustworthy and professional, but often isn't,"
  • When websites thrived during the pandemic as traffic from Google surged, Nilay Patel, editor in chief of The Verge, a tech news site, warned publishers that the search giant would one day turn off the spigot. He had seen Facebook stop linking out to websites and foresaw Google following suit in a bid to boost its own business.
  • He predicted that visitors from Google would drop from a third of websites’ traffic to nothing. He called that day “Google zero.”
  • Because chatbots replace website search links with footnotes to answers, he said, many publishers are now asking if his prophecy is coming true.
  • strategists and engineers at the digital advertising company CafeMedia have met twice a week to contemplate a future where A.I. chatbots replace search engines and squeeze web traffic.
  • The group recently discussed what websites should do if chatbots lift information but send fewer visitors. One possible solution would be to encourage CafeMedia’s network of 4,200 websites to insert code that limited A.I. companies from taking content, a practice currently allowed because it contributes to search rankings.
  • Courts are expected to be the ultimate arbiter of content ownership. Last month, Getty Images sued Stability AI, the start-up behind the art generator tool Stable Diffusion, accusing it of unlawfully copying millions of images. The Wall Street Journal has said using its articles to train an A.I. system requires a license.
  • In the meantime, A.I. companies continue collecting information across the web under the “fair use” doctrine, which permits limited use of material without permission.
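The idea, a few annotations above, of websites inserting code that limits A.I. companies from taking content most commonly means crawler rules in a site's robots.txt file. Here is a minimal sketch of how such rules behave, using Python's standard-library robots.txt parser; the specific bot names (GPTBot, CCBot) and the URL are examples chosen for illustration, not details taken from the article.

```python
# Sketch: block named AI crawlers in robots.txt while leaving ordinary
# search crawlers alone, then check the effect with urllib.robotparser.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "CCBot", "Googlebot"):
    allowed = rp.can_fetch(agent, "https://example.com/recipes/page-1")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

The trade-off the CafeMedia group is weighing follows directly: robots.txt is honoured voluntarily, and the same crawling that feeds A.I. training can also feed the search indexing that sends sites their traffic.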
Javier E

Computer Algorithms Rely Increasingly on Human Helpers - NYTimes.com - 0 views

  • Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.
  • And so, while programming experts still write the step-by-step instructions of computer code, additional people are needed to make more subtle contributions as the work the computers do has become more involved. People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
  • Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing. Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
  • ...3 more annotations...
  • When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages. “There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
  • “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Mr. Huffman said.
  • Ben Taylor, 25, is a product manager at FindTheBest, a fast-growing start-up in Santa Barbara, Calif. The company calls itself a “comparison engine” for finding and comparing more than 100 topics and products, from universities to nursing homes, smartphones to dog breeds. Its Web site went up in 2010, and the company now has 60 full-time employees. Mr. Taylor helps design and edit the site’s education pages. He is not an engineer, but an English major who has become a self-taught expert in the arcane data found in Education Department studies and elsewhere. His research methods include talking to and e-mailing educators. He is an information sleuth.
Javier E

Google Brings New Meaning to the Web - IEEE Spectrum - 0 views

  • Google’s new push to make sense of the Web in terms of “things, not strings,” to use the company’s catchphrase. Instead of just indexing Web documents by the words they contain, “we really need to understand about things in the real world,” says Shashi Thakur, technical lead on the Knowledge Graph project, which some see as a stepping stone to a long-sought system called the Semantic Web.
  • Google’s Knowledge Graph adds a new dimension to searches, because the company now keeps track of what many search terms mean. That’s what allows the system to recognize the connection between Margaret Thatcher (the person) and Grantham (her place of birth)—not because the two strings show up together on a lot of Web pages.
  • Freebase was just one building block Google used to create Knowledge Graph. "We're definitely open to using any data we have privileges to use," says Thakur, who gives the CIA World Factbook and Wikipedia as examples of other collections of public knowledge that he and his colleagues tapped. He says that Google has also licensed certain data sets and points out that the company has some large data sets developed internally—the information collected for Google Maps, for example. "All of those go into the Knowledge Graph," says Thakur.
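As a toy illustration of the "things, not strings" idea in these annotations, the entity-and-relation structure can be sketched in a few lines. The entities and edges below are a hand-built sample for illustration, not Google's data or schema.

```python
# Minimal "things, not strings" sketch: facts stored as typed edges between
# entities, so answers come from relations rather than string co-occurrence.
graph = {
    ("Margaret Thatcher", "born_in"): "Grantham",
    ("Margaret Thatcher", "occupation"): "Prime Minister of the United Kingdom",
    ("Grantham", "located_in"): "Lincolnshire, England",
}

def answer(entity, relation):
    """Follow a typed edge from an entity; None if the fact is not stored."""
    return graph.get((entity, relation))

print(answer("Margaret Thatcher", "born_in"))   # -> Grantham
print(answer("Grantham", "located_in"))         # -> Lincolnshire, England
```

A search for Thatcher's birthplace answered this way never needs the two strings to appear on the same page; it only needs the Thatcher entity to carry a born_in edge.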