
Instructional & Media Services at Dickinson College
Group items tagged: algorithms

'There is no standard': investigation finds AI algorithms objectify women's bodies | Ar...

  • AI tools tag photos of women in everyday situations as sexually suggestive, and rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men.
  • suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.
  • “Objectification of women seems deeply embedded in the system.”
  • Shadowbanning has been documented for years, but the Guardian journalists may have found a missing link for understanding the phenomenon: biased AI algorithms. Social media platforms appear to use these algorithms to rate images and limit the reach of content they consider too racy. The problem is that the algorithms have built-in gender bias, rating images of women as racier than comparable images of men. (A toy sketch of this score-and-suppress mechanism follows this list.)
  • “You are looking at decontextualized information where a bra is being seen as inherently racy rather than a thing that many women wear every day as a basic item of clothing,”
  • “You cannot have one single uncontested definition of raciness.”
  • the images used to train these algorithms were probably labeled by straight men, who may associate men working out with fitness but consider an image of a woman working out racy. It's also possible that these ratings seem gender-biased in the US and Europe because the labelers may have come from places with more conservative cultures
  • “There’s no standard of quality here,”
  • “I will censor as artistically as possible any nipples. I find this so offensive to art, but also to women,” she said. “I almost feel like I’m part of perpetuating that ridiculous cycle that I don’t want to have any part of.”
  • many people, including chronically ill and disabled folks, rely on making money through social media and shadowbanning harms their business
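The score-and-suppress mechanism described in the annotations above is easy to sketch. What follows is a minimal, hypothetical illustration, not any platform's actual pipeline: a model assigns a "raciness" score to an image, and anything above a threshold gets its reach cut. All names, scores, and thresholds here are invented.

```python
# Toy illustration of score-based reach suppression ("shadowbanning").
# Scores and threshold are invented; real platforms' models are not public.

RACY_THRESHOLD = 0.5  # hypothetical cutoff above which reach is limited

def distribution_weight(raciness_score: float) -> float:
    """Multiplier applied to a post's reach based on its raciness score."""
    return 1.0 if raciness_score < RACY_THRESHOLD else 0.1  # heavy downranking

# Comparable photos, scored differently by a biased model -- the pattern
# the Guardian investigation reports for everyday images of women.
posts = [
    {"caption": "man working out", "raciness": 0.22},
    {"caption": "woman working out", "raciness": 0.71},  # same activity, higher score
]

for post in posts:
    weight = distribution_weight(post["raciness"])
    print(f"{post['caption']}: reach x{weight}")
```

With identical activities and different scores, the second post reaches a tenth of the audience, which is exactly the quiet, hard-to-detect suppression the annotations describe.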
The Myth Of AI | Edge.org

  • The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
  • Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
  • there's no way to tell where the border is between measurement and manipulation in these systems
  • It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
  • What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
  • In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
  • If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
  • To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things.
I unintentionally created a biased AI algorithm 25 years ago - tech companies are still...

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters. (A synthetic illustration of that impossibility follows this list.)
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating classes that are only about 23% female and 3% Black and Latino, there will continue to be many rooms, and many algorithms, in which underrepresented groups are not represented at all.
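A hedged, synthetic illustration of the impossibility point above: when two groups' score distributions differ, a single decision threshold cannot equalize outcomes and error rates across them at once, so some trade-off is forced. The distributions and threshold below are invented for the demonstration.

```python
# Synthetic demo: one global threshold produces unequal selection rates
# when two groups' score distributions differ. All numbers are invented.
import random

random.seed(0)

def scores(mean: float, n: int = 10_000) -> list[float]:
    """Model scores for one group, clipped to [0, 1]."""
    return [min(1.0, max(0.0, random.gauss(mean, 0.15))) for _ in range(n)]

# The model sees group B less clearly (e.g., it is underrepresented in the
# training data), so B's scores run lower even for equally qualified people.
group_a = scores(mean=0.65)
group_b = scores(mean=0.55)

THRESHOLD = 0.6
rate_a = sum(s >= THRESHOLD for s in group_a) / len(group_a)
rate_b = sum(s >= THRESHOLD for s in group_b) / len(group_b)

print(f"selection rate, group A: {rate_a:.1%}")
print(f"selection rate, group B: {rate_b:.1%}")
# Lowering B's threshold equalizes selection rates but changes error rates;
# formal results show you cannot equalize every fairness metric at once
# when the groups' underlying distributions differ.
```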
Wolfram|Alpha

  • Today's Wolfram|Alpha is the first step in an ambitious, long-term project to make all systematic knowledge immediately computable by anyone. You enter your question or calculation, and Wolfram|Alpha uses its built-in algorithms and growing collection of data to compute the answer.
The Internet Intellectual

  • Even Thomas Friedman would be aghast at some of Jarvis’s cheesy sound-bites
  • What does that actually mean?
  • In Jarvis’s universe, all the good things are technologically determined and all the bad things are socially determined
  • Jarvis never broaches such subtleties. His is a simple world:
  • why not consider the possibility that the incumbents may be using the same tools, Jarvis’s revered technologies, to tell us what to think, and far more effectively than before? Internet shelf space may be infinite, but human attention is not. Cheap self-publishing marginally improves one’s chances of being heard, but nothing about this new decentralized public sphere suggests that old power structures—provided they are smart and willing to survive—will not be able to use it to their benefit
  • Jarvis 1.0 was all about celebrating Google, but Jarvis 2.0 has new friends in Facebook and Twitter. (An Internet intellectual always keeps up.) Jarvis 1.0 wrote that “Google’s moral of universal empowerment is the sometimes-forgotten ideal of democracy,” and argued that the company “provides the infrastructure for a culture of choice,” while its “algorithms and its business model work because Google trusts us.” Jarvis 2.0 claims that “by sharing publicly, we people challenge Google’s machines and reclaim our authority on the internet from algorithms.”
  • Jarvis has another reference point, another sacred telos: the equally grand and equally inexorable march of the Internet, which in his view is a technology that generates its own norms, its own laws, its own people. (He likes to speak of “us, people of the Net.”) For the Technology Man, the Internet is the glue that holds our globalized world together and the divine numen that fills it with meaning. If you thought that ethnocentrism was bad, brace yourself for Internet-centrism
  • Why worry about the growing dominance of such digitalism? The reason should be obvious. As Internet-driven explanations crowd out everything else, our entire vocabulary is being re-defined. Collaboration is re-interpreted through the prism of Wikipedia; communication, through the prism of social networking; democratic participation, through the prism of crowd-sourcing; cosmopolitanism, through the prism of reading the blogs of exotic “others”; political upheaval, through the prism of the so-called Twitter revolutions. Even the persecution of dissidents is now seen as an extension of online censorship (rather than the other way around). A recent headline on the blog of the Harvard-based Herdict project—it tracks Internet censorship worldwide—announces that, in Mexico and Morocco, “Online Censorship Goes Offline.” Were activists and dissidents never harassed before Twitter and Facebook?
  • Most Internet intellectuals simply choose a random point in the distant past—the honor almost invariably goes to the invention of the printing press—and proceed to draw a straight line from Gutenberg to Zuckerberg, as if the Counter-Reformation, the Thirty Years’ War, the Reign of Terror, two world wars—and everything else—never happened.
  • even their iPad is of interest to them only as a “platform”—another buzzword of the incurious—and not as an artifact that is assembled in dubious conditions somewhere in East Asian workshops so as to produce cultic devotion in its more fortunate owners. This lack of elementary intellectual curiosity is the defining feature of the Internet intellectual. History, after all, is about details, but no Internet intellectual wants to be accused of thinking small. And so they think big—sloppily, ignorantly, pretentiously, and without the slightest appreciation of the difference between critical thought and market propaganda.
  • In which Evgeny rips Jeff a new one
Lacktribution: Be Like Everyone Else - CogDogBlog

  • What exactly are the issues with attributing? Why is it good not to have to attribute? Is it a severe challenge to attribute? Does it hurt? Does it call for technical or academic skills beyond reach? Does it consume great amounts of time or resources? Why, among professional designers and technologists, is it such a good thing to be free of this odious chore? I can translate this typical reason for using public domain content: “I prefer to be lazy.”
  • There is a larger implication when you reuse content and choose not to attribute. Out in the flow of all other information, it more or less says to readers, “all images are free to pilfer. Just google and take them all. Be like me.”
  • It’s not about the rules of the license, it’s about maybe, maybe, operating in this mechanized place as a human, rather than a copy cat.
  • Google search results give more weight to pxhere.com, where the image has a mighty 4 views (some of which are me), than to the original image, with almost 5000 views.
  • What kind of algorithm is that? It's one that does not favor the individual. Image search results will favor sites like Needpix, Pixsels, Pixnio, Peakpx, Nicepic, and they still favor the really slimy maxpixel, which is a direct rip-off of pixabay.
  • did you know that the liberating world of “use any photo you want w/o the hassle of attribution” is such a bucket of questionable slime? And that Google, with all of their algorithmic prowess, gives more favorable results to sites that lift photos than to the ones where the originals exist?
  • So yes, just reuse photos without taking all of the severe effort to give credit to the source, because “you don’t have to.” Be a copycat. Show your flag of Lacktribution. Like everyone else. I will not. I adhere to Thanktribution.
ChatGPT Is a Blurry JPEG of the Web | The New Yorker

  • Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
  • a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated. (A toy lossy-compression sketch follows this list.)
  • ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
  • large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models
  • Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory
  • The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
  • Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information.
  • If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
  • starting with a blurry copy of unoriginal work isn’t a good way to create original work
  • Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
  • Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
  • What use is there in having something that rephrases the Web?
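Chiang's compression analogy can be made concrete with a toy sketch, under obviously simplified assumptions: keep a fraction of the words, discard the rest, and "decompress" by interpolating filler. The reconstruction looks fluent while the discarded specifics can only be fabricated, which is the shape of a hallucination.

```python
# Toy "lossy text compression": keep every k-th word, discard the rest.
# Purely illustrative; a real LLM compresses statistically, not by sampling words.

def compress(text: str, keep_every: int = 4) -> list[str]:
    """Discard roughly (1 - 1/keep_every) of the original words."""
    return text.split()[::keep_every]

def decompress(kept: list[str]) -> str:
    # A language model would fill the gaps with statistically likely words;
    # here we only mark where invention would have to happen.
    return " [...] ".join(kept)

original = ("The committee met on Tuesday and voted nine to two "
            "to adopt the revised budget for the coming fiscal year")
print(decompress(compress(original)))
# Output: "The [...] Tuesday [...] to [...] the [...] the"
# The vote tally is gone; any fluent reconstruction has to make one up.
```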
ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
Dark Social: We Have the Whole History of the Web Wrong - Alexis C. Madrigal - The Atla...

  • this vast trove of social traffic is essentially invisible to most analytics programs. I call it DARK SOCIAL. It shows up variously in programs as "direct" or "typed/bookmarked" traffic, which implies to many site owners that you actually had a bookmark or typed www.theatlantic.com into your browser. But that's not what's happening a lot of the time. Most of the time, someone Gchatted someone a link, or it came in on a big email distribution list, or your dad sent it to you. (A sketch of this referrer-classification logic follows this list.)
  • the idea that "social networks" and "social media" sites created a social web is pervasive. Everyone behaves as if the traffic your stories receive from the social networks (Facebook, Reddit, Twitter, StumbleUpon) is the same as all of your social traffic
  • if you think optimizing your Facebook page and Tweets is "optimizing for social," you're only halfway (or maybe 30 percent) correct. The only real way to optimize for social spread is in the nature of the content itself. There's no way to game email or people's instant messages. There's no power users you can contact. There's no algorithms to understand. This is pure social, uncut
  • Almost 69 percent of social referrals were dark! Facebook came in second at 20 percent. Twitter was down at 6 percent
  • direct social
  • the social sites that arrived in the 2000s did not create the social web, but they did structure it. This is really, really significant. In large part, they made sharing on the Internet an act of publishing (!), with all the attendant changes that come with that switch. Publishing social interactions makes them more visible, searchable, and adds a lot of metadata to your simple link or photo post. There are some great things about this, but social networks also give a novel, permanent identity to your online persona. Your taste can be monetized, by you or (much more likely) the service itself
  • the tradeoff we make on social networks is not the one we're told we're making. We're not giving our personal data in exchange for the ability to share links with friends. Massive numbers of people -- a larger set than exists on any social network -- already do that outside the social networks. Rather, we're exchanging our personal data for the ability to publish and archive a record of our sharing. That may be a transaction you want to make, but it might not be the one you've been told you made.
  • "Only about four percent of total traffic is on mobile at all, so, at least as a percentage of total referrals, app referrals must be a tiny percentage,"
  • only 0.3 percent of total traffic has the Facebook mobile site as a referrer and less than 0.1 percent has the Facebook mobile app
  • Heh. Social is really social, not 'social' - who knew?
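Madrigal's measurement point comes down to HTTP referrers: a link pasted into chat or email arrives with no referrer, so analytics tools lump it in with "direct" traffic. Below is a minimal sketch of that classification logic, assuming (as the article does) that a visit to a deep article URL with no referrer is a reasonable proxy for dark social; the domain list is illustrative, not any vendor's actual taxonomy.

```python
# Sketch of how an analytics tool might bucket a visit by referrer.
# Domain list and heuristics are illustrative assumptions.
from typing import Optional
from urllib.parse import urlparse

KNOWN_SOCIAL = {"facebook.com", "twitter.com", "reddit.com", "stumbleupon.com"}

def classify_visit(referrer: Optional[str], landing_path: str) -> str:
    if referrer:
        host = urlparse(referrer).netloc.removeprefix("www.")
        return "social" if host in KNOWN_SOCIAL else "referral"
    # No referrer: maybe a typed URL or a bookmark -- but few people type or
    # bookmark a deep article URL, so those visits are likely "dark social"
    # (chat, email, IM), per the article's heuristic.
    return "direct" if landing_path == "/" else "dark social"

print(classify_visit("https://www.facebook.com/", "/tech/dark-social/"))  # social
print(classify_visit(None, "/tech/dark-social/"))                         # dark social
print(classify_visit(None, "/"))                                          # direct
```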
Google pushes journalists to create G+ profiles · kegill · Storify

  • linking search results with Google+ was like Microsoft bundling Internet Explorer with Windows
  • Market strength in one place being used to leverage suboptimal products in another.
  • It's time to tell both Google and Bing that we want to decide for ourselves, thank you very much, if content is credible, instead of their making those decisions for us, decisions made behind hidden -- and suspicious -- algorithms.
Oxford University Press launches the Anti-Google

  • The Anti-Google: Oxford Bibliographies Online (OBO)
  • essentially a straightforward, hyperlinked collection of professionally-produced, peer-reviewed bibliographies in different subject areas—sort of a giant, interactive syllabus put together by OUP and teams of scholars in different disciplines
  • "You can't come up with a search filter that solves the problem of information overload," Zucca told Ars. OUP is betting that the solution to the problem lies in content, which is its area of expertise, and not in technology, which is Google's and Microsoft's.
  • at least users can see exactly how the sausage is made. Contrast this to Google or Bing, where the search algorithm that produces results is a closely guarded secret.
  • The word that Zucca used a number of times in our chat was "authority," and OUP is betting that individual and institutional users will value the authority enough that they'll be willing to pay for access to the service
  • This paywall is the only feature of OBO that seems truly unfortunate, given that the competition (search and Wikipedia) is free. High school kids and motivated amateurs will be left slumming it with whatever they can get from the public Internet, and OBO's potential reach and impact will be severely limited.
What Bruce Sterling Actually Said About Web 2.0 at Webstock 09 | Beyond the Beyond from...

  • things in it that pretended to be ideas, but were not ideas at all: they were attitudes
    • Ed Webb: Like Edupunk
  • A sentence is a verbal construction meant to express a complete thought. This congelation that Tim O'Reilly constructed, that is not a complete thought. It's a network in permanent beta.
  • This chart is five years old now, which is 35 years old in Internet years, but intellectually speaking, it's still new in the world. It's alarming how hard it is to say anything constructive about this from any previous cultural framework.
  • "The cloud as platform." That is insanely great. Right? You can't build a "platform" on a "cloud!" That is a wildly mixed metaphor! A cloud is insubstantial, while a platform is a solid foundation! The platform falls through the cloud and is smashed to earth like a plummeting stock price!
  • luckily, we have computers in banking now. That means Moore's law is gonna save us! Instead of it being really obvious who owes what to whom, we can have a fluid, formless ownership structure that's always in permanent beta. As long as we keep moving forward, adding attractive new features, the situation is booming!
  • Web 2.0 is supposed to be business. This isn't a public utility or a public service, like the old model of an Information Superhighway established for the public good.
  • it's turtles all the way down
  • "Tagging not taxonomy." Okay, I love folksonomy, but I don't think it's gone very far. There have been books written about how ambient searchability through folksonomy destroys the need for any solid taxonomy. Not really. The reality is that we don't have a choice, because we have no conceivable taxonomy that can catalog the avalanche of stuff on the Web.
  • JavaScript is the duct tape of the Web. Why? Because you can do anything with it. It's not the steel girders of the web, it's not the laws of physics of the web. Javascript is beloved of web hackers because it's an ultimate kludge material that can stick anything to anything. It's a cloud, a web, a highway, a platform and a floor wax. Guys with attitude use JavaScript.
  • Before the 1990s, nobody had any "business revolutions." People in trade are supposed to be very into long-term contracts, a stable regulatory environment, risk management, and predictable returns to stockholders. Revolutions don't advance those things. Revolutions annihilate those things. Is that "businesslike"? By whose standards?
  • I just wonder what kind of rattletrap duct-taped mayhem is disguised under a smooth oxymoron like "collective intelligence."
  • the people whose granular bits of input are aggregated by Google are not a "collective." They're not a community. They never talk to each other. They've got basically zero influence on what Google chooses to do with their mouseclicks. What's "collective" about that?
  • I really think it's the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is "intelligence." That is not intelligence. That is rules-based machine behavior. It's code being executed. It's a powerful thing, it's a beautiful thing, but to call that "intelligence" is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.
  • I'd definitely like some better term for "collective intelligence," something a little less streamlined and metaphysical. Maybe something like "primeval meme ooze" or "semi-autonomous data propagation." Even some Kevin Kelly style "neobiological out of control emergent architectures." Because those weird new structures are here, they're growing fast, we depend on them for mission-critical acts, and we're not gonna get rid of them any more than we can get rid of termite mounds.
  • Web 2.0 guys: they've got their laptops with whimsical stickers, the tattoos, the startup T-shirts, the brainy-glasses -- you can tell them from the general population at a glance. They're a true creative subculture, not a counterculture exactly -- but in their number, their relationship to the population, quite like the Arts and Crafts people from a hundred years ago. Arts and Crafts people, they had a lot of bad ideas -- much worse ideas than Tim O'Reilly's ideas. It wouldn't bother me any if Tim O'Reilly was Governor of California -- he couldn't be any weirder than that guy they've got already. Arts and Crafts people gave it their best shot, they were in earnest -- but everything they thought they knew about reality was blown to pieces by the First World War. After that misfortune, there were still plenty of creative people surviving. Futurists, Surrealists, Dadaists -- and man, they all despised Arts and Crafts. Everything about Art Nouveau that was sexy and sensual and liberating and flower-like, man, that stank in their nostrils. They thought that Art Nouveau people were like moronic children.
  • in the past eighteen months, 24 months, we've seen ubiquity initiatives from Nokia, Cisco, General Electric, IBM... Microsoft even, Jesus, Microsoft, the place where innovative ideas go to die.
  • what comes next is a web with big holes blown in it. A spiderweb in a storm. The turtles get knocked out from under it, the platform sinks through the cloud. A lot of the inherent contradictions of the web get revealed, the contradictions in the oxymorons smash into each other. The web has to stop being a meringue frosting on the top of business, this make-do melange of mashups and abstraction layers. Web 2.0 goes away. Its work is done. The thing I always loved best about Web 2.0 was its implicit expiration date. It really took guts to say that: well, we've got a bunch of cool initiatives here, and we know they're not gonna last very long. It's not Utopia, it's not a New World Order, it's just a brave attempt to sweep up the ashes of the burst Internet Bubble and build something big and fast with the small burnt-up bits that were loosely joined. That showed more maturity than Web 1.0. It was visionary, it was inspiring, but there were fewer moon rockets flying out of its head. "Gosh, we're really sorry that we accidentally ruined the NASDAQ." We're Internet business people, but maybe we should spend less of our time stock-kiting. The Web's a communications medium -- how 'bout working on the computer interface, so that people can really communicate? That effort was time well spent. Really.
  • The poorest people in the world love cellphones.
  • Digital culture, I knew it well. It died -- young, fast and pretty. It's all about network culture now.
  • There's gonna be a Transition Web. Your economic system collapses: Eastern Europe, Russia, the Transition Economy, that bracing experience is for everybody now. Except it's not Communism transitioning toward capitalism. It's the whole world into transition toward something we don't even have proper words for.
  • The Transition Web is a culture model. If it's gonna work, it's got to replace things that we used to pay for with things that we just plain use.
  • Not every Internet address was a dotcom. In fact, dotcoms showed up pretty late in the day, and they were not exactly welcome. There were dot-orgs, dot edus, dot nets, dot govs, and dot localities. Once upon a time there were lots of social enterprises that lived outside the market; social movements, political parties, mutual aid societies, philanthropies. Churches, criminal organizations -- you're bound to see plenty of both of those in a transition... Labor unions... not little ones, but big ones like Solidarity in Poland; dissident organizations, not hobby activists, big dissent, like Charter 77 in Czechoslovakia. Armies, national guards. Rescue operations. Global non-governmental organizations. Davos Forums, Bilderberg guys. Retired people. The old people can't hold down jobs in the market. Man, there's a lot of 'em. Billions. What are our old people supposed to do with themselves? Websurf, I'm thinking. They're wise, they're knowledgeable, they're generous by nature; the 21st century is destined to be an old people's century. Even the Chinese, Mexicans, Brazilians will be old. Can't the web make some use of them, all that wisdom and talent, outside the market?
  • I've never seen so much panic around me, but panic is the last thing on my mind. My mood is eager impatience. I want to see our best, most creative, best-intentioned people in world society directly attacking our worst problems. I'm bored with the deceit. I'm tired of obscurantism and cover-ups. I'm disgusted with cynical spin and the culture war for profit. I'm up to here with phony baloney market fundamentalism. I despise a prostituted society where we put a dollar sign in front of our eyes so we could run straight into the ditch. The cure for panic is action. Coherent action is great; for a scatterbrained web society, that may be a bit much to ask. Well, any action is better than whining. We can do better.
K-12 Media Literacy No Panacea for Fake News, Report Argues - Digital Education - Educa...

  • "Media literacy has long focused on personal responsibility, which can not only imbue individuals with a false sense of confidence in their skills, but also put the onus of monitoring media effects on the audience, rather than media creators, social media platforms, or regulators,"
  • the need to better understand the modern media environment, which is heavily driven by algorithm-based personalization on social-media platforms, and the need to be more systematic about evaluating the impact of various media-literacy strategies and interventions
  • In response, bills to promote media literacy in schools have been introduced or passed in more than a dozen states. A range of nonprofit, corporate, and media organizations have stepped up efforts to promote related curricula and programs. Such efforts should be applauded—but not viewed as a "panacea," the Data & Society researchers argue.
  • existing efforts "focus on the interpretive responsibilities of the individual,"
  • "if bad actors intentionally dump disinformation online with an aim to distract and overwhelm, is it possible to safeguard against media manipulation?"
  • A 2012 meta-analysis by academic researchers found that media literacy efforts could help boost students' critical awareness of messaging, bias, and representation in the media they consumed. There have been small studies suggesting that media-literacy efforts can change students' behaviors—for example, by making them less likely to seek out violent media for their own consumption. And more recently, a pair of researchers found that media-literacy training was more important than prior political knowledge when it comes to adopting a critical stance to partisan media content.
  • the roles of institutions, technology companies, and governments
William Davies · How many words does it take to make a mistake? Education, Ed...

  • The problem waiting round the corner for universities is essays generated by AI, which will leave a textual pattern-spotter like Turnitin in the dust. (Earlier this year, I came across one essay that felt deeply odd in some not quite human way, but I had no tangible evidence that anything untoward had occurred, so that was that.)
  • To accuse someone of plagiarism is to make a moral charge regarding intentions. But establishing intent isn’t straightforward. More often than not, the hearings bleed into discussions of issues that could be gathered under the heading of student ‘wellbeing’, which all universities have been struggling to come to terms with in recent years.
  • I have heard plenty of dubious excuses for acts of plagiarism during these hearings. But there is one recurring explanation which, it seems to me, deserves more thoughtful consideration: ‘I took too many notes.’ It isn’t just students who are familiar with information overload, one of whose effects is to morph authorship into a desperate form of curatorial management, organising chunks of text on a screen. The discerning scholarly self on which the humanities depend was conceived as the product of transitions between spaces – library, lecture hall, seminar room, study – linked together by work with pen and paper. When all this is replaced by the interface with screen and keyboard, and everything dissolves into a unitary flow of ‘content’, the identity of the author – as distinct from the texts they have read – becomes harder to delineate.
  • This generation, the first not to have known life before the internet, has acquired a battery of skills in navigating digital environments, but it isn’t clear how well those skills line up with the ones traditionally accredited by universities.
  • From the perspective of students raised in a digital culture, the anti-plagiarism taboo no doubt seems to be just one more academic hang-up, a weird injunction to take perfectly adequate information, break it into pieces and refashion it. Students who pay for essays know what they are doing; others seem conscientious yet intimidated by secondary texts: presumably they won’t be able to improve on them, so why bother trying? For some years now, it’s been noticeable how many students arrive at university feeling that every interaction is a test they might fail. They are anxious. Writing seems fraught with risk, a highly complicated task that can be executed correctly or not.
  • Many students may like the flexibility recorded lectures give them, but the conversion of lectures into yet more digital ‘content’ further destabilises traditional conceptions of learning and writing
  • the evaluation forms which are now such a standard feature of campus life suggest that many students set a lot of store by the enthusiasm and care that are features of a good live lecture
  • the drift of universities towards a platform model, which makes it possible for students to pick up learning materials as and when it suits them. Until now, academics have resisted the push for ‘lecture capture’. It causes in-person attendance at lectures to fall dramatically, and it makes many lecturers feel like mediocre television presenters. Unions fear that extracting and storing teaching for posterity threatens lecturers’ job security and weakens the power of strikes. Thanks to Covid, this may already have happened.
  • In the utopia sold by the EdTech industry (the companies that provide platforms and software for online learning), pupils are guided and assessed continuously. When one task is completed correctly, the next begins, as in a computer game; meanwhile the platform providers are scraping and analysing data from the actions of millions of children. In this behaviourist set-up, teachers become more like coaches: they assist and motivate individual ‘learners’, but are no longer so important to the provision of education. And since it is no longer the sole responsibility of teachers or schools to deliver the curriculum, it becomes more centralised – the latest front in a forty-year battle to wrest control from the hands of teachers and local authorities.
  • an injunction against creative interpretation and writing, a deprivation that working-class children will feel at least as deeply as anyone else.
  • There may be very good reasons for delivering online teaching in segments, punctuated by tasks and feedback, but as Yandell observes, other ways of reading and writing are marginalised in the process. Without wishing to romanticise the lonely reader (or, for that matter, the lonely writer), something is lost when alternating periods of passivity and activity are compressed into interactivity, until eventually education becomes a continuous cybernetic loop of information and feedback. How many keystrokes or mouse-clicks before a student is told they’ve gone wrong? How many words does it take to make a mistake?
  • This vision of language as code may already have been a significant feature of the curriculum, but it appears to have been exacerbated by the switch to online teaching. In a journal article from August 2020, ‘Learning under Lockdown: English Teaching in the Time of Covid-19’, John Yandell notes that online classes create wholly closed worlds, where context and intertextuality disappear in favour of constant instruction. In these online environments, reading is informed not by prior reading experiences but by the toolkit that the teacher has provided, and ... is presented as occurring along a tramline of linear development. Different readings are reducible to better or worse readings: the more closely the student’s reading approximates to the already finalised teacher’s reading, the better it is. That, it would appear, is what reading with precision looks like.
  • Constant interaction across an interface may be a good basis for forms of learning that involve information-processing and problem-solving, where there is a right and a wrong answer. The cognitive skills that can be trained in this way are the ones computers themselves excel at: pattern recognition and computation. The worry, for anyone who cares about the humanities in particular, is about the oversimplifications required to conduct other forms of education in these ways.
  • Blanket surveillance replaces the need for formal assessment.
  • Confirming Adorno’s worst fears of the ‘primacy of practical reason’, reading is no longer dissociable from the execution of tasks. And, crucially, the ‘goals’ to be achieved through the ability to read, the ‘potential’ and ‘participation’ to be realised, are economic in nature.
  • since 2019, with the Treasury increasingly unhappy about the amount of student debt still sitting on the government’s balance sheet and the government resorting to ‘culture war’ at every opportunity, there has been an effort to single out degree programmes that represent ‘poor value for money’, measured in terms of graduate earnings. (For reasons best known to itself, the usually independent Institute for Fiscal Studies has been leading the way in finding correlations between degree programmes and future earnings.) Many of these programmes are in the arts and humanities, and are now habitually referred to by Tory politicians and their supporters in the media as ‘low-value degrees’.
  • studying the humanities may become a luxury reserved for those who can fall back on the cultural and financial advantages of their class position. (This effect has already been noticed among young people going into acting, where the results are more visible to the public than they are in academia or heritage organisations.)
  • given the changing class composition of the UK over the past thirty years, it’s not clear that contemporary elites have any more sympathy for the humanities than the Conservative Party does. A friend of mine recently attended an open day at a well-known London private school, and noticed that while there was a long queue to speak to the maths and science teachers, nobody was waiting to speak to the English teacher. When she asked what was going on, she was told: ‘I’m afraid parents here are very ambitious.’ Parents at such schools, where fees have tripled in real terms since the early 1980s, tend to work in financial and business services themselves, and spend their own days profitably manipulating and analysing numbers on screens. When it comes to the transmission of elite status from one generation to the next, Shakespeare or Plato no longer has the same cachet as economics or physics.
  • Leaving aside the strategic political use of terms such as ‘woke’ and ‘cancel culture’, it would be hard to deny that we live in an age of heightened anxiety over the words we use, in particular the labels we apply to people. This has benefits: it can help to bring discriminatory practices to light, potentially leading to institutional reform. It can also lead to fruitless, distracting public arguments, such as the one that rumbled on for weeks over Angela Rayner’s description of Conservatives as ‘scum’. More and more, words are dredged up, edited or rearranged for the purpose of harming someone. Isolated words have acquired a weightiness in contemporary politics and public argument, while on digital media snippets of text circulate without context, as if the meaning of a single sentence were perfectly contained within it, walled off from the surrounding text. The exemplary textual form in this regard is the newspaper headline or corporate slogan: a carefully curated series of words, designed to cut through the blizzard of competing information.
  • Visit any actual school or university today (as opposed to the imaginary ones described in the Daily Mail or the speeches of Conservative ministers) and you will find highly disciplined, hierarchical institutions, focused on metrics, performance evaluations, ‘behaviour’ and quantifiable ‘learning outcomes’.
  • If young people today worry about using the ‘wrong’ words, it isn’t because of the persistence of the leftist cultural power of forty years ago, but – on the contrary – because of the barrage of initiatives and technologies dedicated to reversing that power. The ideology of measurable literacy, combined with a digital net that has captured social and educational life, leaves young people ill at ease with the language they use and fearful of what might happen should they trip up.
  • It has become clear, as we witness the advance of Panopto, Class Dojo and the rest of the EdTech industry, that one of the great things about an old-fashioned classroom is the facilitation of unrecorded, unaudited speech, and of uninterrupted reading and writing.
The Generative AI Race Has a Dirty Secret | WIRED

  • The race to build high-performance, AI-powered search engines is likely to require a dramatic rise in computing power, and with it a massive increase in the amount of energy that tech companies require and the amount of carbon they emit.
  • Every time we see a step change in online processing, we see significant increases in the power and cooling resources required by large processing centres
  • third-party analysis by researchers estimates that the training of GPT-3, which ChatGPT is partly based on, consumed 1,287 MWh, and led to emissions of more than 550 tons of carbon dioxide equivalent—the same amount as a single person taking 550 roundtrips between New York and San Francisco. (A back-of-envelope check on these figures follows this list.)
  • There’s also a big difference between utilizing ChatGPT—which investment bank UBS estimates has 13 million users a day—as a standalone product, and integrating it into Bing, which handles half a billion searches every day.
  • Data centers already account for around one percent of the world’s greenhouse gas emissions, according to the International Energy Agency. That is expected to rise as demand for cloud computing increases, but the companies running search have promised to reduce their net contribution to global heating. “It’s definitely not as bad as transportation or the textile industry,” Gómez-Rodríguez says. “But [AI] can be a significant contributor to emissions.”
  • The environmental footprint and energy cost of integrating AI into search could be reduced by moving data centers onto cleaner energy sources, and by redesigning neural networks to become more efficient, reducing the so-called “inference time”—the amount of computing power required for an algorithm to work on new data.
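The quoted training estimate is just energy multiplied by grid carbon intensity. The check below assumes an average intensity of about 0.43 kg CO2e per kWh, which is the value implied by the two cited figures rather than anything the companies have disclosed.

```python
# Back-of-envelope check on the quoted GPT-3 training estimate.
# The grid intensity is an assumption implied by the cited figures,
# not a disclosed value.
training_energy_mwh = 1_287                # third-party estimate quoted above
grid_intensity_kg_per_kwh = 0.43           # assumed average carbon intensity

emissions_tonnes = training_energy_mwh * 1_000 * grid_intensity_kg_per_kwh / 1_000
print(f"~{emissions_tonnes:.0f} tonnes CO2e")  # ~553, i.e. "more than 550 tons"
```

Cleaner energy lowers the intensity factor directly, which is why moving data centers onto cleaner sources, as the article suggests, attacks the largest lever in this product; the other lever is cutting the energy term itself through more efficient networks and shorter inference times.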