Instructional & Media Services at Dickinson College: Group items tagged facts

Ed Webb

Bad News : CJR - 0 views

  • Students in Howard Rheingold’s journalism class at Stanford recently teamed up with NewsTrust, a nonprofit Web site that enables people to review and rate news articles for their level of quality, in a search for lousy journalism.
  • the News Hunt is a way of getting young journalists to critically examine the work of professionals. For Rheingold, an influential writer and thinker about the online world and the man credited with coining the phrase “virtual community,” it’s all about teaching them “crap detection.”
  • last year Rheingold wrote an important essay about the topic for the San Francisco Chronicle’s Web site
  • What’s at stake is no less than the quality of the information available in our society, and our collective ability to evaluate its accuracy and value. “Are we going to have a world filled with people who pass along urban legends and hoaxes?” Rheingold said, “or are people going to educate themselves about these tools [for crap detection] so we will have collective intelligence instead of misinformation, spam, urban legends, and hoaxes?”
  • I previously called fact-checking “one of the great American pastimes of the Internet age.” But, as Rheingold noted, the opposite is also true: the manufacture and promotion of bullshit is endemic. One couldn’t exist without the other. That makes Rheingold’s essay, his recent experiment with NewsTrust, and his wiki of online critical-thinking tools essential reading for journalists. (He’s also writing a book about this topic.)
  • I believe if we want kids to succeed online, the biggest danger is not porn or predators—the biggest danger is them not being able to distinguish truth from carefully manufactured misinformation or bullshit
  • As relevant to general education as to journalism training
Ed Webb

A Conversation With Bill Gates - Technology - The Chronicle of Higher Education - 2 views

  • argues for radical reform of college teaching, advocating a move toward a "flipped" classroom, where students watch videos from superstar professors as homework and use class time for group projects and other interactive activities
  • it's much harder to then take it for the broad set of students in the institutional framework and decide, OK, where is technology the best and where is the face-to-face the best. And they don't have very good metrics of what is their value-added. If you try and compare two universities, you'll find out a lot more about the inputs—this university has high SAT scores compared to this one. And it's sort of the opposite of what you'd think. You'd think people would say, "We take people with low SATs and make them really good lawyers." Instead they say, "We take people with very high SATs and we don't really know what we create, but at least they're smart when they show up here so maybe they still are when we're done with them."
  • The various rankings have focused on the input side of the equation, not the output
  • Something that's not purely digital but also that the efficiency of the face-to-face time is much greater
  • Can we transform this credentialing process? And in fact the ideal would be to separate out the idea of proving your knowledge from the way you acquire that knowledge
  • Employers have decided that having the breadth of knowledge that's associated with a four-year degree is often something they want to see in the people they give that job to. So instead of testing for that different profession, they'll be testing that you have that broader exposure
  • that failing student is a disaster for everyone
  • What is it that we need to do to strengthen this fundamental part of our country that both in a broad sort of economic level and an individual-rights level is the key enabler. And it's amazing how little effort's been put into this. Of saying, OK, why are some teachers at any different level way better than others? You've got universities in this country with a 7-percent completion rate. Why is it that they don't come under pressure to change what they're doing to come up with a better way of doing things?
  • We bet on the change agents within the universities. And so, various universities come to us and say, We have some ideas about completion rates, here are some things we want to try out, it's actually budget that holds us back from being able to do that. People come to us and say, We want to try a hybrid course where some piece is online, some piece is not, and we're aiming this at the students that are in the most need, not just the most elite. So that's who we're giving grants to, people who are trying out new things in universities. Now the idea that if you have a few universities that figure out how to do things well, how do you spread these best practices, that's a tough challenge. It's not quite the same way as in the private sector, where if somebody's doing something better, the price signals force that to be adopted broadly. Here, things move very slowly even if they are an improvement.
  • Q. Some of what you've been talking about is getting people to completion by weeding out extraneous courses. There's a concern by some that that might create pressure to make universities into a kind of job-training area without the citizenship focus of that broad liberal-arts degree.
  • it is important to distinguish when people are taking extra courses that broaden them as a citizen and that would be considered a plus, versus they're just marking time because they're being held up because the capacity doesn't exist in the system to let them do what they want to do. As you go through the student survey data, it's mostly the latter. But I'm the biggest believer in taking a lot of different things. And hopefully, if these courses are appealing enough, we can get people even after they've finished a college degree to want to go online and take these courses.
  • Other countries are sending more kids to college. They're getting higher completion rates. They've moved ahead of us
  • There's nothing that was more important to me in terms of the kind of opportunity I had personally. I went to a great high school. I went to a great university. I only went three years, but it doesn't matter; it was still extremely valuable to me to be in that environment. And I had fantastic professors throughout that whole thing. And so, if every kid could have that kind of education, we'd achieve a lot of goals both at the individual and country level
  • One of the strengths of higher ed is the variety. But the variety has also meant that if somebody is doing something particularly well, it's hard to map that across a lot of different institutions. There aren't very many good metrics. At least in high schools we can talk about dropout rates. Completion rate was really opaque, and not talked about a lot. The quality-measure things are equally different. We don't have a gold standard like SAT scores or No Child Left Behind up at the collegiate level. And of course, kids are more dispersed in terms of what their career goals are at that point. So it's got some things that make it particularly challenging, but it has a lot in common, and I'd say it's equally important to get it right
Ed Webb

Would You Protect Your Computer's Feelings? Clifford Nass Says Yes. - ProfHacker - The ... - 0 views

  • The Man Who Lied to His Laptop condenses for a popular audience an argument that Nass has been making for at least 15 years: humans do not differentiate between computers and people in their social interactions.
  • At first blush, this sounds absurd. Everyone knows that it's "just a computer," and of course computers don't have feelings. And yet. Nass has a slew of amusing stories—and, crucially, studies based on those stories—indicating that, no matter what "everyone knows," people act as if the computer secretly cares. For example: In one study, users reviewed a software package, either on the same computer they'd used it on, or on a different computer. Consistently, participants gave the software better ratings when they reviewed it on the same computer—as if they didn't want the computer to feel bad. What's more, Nass notes, "every one of the participants insisted that she or he would never bother being polite to a computer" (7).
  • Nass found that users given completely random praise by a computer program liked it more than the same program without praise, even though they knew in advance the praise was meaningless. In fact, they liked it as much as the same program, if they were told the praise was accurate. (In other words, flattery was as well received as praise, and both were preferred to no positive comments.) Again, when questioned about the results, users angrily denied any difference at all in their reactions.
  • How do you interact with the computing devices in your life?
Ed Webb

A Review of NOOKStudy - ProfHacker - The Chronicle of Higher Education - 0 views

  • Though the software will sync information between two computers, highlights and notes created in NOOKStudy won't sync to the Nook, nor will highlights and notes created on the Nook sync to NOOKStudy. In fact, NOOKStudy couldn't even bring me to the correct page in the book I'm currently reading. At least the pages in NOOKStudy seem to correspond with the pagination you'd see on the Nook, so finding one's place isn't horrendously difficult, but still. Amazon had this sort of thing figured out with Whispersync some time ago.
Ed Webb

Study Shows Students Are Addicted to Social Media | News | Communications of the ACM - 0 views

  • most college students are not just unwilling, but functionally unable to be without their media links to the world. "I clearly am addicted and the dependency is sickening," says one person in the study. "I feel like most people these days are in a similar situation, for between having a Blackberry, a laptop, a television, and an iPod, people have become unable to shed their media skin."
  • what they wrote at length about was how they hated losing their personal connections. Going without media meant, in their world, going without their friends and family
  • they couldn't connect with friends who lived close by, much less those far away
  • "Texting and IM-ing my friends gives me a constant feeling of comfort," wrote one student. "When I did not have those two luxuries, I felt quite alone and secluded from my life. Although I go to a school with thousands of students, the fact that I was not able to communicate with anyone via technology was almost unbearable."
  • students' lives are wired together in such ways that opting out of that communication pattern would be tantamount to renouncing a social life
  • "Students expressed tremendous anxiety about being cut-off from information,"
  • How did they get the information? In a disaggregated way, and not typically from the news outlet that broke or committed resources to a story.
  • the young adults in this study appeared to be generally oblivious to branded news and information
  • an undifferentiated wave to them via social media
  • 43.3 percent of the students reported that they had a "smart phone"
  • Quotes
Ed Webb

Search Engine Helps Users Connect In Arabic : NPR - 0 views

  • new technology that is revolutionizing the way Arabic-speaking people use the Internet
  • Abdullah says that of her 500 Egyptian students, 78 percent have never typed in Arabic online, a fact that greatly disturbed Habib Haddad, a Boston-based software engineer originally from Lebanon. "I mean imagine [if] 78 percent of French people don't type French," Haddad says. "Imagine how destructive that is online."
  • "The idea is, if you don't have an Arabic keyboard, you can type Arabic by spelling your words out phonetically," Jureidini says. "For example ... when you're writing the word 'falafel,' Yamli will convert that to Arabic in your Web browser. We will go and search not only the Arabic script version of that search query, but also for all the Western variations of that keyword."
  • At a recent "new" technology forum at MIT, Yamli went on to win best of show — a development that did not escape the attention of Google, which recently developed its own search and transliteration engine. "I guess Google recognizes a good idea when it sees it," Jureidini says. He adds, "And the way we counter it is by being better. We live and breathe Yamli every day, and we're constantly in the process of improving how people can use it." Experts in Arabic Web content say that since its release a year ago, Yamli has helped increase Arabic content on the Internet just by its use. They say that bodes well for the Arabic Web and for communication between the Arab and Western worlds.
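
The phonetic-typing idea Jureidini describes above can be made concrete with a small sketch: convert a romanized ("Arabizi") word to Arabic script, then search both the Arabic form and the Western spelling. The mapping table, greedy longest-match rule, and function names below are assumptions for illustration only, not Yamli's actual rules or API.

```python
# Illustrative sketch only: toy romanized-Arabic to Arabic-script conversion
# plus query expansion, loosely following the description quoted above.
TRANSLIT = {
    # digraphs first (longest match wins)
    "sh": "ش", "th": "ث", "kh": "خ", "gh": "غ",
    # single letters and rough vowel approximations
    "a": "ا", "b": "ب", "t": "ت", "j": "ج", "d": "د", "r": "ر",
    "z": "ز", "s": "س", "f": "ف", "q": "ق", "k": "ك", "l": "ل",
    "m": "م", "n": "ن", "h": "ه", "w": "و", "y": "ي",
    "e": "ي", "i": "ي", "o": "و", "u": "و",
}
_KEYS = sorted(TRANSLIT, key=len, reverse=True)

def romanized_to_arabic(word: str) -> str:
    """Greedy longest-match transliteration of one romanized word."""
    out, i = [], 0
    while i < len(word):
        for key in _KEYS:
            if word[i:].lower().startswith(key):
                out.append(TRANSLIT[key])
                i += len(key)
                break
        else:  # unmapped character: pass it through unchanged
            out.append(word[i])
            i += 1
    return "".join(out)

def search_terms(query: str) -> list[str]:
    """Search both the Arabic-script guess and the Western spelling,
    as the quoted description of Yamli suggests."""
    return [romanized_to_arabic(query), query]

print(search_terms("falafel"))  # Arabic-script guess plus the original spelling
```

A real transliteration engine is context-sensitive and returns ranked candidate spellings; this sketch only shows why expanding a query to both scripts widens the search.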
Ed Webb

Literature Review: GIS for Conflict Analysis « iRevolution - 0 views

  • The study objective is to represent geographic and territorial concepts with Geographic Information Systems (GIS). The paper describes the challenges and potential opportunities for creating an integrated GIS model of security.
  • The literature review is a good introduction for anyone interested in the application of GIS to the spatial analysis of conflict. As a colleague mentioned, however, the authors of the study do not cite more recent work in this area, which is rather surprising and unfortunate. Perhaps this is due to the fact that the academic peer-review process can seemingly take forever.
Ed Webb

Most Faculty Don't Use Twitter, Study Reveals -- Campus Technology - 0 views

  • 30.7 percent of respondents reported that they do, in fact, use Twitter in one way or another--a percentage that's fairly high compared with the percentage of the general adult American population that uses Twitter (which is projected to be in the neighborhood of 10 percent to 11 percent by 2010).
Ed Webb

Clive Thompson on the New Literacy - 0 views

  • The fact that students today almost always write for an audience (something virtually no one in my generation did) gives them a different sense of what constitutes good writing. In interviews, they defined good prose as something that had an effect on the world. For them, writing is about persuading and organizing and debating, even if it's over something as quotidian as what movie to go see. The Stanford students were almost always less enthusiastic about their in-class writing because it had no audience but the professor: It didn't serve any purpose other than to get them a grade.
  • The brevity of texting and status updating teaches young people to deploy haiku-like concision.
Ed Webb

What Bruce Sterling Actually Said About Web 2.0 at Webstock 09 | Beyond the Beyond from... - 0 views

  • things in it that pretended to be ideas, but were not ideas at all: they were attitudes
    • Ed Webb: Like Edupunk
  • A sentence is a verbal construction meant to express a complete thought. This congelation that Tim O'Reilly constructed, that is not a complete thought. It's a network in permanent beta.
  • This chart is five years old now, which is 35 years old in Internet years, but intellectually speaking, it's still new in the world. It's alarming how hard it is to say anything constructive about this from any previous cultural framework.
  • "The cloud as platform." That is insanely great. Right? You can't build a "platform" on a "cloud!" That is a wildly mixed metaphor! A cloud is insubstantial, while a platform is a solid foundation! The platform falls through the cloud and is smashed to earth like a plummeting stock price!
  • luckily, we have computers in banking now. That means Moore's law is gonna save us! Instead of it being really obvious who owes what to whom, we can have a fluid, formless ownership structure that's always in permanent beta. As long as we keep moving forward, adding attractive new features, the situation is booming!
  • Web 2.0 is supposed to be business. This isn't a public utility or a public service, like the old model of an Information Superhighway established for the public good.
  • it's turtles all the way down
  • "Tagging not taxonomy." Okay, I love folksonomy, but I don't think it's gone very far. There have been books written about how ambient searchability through folksonomy destroys the need for any solid taxonomy. Not really. The reality is that we don't have a choice, because we have no conceivable taxonomy that can catalog the avalanche of stuff on the Web.
  • JavaScript is the duct tape of the Web. Why? Because you can do anything with it. It's not the steel girders of the web, it's not the laws of physics of the web. Javascript is beloved of web hackers because it's an ultimate kludge material that can stick anything to anything. It's a cloud, a web, a highway, a platform and a floor wax. Guys with attitude use JavaScript.
  • Before the 1990s, nobody had any "business revolutions." People in trade are supposed to be very into long-term contracts, a stable regulatory environment, risk management, and predictable returns to stockholders. Revolutions don't advance those things. Revolutions annihilate those things. Is that "businesslike"? By whose standards?
  • I just wonder what kind of rattletrap duct-taped mayhem is disguised under a smooth oxymoron like "collective intelligence."
  • the people whose granular bits of input are aggregated by Google are not a "collective." They're not a community. They never talk to each other. They've got basically zero influence on what Google chooses to do with their mouseclicks. What's "collective" about that?
  • I really think it's the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is "intelligence." That is not intelligence. That is rules-based machine behavior. It's code being executed. It's a powerful thing, it's a beautiful thing, but to call that "intelligence" is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.
  • I'd definitely like some better term for "collective intelligence," something a little less streamlined and metaphysical. Maybe something like "primeval meme ooze" or "semi-autonomous data propagation." Even some Kevin Kelly style "neobiological out of control emergent architectures." Because those weird new structures are here, they're growing fast, we depend on them for mission-critical acts, and we're not gonna get rid of them any more than we can get rid of termite mounds.
  • Web 2.0 guys: they've got their laptops with whimsical stickers, the tattoos, the startup T-shirts, the brainy-glasses -- you can tell them from the general population at a glance. They're a true creative subculture, not a counterculture exactly -- but in their number, their relationship to the population, quite like the Arts and Crafts people from a hundred years ago. Arts and Crafts people, they had a lot of bad ideas -- much worse ideas than Tim O'Reilly's ideas. It wouldn't bother me any if Tim O'Reilly was Governor of California -- he couldn't be any weirder than that guy they've got already. Arts and Crafts people gave it their best shot, they were in earnest -- but everything they thought they knew about reality was blown to pieces by the First World War. After that misfortune, there were still plenty of creative people surviving. Futurists, Surrealists, Dadaists -- and man, they all despised Arts and Crafts. Everything about Art Nouveau that was sexy and sensual and liberating and flower-like, man, that stank in their nostrils. They thought that Art Nouveau people were like moronic children.
  • in the past eighteen months, 24 months, we've seen ubiquity initiatives from Nokia, Cisco, General Electric, IBM... Microsoft even, Jesus, Microsoft, the place where innovative ideas go to die.
  • what comes next is a web with big holes blown in it. A spiderweb in a storm. The turtles get knocked out from under it, the platform sinks through the cloud. A lot of the inherent contradictions of the web get revealed, the contradictions in the oxymorons smash into each other. The web has to stop being a meringue frosting on the top of business, this make-do melange of mashups and abstraction layers. Web 2.0 goes away. Its work is done. The thing I always loved best about Web 2.0 was its implicit expiration date. It really took guts to say that: well, we've got a bunch of cool initiatives here, and we know they're not gonna last very long. It's not Utopia, it's not a New World Order, it's just a brave attempt to sweep up the ashes of the burst Internet Bubble and build something big and fast with the small burnt-up bits that were loosely joined. That showed more maturity than Web 1.0. It was visionary, it was inspiring, but there were fewer moon rockets flying out of its head. "Gosh, we're really sorry that we accidentally ruined the NASDAQ." We're Internet business people, but maybe we should spend less of our time stock-kiting. The Web's a communications medium -- how 'bout working on the computer interface, so that people can really communicate? That effort was time well spent. Really.
  • The poorest people in the world love cellphones.
  • Digital culture, I knew it well. It died -- young, fast and pretty. It's all about network culture now.
  • There's gonna be a Transition Web. Your economic system collapses: Eastern Europe, Russia, the Transition Economy, that bracing experience is for everybody now. Except it's not Communism transitioning toward capitalism. It's the whole world into transition toward something we don't even have proper words for.
  • The Transition Web is a culture model. If it's gonna work, it's got to replace things that we used to pay for with things that we just plain use.
  • Not every Internet address was a dotcom. In fact, dotcoms showed up pretty late in the day, and they were not exactly welcome. There were dot-orgs, dot edus, dot nets, dot govs, and dot localities. Once upon a time there were lots of social enterprises that lived outside the market; social movements, political parties, mutual aid societies, philanthropies. Churches, criminal organizations -- you're bound to see plenty of both of those in a transition... Labor unions... not little ones, but big ones like Solidarity in Poland; dissident organizations, not hobby activists, big dissent, like Charter 77 in Czechoslovakia. Armies, national guards. Rescue operations. Global non-governmental organizations. Davos Forums, Bilderberg guys. Retired people. The old people can't hold down jobs in the market. Man, there's a lot of 'em. Billions. What are our old people supposed to do with themselves? Websurf, I'm thinking. They're wise, they're knowledgeable, they're generous by nature; the 21st century is destined to be an old people's century. Even the Chinese, Mexicans, Brazilians will be old. Can't the web make some use of them, all that wisdom and talent, outside the market?
  • I've never seen so much panic around me, but panic is the last thing on my mind. My mood is eager impatience. I want to see our best, most creative, best-intentioned people in world society directly attacking our worst problems. I'm bored with the deceit. I'm tired of obscurantism and cover-ups. I'm disgusted with cynical spin and the culture war for profit. I'm up to here with phony baloney market fundamentalism. I despise a prostituted society where we put a dollar sign in front of our eyes so we could run straight into the ditch. The cure for panic is action. Coherent action is great; for a scatterbrained web society, that may be a bit much to ask. Well, any action is better than whining. We can do better.
Ed Webb

Teaching Naked - without Powerpoint « HeyJude - 0 views

  • The idea is that we should challenge thinking, inspire creativity, and stir up discussion with a PowerPoint presentation – not present a series of dry facts.
  • More than anything else, Mr. Bowen wants to discourage professors from using PowerPoint, because they often lean on the slide-display program as a crutch rather than using it as a creative tool. Class time should be reserved for discussion, he contends, especially now that students can download lectures online and find libraries of information on the Web.
Ed Webb

It's Time To Hide The Noise - 0 views

  • the noise is worse than ever. Indeed, it is being magnified every day as more people pile onto Twitter and Facebook and new apps yet to crest like Google Wave. The data stream is growing stronger, but so too is the danger of drowning in all that information.
  • the fact that Seesmic or TweetDeck or any of these apps can display 1,200 Tweets at once is not a feature, it’s a bug
  • if you think Twitter is noisy, wait until you see Google Wave, which doesn’t hide anything at all.  Imagine that Twhirl image below with a million dialog boxes on your screen, except you see as other people type in their messages and add new files and images to the conversation, all at once as it is happening.  It’s enough to make your brain explode.
  • all I need is two columns: the most recent Tweets from everyone I follow (the standard) and the most interesting tweets I need to pay attention to.  Recent and Interesting.  This second column is the tricky one.  It needs to be automatically generated and personalized to my interests at that moment.
  • search is broken on Twitter.  Unless you know the exact word you are looking for, Tweets with related terms won’t show up.  And there is no way to sort searches by relevance, it is just sorted by chronology.
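
The "Recent and Interesting" idea in the excerpt above is, at bottom, a ranking problem: keep one column in chronological order and rank the other against the reader's current interests. The sketch below is a minimal illustration under assumed data shapes; the Tweet structure, interest keywords, and keyword-overlap score are inventions for demonstration, not any real Twitter, Seesmic, or TweetDeck API.

```python
# Minimal sketch of a "Recent" column and a personalized "Interesting" column.
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    timestamp: float  # seconds since epoch

def recent_column(tweets: list, limit: int = 20) -> list:
    """The standard view: newest first."""
    return sorted(tweets, key=lambda t: t.timestamp, reverse=True)[:limit]

def interesting_column(tweets: list, interests: set, limit: int = 20) -> list:
    """Rank by overlap with the reader's interest keywords, then recency."""
    def score(t: Tweet) -> int:
        return len(interests & set(t.text.lower().split()))
    ranked = sorted(tweets, key=lambda t: (score(t), t.timestamp), reverse=True)
    return [t for t in ranked if score(t) > 0][:limit]

tweets = [
    Tweet("a", "google wave demo is wild", 100.0),
    Tweet("b", "lunch was great", 200.0),
    Tweet("c", "new tweetdeck filters for twitter noise", 150.0),
]
print(interesting_column(tweets, interests={"twitter", "noise", "wave"}))
```

Exact keyword overlap also illustrates the search complaint in the last excerpt: without stemming or related-term matching, relevant items that use different words never score above zero.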
Ed Webb

The Google Wave chatting tool is too complicated for its own good. - By Farhad Manjoo -... - 0 views

  • The Google Wave chatting tool is too complicated for its own good.
  • Chatting on Wave is like talking to an overcurious mind reader.
  • This behavior is so corrosive to normal conversation that you'd think it was some kind of bug. In fact, it's a feature—indeed, it's one of the Wave team's proudest accomplishments.
  • In many cases, the software creates new headaches by attempting to fix aspects of online communication that don't need fixing.
Ed Webb

The Ed-Tech Imaginary - 0 views

  • We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?
  • "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.
  • if Betsy DeVos is out there "reimagining," then we best be resisting
  • I think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."
  • The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.
  • The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version
  • Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention.
  • we must also decolonize the ed-tech imaginary
  • Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.
  • This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century
  • Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner, even though I'd say that most practitioners today would say that they reject his theories; that cognitive science has supplanted behaviorism; and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner any more — which is odd considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.
  • so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest
Ed Webb

ChatGPT Is a Blurry JPEG of the Web | The New Yorker - 0 views

  • Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
  • a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
  • ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
  • large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models
  • Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory
  • The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
  • starting with a blurry copy of unoriginal work isn’t a good way to create original work
  • If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
  • Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information.
  • Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
  • Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
  • What use is there in having something that rephrases the Web?
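
The compression analogy running through the excerpts above can be made concrete with a toy example: throw away most of the words, then rebuild a full-length text by guessing at the gaps. The guessed words play the role of hallucinations, i.e. compression artifacts. This is a deliberately crude sketch of the analogy, not a description of how ChatGPT or any large-language model actually works.

```python
# Toy illustration of lossy text compression and "hallucinated" reconstruction.
import random

def lossy_compress(text: str, keep_every: int = 4):
    """Keep only every `keep_every`-th word, remembering its position."""
    words = text.split()
    kept = {i: w for i, w in enumerate(words) if i % keep_every == 0}
    return kept, len(words)

def reconstruct(kept: dict, total_len: int, seed: int = 0) -> str:
    """Rebuild a full-length text; gaps are filled with guesses drawn from
    the surviving words -- plausible in shape, fabricated in detail."""
    random.seed(seed)
    vocab = list(kept.values())
    return " ".join(kept.get(i, random.choice(vocab)) for i in range(total_len))

original = ("a compression scheme that must rebuild text after most of the "
            "original has been discarded will fabricate much of what it returns")
kept, n = lossy_compress(original)
print(reconstruct(kept, n))
# Every word it "remembers" is right; everything it fills in is an artifact.
```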
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”