Instructional & Media Services at Dickinson College / Group items tagged: publishing

Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
Ed Webb

Dark Social: We Have the Whole History of the Web Wrong - Alexis C. Madrigal - The Atlantic

  • this vast trove of social traffic is essentially invisible to most analytics programs. I call it DARK SOCIAL. It shows up variously in programs as "direct" or "typed/bookmarked" traffic, which implies to many site owners that you actually have a bookmark or typed in www.theatlantic.com into your browser. But that's not actually what's happening a lot of the time. Most of the time, someone Gchatted someone a link, or it came in on a big email distribution list, or your dad sent it to you
  • the idea that "social networks" and "social media" sites created a social web is pervasive. Everyone behaves as if the traffic your stories receive from the social networks (Facebook, Reddit, Twitter, StumbleUpon) is the same as all of your social traffic
  • if you think optimizing your Facebook page and Tweets is "optimizing for social," you're only halfway (or maybe 30 percent) correct. The only real way to optimize for social spread is in the nature of the content itself. There's no way to game email or people's instant messages. There's no power users you can contact. There's no algorithms to understand. This is pure social, uncut
  • Almost 69 percent of social referrals were dark! Facebook came in second at 20 percent. Twitter was down at 6 percent
  • direct social
  • the social sites that arrived in the 2000s did not create the social web, but they did structure it. This is really, really significant. In large part, they made sharing on the Internet an act of publishing (!), with all the attendant changes that come with that switch. Publishing social interactions makes them more visible, searchable, and adds a lot of metadata to your simple link or photo post. There are some great things about this, but social networks also give a novel, permanent identity to your online persona. Your taste can be monetized, by you or (much more likely) the service itself
  • the tradeoff we make on social networks is not the one that we're told we're making. We're not giving our personal data in exchange for the ability to share links with friends. Massive numbers of people -- a larger set than exists on any social network -- already do that outside the social networks. Rather, we're exchanging our personal data for the ability to publish and archive a record of our sharing. That may be a transaction you want to make, but it might not be the one you've been told you made.
  • "Only about four percent of total traffic is on mobile at all, so, at least as a percentage of total referrals, app referrals must be a tiny percentage,"
  • only 0.3 percent of total traffic has the Facebook mobile site as a referrer and less than 0.1 percent has the Facebook mobile app
  • Heh. Social is really social, not 'social' - who knew? (See the referrer-classification sketch below.)
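
A minimal sketch, not from Madrigal's article, of the mechanism it describes: analytics programs bucket each page view by its HTTP Referer header, so a visit that arrives with no referrer at all, whether from a bookmark, a typed URL, or a link pasted into chat or email, lands in the same "direct" bucket. The host list and function name below are hypothetical illustrations.

```python
# Hedged illustration: how a referrer-based classifier lumps "dark social"
# traffic (chat, email, texted links) into the same bucket as typed or
# bookmarked visits. Host list and names are invented for this sketch.
from urllib.parse import urlparse

# Hypothetical set of hosts an analytics tool would label "social".
SOCIAL_HOSTS = {"facebook.com", "twitter.com", "reddit.com", "stumbleupon.com"}

def classify_referrer(referrer):
    """Bucket one page view by its (possibly absent) Referer header."""
    if not referrer:
        # No header: a bookmark, a typed URL, or a shared link --
        # the analytics program cannot tell these apart.
        return "direct / dark social"
    host = urlparse(referrer).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return f"social ({host})" if host in SOCIAL_HOSTS else f"other ({host})"

if __name__ == "__main__":
    for ref in (None, "https://www.facebook.com/", "https://news.example.com/a"):
        print(repr(ref), "->", classify_referrer(ref))
```

In the article, Chartbeat's heuristic for separating the two no-referrer cases was the landing URL: nobody hand-types a long article permalink, so no-referrer arrivals at deep URLs were counted as dark social, while no-referrer arrivals at the homepage were treated as genuinely typed or bookmarked.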
Ed Webb

Keep the 'Research,' Ditch the 'Paper' - Commentary - The Chronicle of Higher Education

  • we need to construct meaningful opportunities for students to actually engage in research—to become modest but real contributors to the research on an actual question. When students write up the work they’ve actually performed, they create data and potential contributions to knowledge, contributions that can be digitally published or shared with a target community
  • Schuman’s critique of traditional writing instruction is sadly accurate. The skill it teaches most students is little more than a smash-and-grab assault on the secondary literature. Students open a window onto a search engine or database. They punch through to the first half-dozen items. Snatching random gems that seem to support their preconceived thesis, they change a few words, cobble it all together with class notes in the form of an argument, and call it "proving a thesis."
  • What happens when a newly employed person tries to pass off quote-farmed drivel as professional communication?
  • Generally these papers are just pumped-up versions of the five-paragraph essay, with filler added. Thesis-driven, argumentative, like the newspaper editorials the genre is based on, this "researched writing" promises to solve big questions with little effort: "Reproductive rights resolved in five pages!"
  • Actual writing related to research is modest, qualified, and hesitant
  • our actual model involves elaborately respectful conversation, demonstrating sensitivity to the most nuanced claims of previous researchers
  • Academic, legal, medical, and business writing has easily understandable conventions. We responsibly survey the existing literature, formally or informally creating an annotated bibliography. We write a review of the literature, identifying a "blank" spot ignored by other scholars, or a "bright" spot where we see conflicting evidence. We describe the nature of our research in terms of a contribution to the blank or bright spot in that conversation. We conclude by pointing to further questions.
  • Millions of pieces of research writing that aren’t essays usefully circulate in the profession through any number of sharing technologies, including presentations and posters; grant and experiment proposals; curated, arranged, translated, or visualized data; knowledgeable dialogue in online media with working professionals; independent journalism, arts reviews, and Wikipedia entries; documentary pitches, scripts and storyboards; and informative websites.
  • real researchers don’t write a word unless they have something to contribute. We should teach our students to do the same
Ed Webb

PSU Aggregates Democracy at bavatuesdays

  • I propose we (you and me) get off our asses and put together an east coast higher ed blogger con that focuses not on a particular platform, but instead on the affordances of an open publishing platform. We’ll host here in State College or we can do it elsewhere — doesn’t matter to me. What do you think? A two day event that could (maybe) eventually rival Northern Voice … that may be shooting too high, but we need to set a bar somewhere.
  • Want!
Ed Webb

The Internet Intellectual

  • Even Thomas Friedman would be aghast at some of Jarvis’s cheesy sound-bites
  • What does that actually mean?
  • In Jarvis’s universe, all the good things are technologically determined and all the bad things are socially determined
  • Jarvis never broaches such subtleties. His is a simple world:
  • why not consider the possibility that the incumbents may be using the same tools, Jarvis’s revered technologies, to tell us what to think, and far more effectively than before? Internet shelf space may be infinite, but human attention is not. Cheap self-publishing marginally improves one’s chances of being heard, but nothing about this new decentralized public sphere suggests that old power structures—provided they are smart and willing to survive—will not be able to use it to their benefit
  • Jarvis 1.0 was all about celebrating Google, but Jarvis 2.0 has new friends in Facebook and Twitter. (An Internet intellectual always keeps up.) Jarvis 1.0 wrote that “Google’s moral of universal empowerment is the sometimes-forgotten ideal of democracy,” and argued that the company “provides the infrastructure for a culture of choice,” while its “algorithms and its business model work because Google trusts us.” Jarvis 2.0 claims that “by sharing publicly, we people challenge Google’s machines and reclaim our authority on the internet from algorithms.”
  • Jarvis has another reference point, another sacred telos: the equally grand and equally inexorable march of the Internet, which in his view is a technology that generates its own norms, its own laws, its own people. (He likes to speak of “us, people of the Net.”) For the Technology Man, the Internet is the glue that holds our globalized world together and the divine numen that fills it with meaning. If you thought that ethnocentrism was bad, brace yourself for Internet-centrism
  • Why worry about the growing dominance of such digitalism? The reason should be obvious. As Internet-driven explanations crowd out everything else, our entire vocabulary is being re-defined. Collaboration is re-interpreted through the prism of Wikipedia; communication, through the prism of social networking; democratic participation, through the prism of crowd-sourcing; cosmopolitanism, through the prism of reading the blogs of exotic “others”; political upheaval, through the prism of the so-called Twitter revolutions. Even the persecution of dissidents is now seen as an extension of online censorship (rather than the other way around). A recent headline on the blog of the Harvard-based Herdict project—it tracks Internet censorship worldwide—announces that, in Mexico and Morocco, “Online Censorship Goes Offline.” Were activists and dissidents never harassed before Twitter and Facebook?
  • Most Internet intellectuals simply choose a random point in the distant past—the honor almost invariably goes to the invention of the printing press—and proceed to draw a straight line from Gutenberg to Zuckerberg, as if the Counter-Reformation, the Thirty Years’ War, the Reign of Terror, two world wars—and everything else—never happened.
  • even their iPad is of interest to them only as a “platform”—another buzzword of the incurious—and not as an artifact that is assembled in dubious conditions somewhere in East Asian workshops so as to produce cultic devotion in its more fortunate owners. This lack of elementary intellectual curiosity is the defining feature of the Internet intellectual. History, after all, is about details, but no Internet intellectual wants to be accused of thinking small. And so they think big—sloppily, ignorantly, pretentiously, and without the slightest appreciation of the difference between critical thought and market propaganda.
  • In which Evgeny rips Jeff a new one
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E...

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses.
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on. “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
Ed Webb

Waving the Asynchronous Flag - CogDogBlog

  • in all the pivot talk, there’s a tinge of favoring the synchronous over the asynchronous
  • it’s not synchronous BAD / asynchronous GOOD
  • In terms of teaching, it seems now seen through sepia-toned web glasses, is one of my favorite approaches, of participants/learners creating/writing/publishing in their own spaces and the class space being a syndication hub. The old gold ds106, which, as I must remind, is still chugging along after 10 years, while in that span, most every Name Your Tech Fad has crested and sunk to the bottom of the Gartner hype trough
  • I think we ought to be placing a lot of thought and effort into asynchronous events and activities
  • The whole idea of distributed activity, woven in with daily challenges and assignment banks, was asynchronous beauty. But not without synchronous bits, be it class visits or running live sessions on ds106radio. Twas a mix.
Ed Webb

The Ed-Tech Imaginary

  • We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?
  • "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.
  • if Betsy DeVos is out there "reimagining," then we best be resisting
  • think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."
  • The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.
  • The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version
  • Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention.
  • we must also decolonize the ed-tech imaginary
  • Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.
  • This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century
  • Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner, even though I'd say that most practitioners today would say that they reject his theories; that cognitive science has supplanted behaviorism; and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner any more — which is odd considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.
  • so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest
Ed Webb

Google and Meta moved cautiously on AI. Then came OpenAI's ChatGPT. - The Washington Post

  • The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside
  • Tech giants have been skittish since public debacles like Microsoft’s Tay, which it took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right and tweet “Jews did 9/11.”
  • Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.
  • Silicon Valley’s sudden willingness to consider taking more reputational risk arrives as tech stocks are tumbling
  • A chatbot that pointed to one answer directly from Google could increase its liability if the response was found to be harmful or plagiarized.
  • AI has been through several hype cycles over the past decade, but the furor over DALL-E and ChatGPT has reached new heights.
  • Soon after OpenAI released ChatGPT, tech influencers on Twitter began to predict that generative AI would spell the demise of Google search. ChatGPT delivered simple answers in an accessible way and didn’t ask users to rifle through blue links. Besides, after a quarter of a century, Google’s search interface had grown bloated with ads and marketers trying to game the system.
  • Inside big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI isn’t as established as privacy or data security. Typically teams of AI researchers and engineers publish papers on their findings, incorporate their technology into the company’s existing infrastructure or develop new products, a process that can sometimes clash with other teams working on responsible AI over pressure to see innovation reach the public sooner.
  • Chatbots like OpenAI’s ChatGPT routinely make factual errors and often switch their answers depending on how a question is asked
  • To Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team doesn’t necessarily signal a shift in power or safety concerns, because those warning of the potential harms were never empowered to begin with. “If we were lucky, we’d get invited to a meeting,” said Gebru, who helped lead Google’s Ethical AI team until she was fired for a paper criticizing large language models.
  • Rumman Chowdhury, who led Twitter’s machine-learning ethics team until Elon Musk disbanded it in November, said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI. “We thought it was going to be China pushing the U.S., but looks like it’s start-ups,” she said.