
Instructional & Media Services at Dickinson College: Group items tagged "forms"


Ed Webb

William Davies · How many words does it take to make a mistake? Education, Ed... - 0 views

  • The problem waiting round the corner for universities is essays generated by AI, which will leave a textual pattern-spotter like Turnitin in the dust. (Earlier this year, I came across one essay that felt deeply odd in some not quite human way, but I had no tangible evidence that anything untoward had occurred, so that was that.)
  • To accuse someone of plagiarism is to make a moral charge regarding intentions. But establishing intent isn’t straightforward. More often than not, the hearings bleed into discussions of issues that could be gathered under the heading of student ‘wellbeing’, which all universities have been struggling to come to terms with in recent years.
  • I have heard plenty of dubious excuses for acts of plagiarism during these hearings. But there is one recurring explanation which, it seems to me, deserves more thoughtful consideration: ‘I took too many notes.’ It isn’t just students who are familiar with information overload, one of whose effects is to morph authorship into a desperate form of curatorial management, organising chunks of text on a screen. The discerning scholarly self on which the humanities depend was conceived as the product of transitions between spaces – library, lecture hall, seminar room, study – linked together by work with pen and paper. When all this is replaced by the interface with screen and keyboard, and everything dissolves into a unitary flow of ‘content’, the identity of the author – as distinct from the texts they have read – becomes harder to delineate.
  • ...19 more annotations...
  • This generation, the first not to have known life before the internet, has acquired a battery of skills in navigating digital environments, but it isn’t clear how well those skills line up with the ones traditionally accredited by universities.
  • From the perspective of students raised in a digital culture, the anti-plagiarism taboo no doubt seems to be just one more academic hang-up, a weird injunction to take perfectly adequate information, break it into pieces and refashion it. Students who pay for essays know what they are doing; others seem conscientious yet intimidated by secondary texts: presumably they won’t be able to improve on them, so why bother trying? For some years now, it’s been noticeable how many students arrive at university feeling that every interaction is a test they might fail. They are anxious. Writing seems fraught with risk, a highly complicated task that can be executed correctly or not.
  • Many students may like the flexibility recorded lectures give them, but the conversion of lectures into yet more digital ‘content’ further destabilises traditional conceptions of learning and writing
  • the evaluation forms which are now such a standard feature of campus life suggest that many students set a lot of store by the enthusiasm and care that are features of a good live lecture
  • the drift of universities towards a platform model, which makes it possible for students to pick up learning materials as and when it suits them. Until now, academics have resisted the push for ‘lecture capture’. It causes in-person attendance at lectures to fall dramatically, and it makes many lecturers feel like mediocre television presenters. Unions fear that extracting and storing teaching for posterity threatens lecturers’ job security and weakens the power of strikes. Thanks to Covid, this may already have happened.
  • In the utopia sold by the EdTech industry (the companies that provide platforms and software for online learning), pupils are guided and assessed continuously. When one task is completed correctly, the next begins, as in a computer game; meanwhile the platform providers are scraping and analysing data from the actions of millions of children. In this behaviourist set-up, teachers become more like coaches: they assist and motivate individual ‘learners’, but are no longer so important to the provision of education. And since it is no longer the sole responsibility of teachers or schools to deliver the curriculum, it becomes more centralised – the latest front in a forty-year battle to wrest control from the hands of teachers and local authorities.
  • an injunction against creative interpretation and writing, a deprivation that working-class children will feel at least as deeply as anyone else.
  • There may be very good reasons for delivering online teaching in segments, punctuated by tasks and feedback, but as Yandell observes, other ways of reading and writing are marginalised in the process. Without wishing to romanticise the lonely reader (or, for that matter, the lonely writer), something is lost when alternating periods of passivity and activity are compressed into interactivity, until eventually education becomes a continuous cybernetic loop of information and feedback. How many keystrokes or mouse-clicks before a student is told they’ve gone wrong? How many words does it take to make a mistake?
  • This vision of language as code may already have been a significant feature of the curriculum, but it appears to have been exacerbated by the switch to online teaching. In a journal article from August 2020, ‘Learning under Lockdown: English Teaching in the Time of Covid-19’, John Yandell notes that online classes create wholly closed worlds, where context and intertextuality disappear in favour of constant instruction. In these online environments, reading is informed not by prior reading experiences but by the toolkit that the teacher has provided, and ... is presented as occurring along a tramline of linear development. Different readings are reducible to better or worse readings: the more closely the student’s reading approximates to the already finalised teacher’s reading, the better it is. That, it would appear, is what reading with precision looks like.
  • Constant interaction across an interface may be a good basis for forms of learning that involve information-processing and problem-solving, where there is a right and a wrong answer. The cognitive skills that can be trained in this way are the ones computers themselves excel at: pattern recognition and computation. The worry, for anyone who cares about the humanities in particular, is about the oversimplifications required to conduct other forms of education in these ways.
  • Blanket surveillance replaces the need for formal assessment.
  • Confirming Adorno’s worst fears of the ‘primacy of practical reason’, reading is no longer dissociable from the execution of tasks. And, crucially, the ‘goals’ to be achieved through the ability to read, the ‘potential’ and ‘participation’ to be realised, are economic in nature.
  • since 2019, with the Treasury increasingly unhappy about the amount of student debt still sitting on the government’s balance sheet and the government resorting to ‘culture war’ at every opportunity, there has been an effort to single out degree programmes that represent ‘poor value for money’, measured in terms of graduate earnings. (For reasons best known to itself, the usually independent Institute for Fiscal Studies has been leading the way in finding correlations between degree programmes and future earnings.) Many of these programmes are in the arts and humanities, and are now habitually referred to by Tory politicians and their supporters in the media as ‘low-value degrees’.
  • studying the humanities may become a luxury reserved for those who can fall back on the cultural and financial advantages of their class position. (This effect has already been noticed among young people going into acting, where the results are more visible to the public than they are in academia or heritage organisations.)
  • given the changing class composition of the UK over the past thirty years, it’s not clear that contemporary elites have any more sympathy for the humanities than the Conservative Party does. A friend of mine recently attended an open day at a well-known London private school, and noticed that while there was a long queue to speak to the maths and science teachers, nobody was waiting to speak to the English teacher. When she asked what was going on, she was told: ‘I’m afraid parents here are very ambitious.’ Parents at such schools, where fees have tripled in real terms since the early 1980s, tend to work in financial and business services themselves, and spend their own days profitably manipulating and analysing numbers on screens. When it comes to the transmission of elite status from one generation to the next, Shakespeare or Plato no longer has the same cachet as economics or physics.
  • Leaving aside the strategic political use of terms such as ‘woke’ and ‘cancel culture’, it would be hard to deny that we live in an age of heightened anxiety over the words we use, in particular the labels we apply to people. This has benefits: it can help to bring discriminatory practices to light, potentially leading to institutional reform. It can also lead to fruitless, distracting public arguments, such as the one that rumbled on for weeks over Angela Rayner’s description of Conservatives as ‘scum’. More and more, words are dredged up, edited or rearranged for the purpose of harming someone. Isolated words have acquired a weightiness in contemporary politics and public argument, while on digital media snippets of text circulate without context, as if the meaning of a single sentence were perfectly contained within it, walled off from the surrounding text. The exemplary textual form in this regard is the newspaper headline or corporate slogan: a carefully curated series of words, designed to cut through the blizzard of competing information.
  • Visit any actual school or university today (as opposed to the imaginary ones described in the Daily Mail or the speeches of Conservative ministers) and you will find highly disciplined, hierarchical institutions, focused on metrics, performance evaluations, ‘behaviour’ and quantifiable ‘learning outcomes’.
  • If young people today worry about using the ‘wrong’ words, it isn’t because of the persistence of the leftist cultural power of forty years ago, but – on the contrary – because of the barrage of initiatives and technologies dedicated to reversing that power. The ideology of measurable literacy, combined with a digital net that has captured social and educational life, leaves young people ill at ease with the language they use and fearful of what might happen should they trip up.
  • It has become clear, as we witness the advance of Panopto, Class Dojo and the rest of the EdTech industry, that one of the great things about an old-fashioned classroom is the facilitation of unrecorded, unaudited speech, and of uninterrupted reading and writing.
Ed Webb

9 Ways Online Teaching Should be Different from Face-to-Face | Cult of Pedagogy - 0 views

  • Resist the temptation to dive right into curriculum at the start of the school year. Things will go more smoothly if you devote the early weeks to building community so students feel connected. Social emotional skills can be woven in during this time. On top of that, students need practice with whatever digital tools you’ll be using. So focus your lessons on those things, intertwining the two when possible. 
  • Online instruction is made up largely of asynchronous instruction, which students can access at any time. This is ideal, because requiring attendance for synchronous instruction puts some students at an immediate disadvantage if they don’t have the same access to technology, reliable internet, or a flexible home schedule. 
  • you’re likely to offer “face-to-face” or synchronous opportunities at some point, and one way to make them happen more easily is to have students meet in small groups. While it’s nearly impossible to arrange for 30 students to attend a meeting at once, assigning four students to meet is much more manageable.
  • ...9 more annotations...
  • What works best, Kitchen says, is to keep direct instruction—things like brief video lectures and readings—in asynchronous form, using checks for understanding like embedded questions or exit slips.  You can then use synchronous meetings for more interactive, engaging work. “If we want students showing up, if we want them to know that this is worth their time,” Kitchen explains, “it really needs to be something active and engaging for them. Any time they can work with the material, categorize it, organize it, share further thoughts on it, have a discussion, all of those are great things to do in small groups.” 
  • The Jigsaw method, where students form expert groups on a particular chunk of content, then teach that content to other students. Discussion strategies adapted for virtual settings. Using best practices for cooperative learning. Visible Thinking routines. Gamestorming and other business-related protocols adapted for education, where students take on the role of customers/stakeholders.
  • What really holds leverage for the students? What has endurance? What knowledge is essential? What knowledge and skills do students need to have before they move to the next grade level or the next class? What practices can be emphasized that transfer across many content areas? Skills like analyzing, constructing arguments, building a strong knowledge base through texts, and speaking can all be taught through many different subjects. What tools can serve multiple purposes? Teaching students to use something like Padlet gives them opportunities to use audio, drawing, writing, and video. Non-digital tools can also work: students can use things they find around the house, like toilet paper rolls, to fulfill other assignments, and then submit their work with a photo.
  • Online instruction is not conducive to covering large amounts of content, so you have to choose wisely, teaching the most important things at a slower pace.
  • Provide instructions in a consistent location and at a consistent time. This advice was already given for parents, but it’s worth repeating here through the lens of instructional design: set up lessons so that students know where to find instructions every time. Make instructions explicit. Read and re-read to make sure these are as clear as possible. Make dogfooding your lessons a regular practice to root out problem areas. Offer multimodal instructions. If possible, provide both written and video instructions for assignments, so students can choose the format that works best for them. You might also offer a synchronous weekly or daily meeting; what’s great about doing these online is that even if you teach several sections of the same class per day, students are no longer restricted to class times and can attend whatever meeting works best for them.
  • put the emphasis on formative feedback as students work through assignments and tasks, rather than simply grading them at the end. 
  • In online learning, Kitchen says, “There are so many ways that students can cheat, so if we’re giving them just the traditional quiz or test, it’s really easy for them to be able to just look up that information.” A great solution to this problem is to have students create things.
  • For assessment, use a detailed rubric that highlights the learning goals the end product will demonstrate. A single-point rubric works well for this. To help students discover tools to work with, this list of tools is organized by the type of product each one creates. Another great source of ideas is the Teacher’s Guide to Tech. When developing the assignment, rather than focusing on the end product, start by getting clear on what you want students to DO with that product.
  • Clear and consistent communication. Creating explicit and consistent rituals and routines. Using research-based instructional strategies. Determining whether to use digital or non-digital tools for an assignment. A focus on authentic learning, where authentic products are created and students have voice and choice in assignments.
Ed Webb

Reflections on open courses « Connectivism - 0 views

  • There is value in blending traditional with emergent knowledge spaces (online conferences and traditional journals) - Learners will create and innovate if they can express ideas and concepts in their own spaces and through their own expertise (i.e. hosting events in Second Life) - Courses are platforms for innovation. Too rigid a structure puts the educator in full control. Using a course as a platform fosters creativity…and creativity generates a bit of chaos and can be unsettling to individuals who prefer a structure with which they are familiar. - (cliche) Letting go of control is a bit stressful, but surprisingly rewarding in the new doors it opens and liberating in how it brings others in to assist in running a course and advancing the discussion. - People want to participate…but they will only do so once they have “permission” and a forum in which to utilize existing communication/technological skills.
  • The internet is a barrier-reducing system. In theory, everyone has a voice online (the reality of technology ownership, digital skills, and internet access add an unpleasant dimension). Costs of duplication are reduced. Technology (technique) is primarily a duplicationary process, as evidenced by the printing press, assembly line, and now the content duplication ability of digital technologies. As a result, MOOCs embody, rather than reflect, practices within the digital economy. MOOCs reduce barriers to information access and to the dialogue that permits individuals (and society) to grow knowledge. Much of the technical innovation in the last several centuries has permitted humanity to extend itself physically (cars, planes, trains, telescopes). The internet, especially in recent developments of connective and collaborative applications, is a cognitive extension for humanity. Put another way, the internet offers a model where the reproduction of knowledge is not confined to the production of physical objects.
  • Knowledge is a mashup. Many people contribute. Many different forums are used. Multiple media permit varied and nuanced expressions of knowledge. And, because the information base (which is required for knowledge formation) changes so rapidly, being properly connected to the right people and information is vitally important. The need for proper connectedness to the right people and information is readily evident in intelligence communities. Consider the Christmas day bomber. Or 9/11. The information was being collected. But not connected.
  • ...11 more annotations...
  • The open model of participation calls into question where value is created in the education system. Gutenberg created a means to duplicate content. The social web creates the opportunity for many-to-many interactions and to add a global social layer on content creation and knowledge growth.
  • Whatever can be easily duplicated cannot serve as the foundation for economic value. Integration and connectedness are economic value points.
  • In education, content can easily be produced (it’s important but has limited economic value). Lectures also have limited value (easy to record and to duplicate). Teaching – as done in most universities – can be duplicated. Learning, on the other hand, can’t be duplicated. Learning is personal, it has to occur one learner at a time. The support needed for learners to learn is a critical value point.
  • Learning, however, requires a human, social element: both peer-based and through interaction with subject area experts
  • Content is readily duplicated, reducing its value economically. It is still critical for learning – all fields have core elements that learners must master before they can advance (research in expertise supports this notion). - Teaching can be duplicated (lectures can be recorded, Elluminate or similar webconferencing system can bring people from around the world into a class). Assisting learners in the learning process, correcting misconceptions (see Private Universe), and providing social support and brokering introductions to other people and ideas in the discipline is critical. - Accreditation is a value statement – it is required when people don’t know each other. Content was the first area of focus in open education. Teaching (i.e. MOOCs) are the second. Accreditation will be next, but, before progress can be made, profile, identity, and peer-rating systems will need to improve dramatically. The underlying trust mechanism on which accreditation is based cannot yet be duplicated in open spaces (at least, it can’t be duplicated to such a degree that people who do not know each other will trust the mediating agent of open accreditation)
  • The skills that are privileged and rewarded in a MOOC are similar to those that are needed to be effective in communicating with others and interacting with information online (specifically, social media and information sources like journals, databases, videos, lectures, etc.). Creative skills are the most critical. Facilitators and learners need something to “point to”. When a participant creates an insightful blog post, a video, a concept map, or other resource/artifact it generally gets attention.
  • Intentional diversity – not necessarily a digital skill, but the ability to self-evaluate one’s network and ensure diversity of ideologies is critical when information is fragmented and is at risk of being sorted by single perspectives/ideologies.
  • The volume of information is very disorienting in a MOOC. For example, in CCK08, the initial flow of postings in Moodle, three weekly live sessions, Daily newsletter, and weekly readings and assignments proved to be overwhelming for many participants. Stephen and I somewhat intentionally structured the course for this disorienting experience. Deciding who to follow, which course concepts are important, and how to form sub-networks and sub-systems to assist in sensemaking are required to respond to information abundance. The process of coping and wayfinding (ontology) is as much a lesson in the learning process as mastering the content (epistemology). Learners often find it difficult to let go of the urge to master all content, read all the comments and blog posts.
  • Learning is a social trust-based process.
  • Patience, tolerance, suspension of judgment, and openness to other cultures and ideas are required to form social connections and negotiate misunderstandings.
  • An effective digital citizenry needs the skills to participate in important conversations. The growth of digital content and social networks raises the need for citizens to have the technical and conceptual skills to express their ideas and engage with others in those spaces. MOOCs are first-generation testing grounds for knowledge growth in a distributed, global, digital world. Their role in developing a digital citizenry is still unclear, but democratic societies require a populace with the skills to participate in growing a society’s knowledge. As such, MOOCs, or similar open transparent learning experiences that foster the development of citizens’ confidence to engage and create collaboratively, are important for the future of society.
Ed Webb

Social Media is Killing the LMS Star - A Bootleg of Bryan Alexander's Lost Presentation... - 0 views

  • Note that this isn’t just a technological alternate history. It also describes a different set of social and cultural practices.
  • CMSes lumber along like radio, still playing into the air as they continue to gradually shift ever farther away on the margins. In comparison, Web 2.0 is like movies and tv combined, plus printed books and magazines. That’s where the sheer scale, creative ferment, and wide-ranging influence reside. This is the necessary background for discussing how to integrate learning and the digital world.
  • These virtual classes are like musical practice rooms, small chambers where one may try out the instrument in silent isolation. It is not connectivism but disconnectivism.
  • ...11 more annotations...
  • CMSes shift from being merely retrograde to being actively regressive if we consider the broader, subtler changes in the digital teaching landscape. Web 2.0 has rapidly grown an enormous amount of content through what Yochai Benkler calls “peer-based commons production.” One effect of this has been to grow a large area for informal learning, which students (and staff) access without our benign interference. Students (and staff) also contribute to this peering world; more on this later. For now, we can observe that as teachers we grapple with this mechanism of change through many means, but the CMS in its silo’d isolation is not a useful tool.
  • those curious about teaching with social media have easy access to a growing, accessible community of experienced staff by means of those very media. A meta-community of Web 2.0 academic practitioners is now too vast to catalogue. Academics in every discipline blog about their work. Wikis record their efforts and thoughts, as do podcasts. The reverse is true of the CMS, the very architecture of which forbids such peer-to-peer information sharing. For example, the Resource Center for Cyberculture Studies (RCCS) has for many years maintained a descriptive listing of courses about digital culture across the disciplines. During the 1990s that number grew with each semester. But after the explosive growth of CMSes that number dwindled. Not the number of classes taught, but the number of classes which could even be described. According to the RCCS’ founder, David Silver (University of San Francisco), this is due to the isolation of class content in CMS containers.
  • unless we consider the CMS environment to be a sort of corporate intranet simulation, the CMS set of community skills is unusual, rarely applicable to post-graduation examples. In other words, while a CMS might help privacy concerns, it is at best a partial, not sufficient solution, and can even be inappropriate for already online students.
  • That experiential, teachable moment of selecting one’s copyright stance is eliminated by the CMS.
  • Another argument in favor of CMSes over Web 2.0 concerns the latter’s open nature. It is too open, goes the thought, constituting a “Wild West” experience of unfettered information flow and unpleasant forms of access. Campuses should run CMSes to create shielded environments, iPhone-style walled gardens that protect the learning process from the Lovecraftian chaos without.
  • social sifting, information literacy, using the wisdom of crowds, and others. Such strategies are widely discussed, easily accessed, and continually revised and honed.
  • at present, radio CMS is the Clear Channel of online learning.
  • For now, the CMS landscape is a multi-institutional dark Web, an invisible, unsearchable, un-mash-up-able archipelago of hidden learning content.
  • Can the practice of using a CMS prepare either teacher or student to think critically about this new shape for information literacy? Moreover, can we use the traditional CMS to share thoughts and practices about this topic?
  • The internet of things refers to a vastly more challenging concept, the association of digital information with the physical world. It covers such diverse instances as RFID chips attached to books or shipping pallets, connecting a product’s scanned UPC code to a Web-based database, assigning unique digital identifiers to physical locations, and the broader enterprise of augmented reality. It includes problems as varied as building search that covers both the World Wide Web and one’s mobile device, revising copyright to include digital content associated with private locations, and trying to salvage what’s left of privacy. How does this connect with our topic? Consider a recent article by Tim O’Reilly and John Battle, where they argue that the internet of things is actually growing knowledge about itself. The combination of people, networks, and objects is building descriptions about objects, largely in folksonomic form. That is, people are tagging the world, and sharing those tags. It’s worth quoting a passage in full: “It’s also possible to give structure to what appears to be unstructured data by teaching an application how to recognize the connection between the two. For example, You R Here, an iPhone app, neatly combines these two approaches. You use your iPhone camera to take a photo of a map that contains details not found on generic mapping applications such as Google maps – say a trailhead map in a park, or another hiking map. Use the phone’s GPS to set your current location on the map. Walk a distance away, and set a second point. Now your iPhone can track your position on that custom map image as easily as it can on Google maps.” (http://www.web2summit.com/web2009/public/schedule/detail/10194) What world is better placed to connect academia productively with such projects, the open social Web or the CMS?
  • imagine the CMS function of every class much like class email, a necessary feature, but not by any means the broadest technological element. Similarly the e-reserves function is of immense practical value. There may be no better way to share copyrighted academic materials with a class, at this point. These logistical functions could well play on.
Ed Webb

ChatGPT Is a Blurry JPEG of the Web | The New Yorker - 0 views

  • Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
  • a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
  • ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
  • ...9 more annotations...
  • large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models
  • Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory
  • The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
  • starting with a blurry copy of unoriginal work isn’t a good way to create original work
  • If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
  • Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information.
  • Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
  • Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
  • What use is there in having something that rephrases the Web?
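
The “blurry JPEG” annotations above turn on a technical claim: a lossy model keeps the statistical regularities of its source text and discards exact sequences, so what it reproduces is a plausible approximation rather than a verbatim copy. The toy word-level Markov chain sketched below illustrates that idea on a tiny made-up corpus; it is only a sketch of the lossy-text analogy, not of how ChatGPT itself works, and every function name and string in it is invented for the example.

    # Toy illustration of "lossy" text modelling: a word-level Markov chain keeps
    # only which word tends to follow which word, discarding the exact original text.
    # A sketch of the compression analogy only; not how a real LLM is built.
    import random
    from collections import defaultdict

    def train(corpus):
        """Record, for each word, the words observed to follow it."""
        words = corpus.split()
        table = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            table[current].append(nxt)
        return table

    def generate(table, start, length=12):
        """Regenerate plausible-looking text from the statistics alone."""
        word, output = start, [start]
        for _ in range(length):
            followers = table.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = ("the model retains much of the information in the text "
              "but you will not recover an exact sequence of words "
              "all you will ever get is an approximation of the text")
    print(generate(train(corpus), "the"))  # grammatical-looking, rarely the original wording

Run a few times, the output reads like the corpus without ever being guaranteed to reproduce it, which is the behaviour the annotation describes as a “blur” tool for paragraphs.
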
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • ...16 more annotations...
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical Al development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
Ed Webb

Start Calling it Digital Liberal Arts | The Transducer - 0 views

  • No longer an innocent place for the playful encounter between technology and interpretation, DH is now being interrogated for evidence of participation in an exclusivist technoscientific imaginary, and there are many willing to save the field by theorizing what has remained for too long undertheorized
  • This is in contrast to the digital humanities, and indeed digital scholarship as a whole, which has its heart in the edition and the archive
  • DLA is inclusive of the entire arts and sciences spectrum
  • ...5 more annotations...
  • DLA is explicitly residential and dialogical
  • Not so much a replacement as a supplement to digital humanities, DLA broadens the scope and relocates the center of gravity of what I have referred to as the digital humanities situation, the recurring, playful encounter of humanists with technology. Instead of focusing on what may better be described as the computational humanities (a useful term recently proposed by Lev Manovich), the digital liberal arts seeks to locate digital media squarely within the frame of the liberal arts, broadly conceived as a curriculum, not a discipline or even set of disciplines, and as a distinctive mode of educational experience, not a set of received theoretical concerns. It is a framing particularly suited to liberal arts colleges — America’s great contribution to higher learning — but also to universities, such as UVa, whose souls are in the liberal arts as well.
  • the idea of Coursera-style MOOCs being part of the DLA is a non-starter, although distributed and mediated forms of education can, and I think must, become part of the liberal arts experience
  • DLA is as concerned with pedagogy as it is with research
  • focusing on the real use of digital collections (for example) as much as on their creation and publication
Ed Webb

Keep the 'Research,' Ditch the 'Paper' - Commentary - The Chronicle of Higher Education - 1 views

  • we need to construct meaningful opportunities for students to actually engage in research—to become modest but real contributors to the research on an actual question. When students write up the work they’ve actually performed, they create data and potential contributions to knowledge, contributions that can be digitally published or shared with a target community
  • Schuman’s critique of traditional writing instruction is sadly accurate. The skill it teaches most students is little more than a smash-and-grab assault on the secondary literature. Students open a window onto a search engine or database. They punch through to the first half-dozen items. Snatching random gems that seem to support their preconceived thesis, they change a few words, cobble it all together with class notes in the form of an argument, and call it "proving a thesis."
  • What happens when a newly employed person tries to pass off quote-farmed drivel as professional communication?
  • ...6 more annotations...
  • Generally these papers are just pumped-up versions of the five-paragraph essay, with filler added. Thesis-driven, argumentative, like the newspaper editorials the genre is based on, this "researched writing" promises to solve big questions with little effort: "Reproductive rights resolved in five pages!"
  • Actual writing related to research is modest, qualified, and hesitant
  • our actual model involves elaborately respectful conversation, demonstrating sensitivity to the most nuanced claims of previous researchers
  • Academic, legal, medical, and business writing has easily understandable conventions. We responsibly survey the existing literature, formally or informally creating an annotated bibliography. We write a review of the literature, identifying a "blank" spot ignored by other scholars, or a "bright" spot where we see conflicting evidence. We describe the nature of our research in terms of a contribution to the blank or bright spot in that conversation. We conclude by pointing to further questions.
  • Millions of pieces of research writing that aren’t essays usefully circulate in the profession through any number of sharing technologies, including presentations and posters; grant and experiment proposals; curated, arranged, translated, or visualized data; knowledgeable dialogue in online media with working professionals; independent journalism, arts reviews, and Wikipedia entries; documentary pitches, scripts and storyboards; and informative websites.
  • real researchers don’t write a word unless they have something to contribute. We should teach our students to do the same
Ed Webb

Fahrenheit 451 in comic-book form. - By Sarah Boxer - Slate Magazine - 0 views

  • For the Clarke Forum year of popular culture.
Ed Webb

Online pivot & the absence of a magic button - The Ed Techie - 1 views

  • Now we’re getting into the online pivot more substantially, and higher education institutions are coming to terms with the possibility that it may not be a short-term emergency shift. It looks like the first semester of the 2020-21 year may be online, and if Covid-19 flares up again, who knows how long it may continue. While you could get away with “sticking classes on Zoom” for the immediate emergency, that won’t cut it in the medium term.
  • I’m sorry to tell you – there is no Go Online button
  • The good news is that it is entirely possible to create good online courses in just about any subject, and students will do well in them and their performance and long-term understanding of the topics will be as good, if not better, than those taught face to face. So that’s the good news: higher education isn’t going to die.
  • ...3 more annotations...
  • It is not cheap. It is not quick. You need to invest in building up your own staff expertise. It will bring additional problems that you didn’t have before. Students will need different types of support.
  • the main issue here is the cut in academic staff and the outsourcing of expertise. Invest in your staff.
  • a better solution is to invest in staff (and here institutions might want to get expertise in to help), use OER for content, and make strategic decisions that have as their basis the belief that online, distance ed is a useful, valid form of education.
Ed Webb

How to Turn Your Syllabus into an Infographic - The Visual Communication Guy - 0 views

  • If you’re ever going to turn a syllabus into an infographic, you must, MUST reduce the amount of text you are using. There are, of course, important things you’ll want and must include, but you can’t think of this document as ten pages of paragraphs. Strip down to only the essential information, with a bit of added info where you think some flair or excitement is needed. Remember: your students are smart people. They can understand documents quickly without a bunch of extra fluff, so remove all the unnecessary stuff.
  • Remember to only use pictures that you either created yourself (own the copyright) or that you found through creative commons or public domain websites. Don’t use ugly clipart or images that you don’t have permission to use. A great place to find free icons? Flaticon.com.
  • try drawing it out on sketch paper first. While this will seem like an annoying task for most people, trust me when I say that it will save you a lot of time in the long run
  • ...5 more annotations...
  • If there is anything on your syllabus that can be quantified (like percentages for grades or assignments), consider making bar graphs or pie charts to visually represent it. This is helpful, too, so students can visually understand, very quickly, how much weight is given to each project.
  • Once you’ve determined the sections, it’s easier to think about what relates to what and how you might organize your syllabus in a way that makes sense for your students.
  • Remember to reduce as much text as possible and supplement what you write with an image. Consider using the images of your required textbooks, for example, and use icons and graphics that relate to each section.
  • Adobe InDesign
  • Don’t get so caught up in designing a cool infographic about your course that you forget to include information about accessibility, Title IX, academic dishonesty, and other related information. I might recommend not going too fancy on the institution-wide policies. You might still keep that in paragraph form, just so that there is no way to misinterpret what your institution wants you to say.
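
The suggestion in the annotations above to turn grade percentages into a bar graph or pie chart assumes a design tool such as InDesign or Canva, but the same chart can also be generated with a short script if that is more comfortable than a layout program. Below is a minimal matplotlib sketch; the categories and weights are invented placeholders to swap for your own syllabus.

    # Minimal pie chart of syllabus grade weights (hypothetical categories and values).
    import matplotlib.pyplot as plt

    weights = {
        "Participation": 15,
        "Short essays": 30,
        "Group project": 25,
        "Final exam": 30,
    }
    plt.pie(list(weights.values()), labels=list(weights.keys()),
            autopct="%1.0f%%", startangle=90)
    plt.title("Grade breakdown")
    plt.axis("equal")  # keep the pie circular
    plt.savefig("grade_breakdown.png", dpi=150, bbox_inches="tight")

The saved image can then be dropped into whatever layout tool you use for the rest of the infographic.
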
Ed Webb

How much 'work' should my online course be for me and my students? - Dave's Educational... - 0 views

  • My recommendation for people planning their courses is to stop thinking about ‘contact hours’. A contact hour is a constraint that is applied to the learning process because of the organizational need to have people share a space in a building. Also called a credit hour (particularly at American universities), this has meant, from a workload perspective, that for every in-class hour a student is meant to do at least 2 (in some cases 3) hours of study outside of class. Even Cliff Notes agrees with me. So… for a full load, that’s 30 to 45 Total Work Hours for students per course that you are designing.
  • Simple breakdown (not quite 90, yes I know) [a quick arithmetic check of this breakdown follows these annotations]:
    Watch 3 hours of video* – 5 hours
    Read stuff – 20 hours
    Listen to me talk – 15 hours
    Talk with other students in a group – 15 hours
    Write reflections about group chat – 7.5 hours
    Respond to other people’s reflections – 7.5 hours
    Work on a term paper – 10 hours
    Do weekly quiz – 3 hours
    Write take home mid-term – 3 hours
    Write take home final – 3 hours
  • A thousand variations of this might be imagined
  • ...3 more annotations...
  • a possible structure recommended by one of the faculty we were talking to was – read/watch, quiz, lecture, student group discussion, reflection. The reasoning here is that if you give learners (particularly new learners) a reading without some form of accountability (a quiz) they are much less likely to do it. I know that for me, when I’ve done the readings, I’m far more likely to attend class. Putting the student group discussion after the lecture gives students who can’t attend a synchronous session a chance to review the recording
  • The standardization police have been telling us for years that each student must learn the same things. Poppycock. Scaffolding doesn’t mean taking away student choice. There are numerous approaches to allowing a little or a lot of choice into your classes (learner contracts come to mind). Just remember, most students don’t want choice – at first. 12-16 years of training has told them that you the faculty member have something you want them to do and they need to find the trick of it. It will take a while until those students actually believe you want their actual opinion.
  • You can have a goal like – get them acculturated to the field – and work through your activities to get there. It’s harder, they will need your patience, but once they get their minds around it, it makes things much more interesting.
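
Since the workload breakdown quoted above is simple arithmetic, a draft plan is easy to sanity-check in a few lines of Python. The sketch below totals the hours listed in the annotation (they come to 89, i.e. “not quite 90”) and compares them with a target figure; the 90-hour target is an assumption standing in for whatever your own institution’s credit-hour policy implies.

    # Sanity-check a planned course workload against a target number of student hours.
    # Activity hours mirror the breakdown quoted above; the 90-hour target is an
    # assumption to replace with your own institution's figure.
    workload_hours = {
        "Watch 3 hours of video": 5,
        "Read stuff": 20,
        "Listen to me talk": 15,
        "Talk with other students in a group": 15,
        "Write reflections about group chat": 7.5,
        "Respond to other people's reflections": 7.5,
        "Work on a term paper": 10,
        "Do weekly quiz": 3,
        "Write take-home mid-term": 3,
        "Write take-home final": 3,
    }
    TARGET_HOURS = 90  # assumed target

    total = sum(workload_hours.values())
    print(f"Planned: {total} h; target: {TARGET_HOURS} h; slack: {TARGET_HOURS - total} h")

Adjusting the dictionary as activities change keeps the plan honest about what is actually being asked of students.
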
Ed Webb

CRITICAL AI: Adapting College Writing for the Age of Large Language Models such as Chat... - 1 views

  • In the long run, we believe, teachers need to help students develop a critical awareness of generative machine models: how they work; why their content is often biased, false, or simplistic; and what their social, intellectual, and environmental implications might be. But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction.
  • Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking
  • Students are more likely to misuse text generators if they trust them too much. The term “Artificial Intelligence” (“AI”) has become a marketing tool for hyping products. For all their impressiveness, these systems are not intelligent in the conventional sense of that term. They are elaborate statistical models that rely on mass troves of data—which has often been scraped indiscriminately from the web and used without knowledge or consent.
  • ...9 more annotations...
  • LLMs usually cannot do a good job of explaining how a particular passage from a longer text illuminates the whole of that longer text. Moreover, ChatGPT’s outputs on comparison and contrast are often superficial. Typically the system breaks down a task of logical comparison into bite-size pieces, conveys shallow information about each of those pieces, and then formulaically “compares” and “contrasts” in a noticeably superficial or repetitive way. 
  • In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities
  • ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.  
  • No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. 
  • LLMs often mimic the harmful prejudices, misconceptions, and biases found in data scraped from the internet
  • Show students examples of inaccuracy, bias, logical, and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Given that social media and the internet are full of bogus accounts using synthetic text, alerting students to the intrinsic problems of such writing could be beneficial. (See the “ChatGPT/LLM Errors Tracker,” maintained by Gary Marcus and Ernest Davis.)
  • Since ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content, it seems like a poor foundation for building students’ skills and may circumvent their independent thinking.
  • Good journalism on language models is surprisingly hard to find since the technology is so new and the hype is ubiquitous. Here are a few reliable short pieces.
    “ChatGPT Advice Academics Can Use Now” edited by Susan Dagostino, Inside Higher Ed, January 12, 2023
    “University students recruit AI to write essays for them. Now what?” by Katyanna Quach, The Register, December 27, 2022
    “How to spot AI-generated text” by Melissa Heikkilä, MIT Technology Review, December 19, 2022
    The Road to AI We Can Trust, Substack by Gary Marcus, a cognitive scientist and AI researcher who writes frequently and lucidly about the topic. See also Gary Marcus and Ernest Davis, “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About” (2020).
  • “On the Dangers of Stochastic Parrots” by Emily M. Bender, Timnit Gebru, et al., FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, Association for Computing Machinery, doi: 10.1145/3442188. A blog post derived from a Critical AI @ Rutgers workshop on the essay summarizes its key arguments, reprises the discussion, and includes links to video-recorded presentations by digital humanist Katherine Bode (ANU) and computer scientist and NLP researcher Matthew Stone (Rutgers).