
Home/ Instructional & Media Services at Dickinson College/ Group items tagged Education


Ed Webb

Lurching Toward Fall, Disaster on the Horizon | Just Visiting

  • the virus will be far more present in far more places when school starts in August than it was when most schools shut down in March
  • I am among the crowd who both believes that online learning can be done quite well, and that there is something irreplaceable about the experiences of face-to-face learning, when that learning is happening under reasonable conditions, that is. These are not reasonable conditions. Do not get me wrong. This is a loss. The experience of community is not the same at a distance or over the internet. It is not necessarily entirely absent, but it is not as present.
Ed Webb

Offering Seminar Courses Remotely | Educatus

  • In an online environment, seminars will work best if they occur asynchronously in the discussion boards in an LMS
  • The 4 key elements for a seminar that need to be replicated during remote instruction include:
    - A prompt or text(s) that the student considers independently in advance
    - Guiding questions that require analysis, synthesis, and/or evaluation of ideas
    - The opportunity to share personal thinking with a group
    - Ideas being developed, rejected, and refined over time based on everyone's contributions
  • Students need specific guidance and support for how to develop, reject, and refine ideas appropriately in your course. If you want students to share well:
    - Consider requiring an initial post where you and students introduce yourselves and share a picture.
    - Describe your expectations for norms in how everyone will behave online.
    - Provide a lot of initial feedback about the quality of posting. Consider giving samples of good and bad posts, and remember to clarify your marking criteria.
    - Focus your expectations on the quality of comments, and set maximums for the amount you expect, to reduce your marking load and keep the discussions high quality.
    - Someone will need to moderate the discussion: posting the initial threads, reading what everyone posts each week, and commenting to keep the discussion flowing. Likely, the same person (you or a TA) will also be grading and providing private feedback to each student.
    - Consider making the moderation of a discussion an assignment in your course. You can moderate the first few weeks to demonstrate what you want, and groups of students can moderate other weeks. It can increase engagement if done well, and definitely decreases your workload.
  • Teach everyone to mute when not speaking, and turn off their cameras if they have bandwidth issues. Use the chat so people can agree and add ideas as other people are speaking, and teach people to raise their hands or add emoticons in the participants window to help you know who wants to speak next
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still...

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
Ed Webb

ChatGPT Is a Blurry JPEG of the Web | The New Yorker

  • Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
  • a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
  • ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
  • Because large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models
  • Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory
  • The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
  • Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information.
  • If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
  • starting with a blurry copy of unoriginal work isn’t a good way to create original work
  • Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
  • Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
  • What use is there in having something that rephrases the Web?
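The article's lossy-compression analogy can be made concrete with a toy sketch. This is emphatically not how ChatGPT works; it is a hypothetical illustration in which "compression" keeps only every third word of a text and "decompression" fills the gaps by guessing from the surviving vocabulary. The reconstruction keeps the right overall shape while its discarded details come back fabricated, which is the article's point about hallucinations as compression artifacts.

```python
# Toy illustration of lossy text compression (NOT how ChatGPT works):
# keep every third word, then "reconstruct" by sampling plausible
# filler from the kept vocabulary. The output looks text-shaped, but
# the dropped specifics are invented on the way back out.
import random

random.seed(0)

text = ("the committee met on tuesday and approved the budget "
        "for the new library after a long debate").split()

KEEP_EVERY = 3
# "Compress": retain only positions 0, 3, 6, ... and their words.
kept = {i: w for i, w in enumerate(text) if i % KEEP_EVERY == 0}

# "Decompress": restore kept words exactly; guess everything else
# from the surviving vocabulary -- a crude stand-in for statistical
# interpolation.
vocab = list(kept.values())
reconstructed = [kept.get(i, random.choice(vocab)) for i in range(len(text))]

print(" ".join(reconstructed))
```

Two-thirds of the original is gone, yet the reconstruction is the right length and contains only familiar words, so at a glance it can pass for the source. Which day the committee met, and what it approved, are now fabrications.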