
Instructional & Media Services at Dickinson College: Group items tagged copyright


Ed Webb

Social Media is Killing the LMS Star - A Bootleg of Bryan Alexander's Lost Presentation...

  • Note that this isn’t just a technological alternate history. It also describes a different set of social and cultural practices.
  • CMSes lumber along like radio, still playing into the air as they drift ever farther toward the margins. In comparison, Web 2.0 is like movies and TV combined, plus printed books and magazines. That’s where the sheer scale, creative ferment, and wide-ranging influence reside. This is the necessary background for discussing how to integrate learning and the digital world.
  • These virtual classes are like musical practice rooms, small chambers where one may try out the instrument in silent isolation. It is not connectivism but disconnectivism.
  • CMSes shift from being merely retrograde to being actively regressive if we consider the broader, subtler changes in the digital teaching landscape. Web 2.0 has rapidly grown an enormous amount of content through what Yochai Benkler calls “commons-based peer production.” One effect of this has been to grow a large area for informal learning, which students (and staff) access without our benign interference. Students (and staff) also contribute to this peering world; more on this later. For now, we can observe that as teachers we grapple with this mechanism of change through many means, but the CMS in its silo’d isolation is not a useful tool.
  • those curious about teaching with social media have easy access to a growing, accessible community of experienced staff by means of those very media. A meta-community of Web 2.0 academic practitioners is now too vast to catalogue. Academics in every discipline blog about their work. Wikis record their efforts and thoughts, as do podcasts. The reverse is true of the CMS, the very architecture of which forbids such peer-to-peer information sharing. For example, the Resource Center for Cyberculture Studies (RCCS) has for many years maintained a descriptive listing of courses about digital culture across the disciplines. During the 1990s that number grew with each semester. But after the explosive growth of CMSes that number dwindled. Not the number of classes taught, but the number of classes which could even be described. According to the RCCS’ founder, David Silver (University of San Francisco), this is due to the isolation of class content in CMS containers.
  • unless we consider the CMS environment to be a sort of corporate intranet simulation, the CMS set of community skills is unusual, rarely applicable in post-graduation settings. In other words, while a CMS might help with privacy concerns, it is at best a partial, not a sufficient, solution, and can even be inappropriate for students who are already active online.
  • That experiential, teachable moment of selecting one’s copyright stance is eliminated by the CMS.
  • Another argument in favor of CMSes over Web 2.0 concerns the latter’s open nature. It is too open, goes the thought, constituting a “Wild West” experience of unfettered information flow and unpleasant forms of access. Campuses should run CMSes to create shielded environments, iPhone-style walled gardens that protect the learning process from the Lovecraftian chaos without.
  • social sifting, information literacy, using the wisdom of crowds, and others. Such strategies are widely discussed, easily accessed, and continually revised and honed.
  • at present, radio CMS is the Clear Channel of online learning.
  • For now, the CMS landscape is a multi-institutional dark Web, an invisible, unsearchable, un-mash-up-able archipelago of hidden learning content.
  • Can the practice of using a CMS prepare either teacher or student to think critically about this new shape for information literacy? Moreover, can we use the traditional CMS to share thoughts and practices about this topic?
  • The internet of things refers to a vastly more challenging concept, the association of digital information with the physical world. It covers such diverse instances as RFID chips attached to books or shipping pallets, connecting a product’s scanned UPC code to a Web-based database, assigning unique digital identifiers to physical locations, and the broader enterprise of augmented reality. It includes problems as varied as building search that covers both the World Wide Web and one’s mobile device, revising copyright to include digital content associated with private locations, and trying to salvage what’s left of privacy. How does this connect with our topic? Consider a recent article by Tim O’Reilly and John Battelle, where they argue that the internet of things is actually growing knowledge about itself. The combination of people, networks, and objects is building descriptions about objects, largely in folksonomic form. That is, people are tagging the world, and sharing those tags. It’s worth quoting a passage in full: “It’s also possible to give structure to what appears to be unstructured data by teaching an application how to recognize the connection between the two. For example, You R Here, an iPhone app, neatly combines these two approaches. You use your iPhone camera to take a photo of a map that contains details not found on generic mapping applications such as Google maps – say a trailhead map in a park, or another hiking map. Use the phone’s GPS to set your current location on the map. Walk a distance away, and set a second point. Now your iPhone can track your position on that custom map image as easily as it can on Google maps.” (http://www.web2summit.com/web2009/public/schedule/detail/10194) What world is better placed to connect academia productively with such projects, the open social Web or the CMS? (A minimal sketch of that two-point map calibration follows these notes.)
  • imagine the CMS function of every class much like class email, a necessary feature, but not by any means the broadest technological element. Similarly the e-reserves function is of immense practical value. There may be no better way to share copyrighted academic materials with a class, at this point. These logistical functions could well play on.
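
The “You R Here” calibration described in the internet-of-things note above is, at bottom, two-point linear interpolation between GPS coordinates and image pixels. Here is a minimal Python sketch of the idea; it assumes the photographed map is north-up and drawn to scale, and every function name and coordinate in it is hypothetical, not taken from the app itself:

```python
def calibrate(gps1, px1, gps2, px2):
    """Build a GPS (lat, lon) -> image pixel (x, y) mapper from two fixes.

    Assumes the map photo is north-up and to scale, so longitude maps
    linearly onto the x axis and latitude onto the y axis.
    """
    (lat1, lon1), (x1, y1) = gps1, px1
    (lat2, lon2), (x2, y2) = gps2, px2
    sx = (x2 - x1) / (lon2 - lon1)  # pixels per degree of longitude
    sy = (y2 - y1) / (lat2 - lat1)  # pixels per degree of latitude

    def to_pixel(lat, lon):
        return (x1 + (lon - lon1) * sx, y1 + (lat - lat1) * sy)

    return to_pixel

# Hypothetical trailhead map: set one point, walk a distance, set a second.
to_pixel = calibrate((44.05, -71.70), (120, 410), (44.06, -71.68), (480, 95))
print(to_pixel(44.055, -71.69))  # a live GPS fix placed on the photo -> (300.0, 252.5)
```

With those two reference points fixed, every subsequent GPS reading can be drawn onto the custom map image, which is all the position tracking the annotation describes.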
Ed Webb

Google Researchers' Attack Prompts ChatGPT to Reveal Its Training Data

  • researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet. (A toy version of this verbatim-match check is sketched after these notes.)
  • ChatGPT’s “alignment techniques do not eliminate memorization,” meaning that it sometimes spits out training data verbatim. This included PII, entire poems, “cryptographically-random identifiers” like Bitcoin addresses, passages from copyrighted scientific research papers, website addresses, and much more.
  • The researchers wrote that they spent $200 to create “over 10,000 unique examples” of training data, which they say is a total of “several megabytes” of training data. The researchers suggest that using this attack, with enough money, they could have extracted gigabytes of training data. The entirety of OpenAI’s training data is unknown, but GPT-3 was trained on anywhere from many hundreds of GB to a few dozen terabytes of text data.
  • the world’s most important and most valuable AI company has been built on the backs of the collective work of humanity, often without permission, and without compensation to those who created it
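
The memorization finding above rests on a verbatim-match test: does a long run of generated words appear word-for-word in known text? A minimal Python sketch of that check, assuming the model output and a reference corpus are available as plain strings; the 8-word window and the function name are illustrative assumptions, not the paper's actual parameters:

```python
def verbatim_spans(output: str, corpus: str, k: int = 8):
    """Return k-word spans of model output that occur verbatim in a corpus.

    A crude stand-in for the memorization check: index every k-word
    window of the corpus, then scan the output for exact matches.
    """
    corpus_words = corpus.split()
    windows = {" ".join(corpus_words[i:i + k])
               for i in range(len(corpus_words) - k + 1)}
    out_words = output.split()
    return [" ".join(out_words[i:i + k])
            for i in range(len(out_words) - k + 1)
            if " ".join(out_words[i:i + k]) in windows]
```

Any hit means the model emitted at least k consecutive words that exist verbatim in the reference text; the researchers matched candidate outputs against web-scale corpora rather than a single string, but the logic is the same.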
Ed Webb

Letting Us Rip: Our New Right to Fair Use of DVDs - ProfHacker - The Chronicle of Highe...

  • Motion pictures on DVDs that are lawfully made and acquired and that are protected by the Content Scrambling System [CSS] when circumvention is accomplished solely in order to accomplish the incorporation of short portions of motion pictures into new works for the purpose of criticism or comment, and where the person engaging in circumvention believes and has reasonable grounds for believing that circumvention is necessary to fulfill the purpose of the use in the following instances: (i) Educational uses by college and university professors and by college and university film and media studies students; (ii) Documentary filmmaking; (iii) Noncommercial videos. [Note: the term "motion picture" does not solely mean feature films—for the Library of Congress, it refers to "audiovisual works consisting of a series of related images which, when shown in succession, impart an impression of motion, together with accompanying sounds, if any." Hence, the term includes television, animation, and pretty much any moving image to be found on DVD.]
  • the longer explanation from the Library of Congress specifies that circumventing CSS on a DVD is justified only when non-circumventing methods, such as videotaping the screen while playing the DVD or using screen-capture tools on a computer, are unacceptable due to inadequate audio or visual quality. Nevertheless, this ruling greatly expands who can use ripping software to clip DVDs for academic and transformative use, including a range of derivative works like remix videos and documentaries.
  • Now, no matter your discipline, you (or your technological partners) can do what I've been doing for the past three years: assemble a personal (or departmental) library of clips to access for class lectures. Now we can expand the use of those clips to embed in conference presentations, public lectures, digital publications, companion websites or DVDs to include with print publications, or other innovative uses that had otherwise been stifled by legal restrictions. For me, having a hard drive full of video clips on hand enables a mode of improvisation not available with DVDs—if discussion shifts to a film or television show that I've ripped a clip from for another course, I can play it instantly in class, even without having planned ahead and brought the DVD. Think of the conference presentations you've seen where a presenter fumbles over cuing and swapping DVDs—with a little bit of planning, clips can be directly embedded into a slideshow to avoid awkwardly wasting time.
  • Fair Use isn't a NEW right under the exemptions, but a REAFFIRMED and RESTORED right
  • .wav, .mpeg, .mp3, .avi are all formats and codecs with owners.
Ed Webb

Why I won't buy an iPad (and think you shouldn't, either) - Boing Boing

  • If there was ever a medium that relied on kids swapping their purchases around to build an audience, it was comics. And the used market for comics! It was -- and is -- huge, and vital.
  • what does Marvel do to "enhance" its comics? They take away the right to give, sell or loan your comics. What an improvement. Way to take the joyous, marvellous sharing and bonding experience of comic reading and turn it into a passive, lonely undertaking that isolates, rather than unites.
  • a palpable contempt for the owner.
  • But with the iPad, it seems like Apple's model customer is that same stupid stereotype of a technophobic, timid, scatterbrained mother as appears in a billion renditions of "that's too complicated for my mom" (listen to the pundits extol the virtues of the iPad and time how long it takes for them to explain that here, finally, is something that isn't too complicated for their poor old mothers).
  • The model of interaction with the iPad is to be a "consumer," what William Gibson memorably described as "something the size of a baby hippo, the color of a week-old boiled potato, that lives by itself, in the dark, in a double-wide on the outskirts of Topeka. It's covered with eyes and it sweats constantly. The sweat runs into those eyes and makes them sting. It has no mouth... no genitals, and can only express its mute extremes of murderous rage and infantile desire by changing the channels on a universal remote."
  • Buying an iPad for your kids isn't a means of jump-starting the realization that the world is yours to take apart and reassemble; it's a way of telling your offspring that even changing the batteries is something you have to leave to the professionals.
  • Apple's customers can't take their "iContent" with them to competing devices, and Apple developers can't sell on their own terms.
  • I don't want my universe of apps constrained to the stuff that the Cupertino Politburo decides to allow for its platform. And as a copyright holder and creator, I don't want a single, Wal-Mart-like channel that controls access to my audience and dictates what is and is not acceptable material for me to create.
  • Rupert Murdoch can rattle his saber all he likes about taking his content out of Google, but I say do it, Rupert. We'll miss your fraction of a fraction of a fraction of a percent of the Web so little that we'll hardly notice it, and we'll have no trouble finding material to fill the void.
  • the walled gardens that best return shareholder value
  • The real issue isn't the capabilities of the piece of plastic you unwrap today, but the technical and social infrastructure that accompanies it.
Ed Webb

How to Turn Your Syllabus into an Infographic - The Visual Communication Guy

  • If you’re ever going to turn a syllabus into an infographic, you must, MUST reduce the amount of text you are using. There are, of course, important things you’ll want and must include, but you can’t think of this document as ten pages of paragraphs. Strip down to only the essential information, with a bit of added info where you think some flair or excitement is needed. Remember: your students are smart people. They can understand documents quickly without a bunch of extra fluff, so remove all the unnecessary stuff.
  • Once you’ve determined the sections, it’s easier to think about what relates to what and how you might organize your syllabus in a way that makes sense for your students.
  • try drawing it out on sketch paper first. While this will seem like an annoying task for most people, trust me when I say that it will save you a lot of time in the long run
  • If there is anything on your syllabus that can be quantified (like percentages for grades or assignments), consider making bar graphs or pie charts to visually represent it, as in the sketch after these notes. This is helpful, too, because students can see at a glance how much weight is given to each project.
  • Remember to only use pictures that you either created yourself (you own the copyright) or that you found through Creative Commons or public-domain websites. Don’t use ugly clipart or images that you don’t have permission to use. A great place to find free icons? Flaticon.com.
  • Remember to cut as much text as possible and to supplement what you write with images. Consider using images of your required textbooks, for example, and use icons and graphics that relate to each section.
  • Adobe InDesign
  • Don’t get so caught up in designing a cool infographic about your course that you forget to include information about accessibility, Title IX, academic dishonesty, and other related information. I might recommend not going too fancy on the institution-wide policies. You might still keep that in paragraph form, just so that there is no way to misinterpret what your institution wants you to say.
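
For the quantifiable pieces mentioned above, a chart takes only a few lines. A minimal matplotlib sketch; the categories and percentages are hypothetical stand-ins for whatever your own grade breakdown is:

```python
import matplotlib.pyplot as plt

# Hypothetical grade breakdown for a syllabus infographic.
weights = {"Essays": 40, "Final project": 25, "Midterm": 20, "Participation": 15}

fig, ax = plt.subplots(figsize=(4, 4))
ax.pie(list(weights.values()), labels=list(weights.keys()),
       autopct="%d%%", startangle=90)
ax.set_title("How your grade is weighted")
fig.savefig("grade_weights.png", dpi=200, bbox_inches="tight")
```

Because the weights sum to 100, the percentage labels match the syllabus numbers exactly; the exported PNG can then be dropped into the infographic alongside the icons and textbook images suggested above.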
Ed Webb

ChatGPT Is a Blurry JPEG of the Web | The New Yorker

  • Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
  • a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
  • ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
  • Given that large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models.
  • Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory
  • The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression. (A toy contrast of lossless and lossy compression is sketched after these notes.)
  • starting with a blurry copy of unoriginal work isn’t a good way to create original work
  • If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
  • Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information.
  • Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
  • Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
  • What use is there in having something that rephrases the Web?
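
The lossless-versus-lossy contrast running through these notes can be made concrete in a few lines. A minimal Python sketch, using the standard library's zlib for the lossless case and a deliberately crude word-dropping scheme as the "lossy" stand-in; the sample sentence is invented, and a real LLM paraphrase is of course far subtler than this toy:

```python
import zlib

text = ("Lossless compression gives back the exact sequence of bits; "
        "lossy compression gives back only an approximation of them.")

# Lossless: the original is recovered bit for bit.
packed = zlib.compress(text.encode())
assert zlib.decompress(packed).decode() == text

# "Lossy": discard two of every three words, then present the gist.
# No decompressor can recover the exact wording from this remainder,
# just as no prompt can recover an exact Web page from ChatGPT.
gist = " ".join(text.split()[::3])
print(gist)
```

The point of the analogy survives the toy: the lossless path can reproduce its source verbatim, while the lossy path can only ever return an approximation, which is exactly the property the essay attributes to ChatGPT.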
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”