
Home/ Instructional & Media Services at Dickinson College/ Group items tagged work


Ed Webb

Offering Seminar Courses Remotely | Educatus - 0 views

  • In an online environment, seminars will work best if they occur asynchronously in the discussion boards in an LMS
  • The 4 key elements for a seminar that need to be replicated during remote instruction include: a prompt or text(s) that the student considers independently in advance; guiding questions that require analysis, synthesis, and/or evaluation of ideas; the opportunity to share personal thinking with a group; and ideas being developed, rejected, and refined over time based on everyone’s contributions.
  • Students need specific guidance and support for how to develop, reject, and refine ideas appropriately in your course. If you want students to share well, consider requiring an initial post where you and students introduce yourselves and share a picture. Describe your expectations for norms in how everyone will behave online. Provide a lot of initial feedback about the quality of posting; consider giving samples of good and bad posts, and remember to clarify your marking criteria. Focus your expectations on the quality of comments, and set maximums for the amount you expect, to reduce your marking load and keep the discussions high quality. Someone will need to moderate the discussion: that includes posting the initial threads, reading what everyone posts each week, and commenting to keep the discussion flowing. Likely the same person (you or a TA) will also be grading and providing private feedback to each student. Consider making the moderation of a discussion an assignment in your course: you can moderate the first few weeks to demonstrate what you want, and groups of students can moderate other weeks. It can increase engagement if done well, and definitely decreases your workload.
  • Teach everyone to mute when not speaking, and turn off their cameras if they have bandwidth issues. Use the chat so people can agree and add ideas as other people are speaking, and teach people to raise their hands or add emoticons in the participants window to help you know who wants to speak next
Ed Webb

The Myth Of AI | Edge.org - 0 views

  • The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
  • Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
  • there's no way to tell where the border is between measurement and manipulation in these systems
  • It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
  • What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
  • In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
  • If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
  • To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things.
Ed Webb

CRITICAL AI: Adapting College Writing for the Age of Large Language Models such as Chat... - 1 views

  • In the long run, we believe, teachers need to help students develop a critical awareness of generative machine models: how they work; why their content is often biased, false, or simplistic; and what their social, intellectual, and environmental implications might be. But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction.
  • Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking
  • No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. 
  • LLMs usually cannot do a good job of explaining how a particular passage from a longer text illuminates the whole of that longer text. Moreover, ChatGPT’s outputs on comparison and contrast are often superficial. Typically the system breaks down a task of logical comparison into bite-size pieces, conveys shallow information about each of those pieces, and then formulaically “compares” and “contrasts” in a noticeably superficial or repetitive way. 
  • In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities
  • ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.  
  • Students are more likely to misuse text generators if they trust them too much. The term “Artificial Intelligence” (“AI”) has become a marketing tool for hyping products. For all their impressiveness, these systems are not intelligent in the conventional sense of that term. They are elaborate statistical models that rely on mass troves of data—which has often been scraped indiscriminately from the web and used without knowledge or consent.
  • LLMs often mimic the harmful prejudices, misconceptions, and biases found in data scraped from the internet
  • Show students examples of inaccuracies, biases, and logical and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Given that social media and the internet are full of bogus accounts using synthetic text, alerting students to the intrinsic problems of such writing could be beneficial. (See the “ChatGPT/LLM Errors Tracker,” maintained by Gary Marcus and Ernest Davis.)
  • Since ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content, it seems like a poor foundation for building students’ skills and may circumvent their independent thinking.
  • Good journalism on language models is surprisingly hard to find since the technology is so new and the hype is ubiquitous. Here are a few reliable short pieces: “ChatGPT Advice Academics Can Use Now,” edited by Susan Dagostino, Inside Higher Ed, January 12, 2023; “University students recruit AI to write essays for them. Now what?” by Katyanna Quach, The Register, December 27, 2022; “How to spot AI-generated text” by Melissa Heikkilä, MIT Technology Review, December 19, 2022; and The Road to AI We Can Trust, a Substack by Gary Marcus, a cognitive scientist and AI researcher who writes frequently and lucidly about the topic. See also Gary Marcus and Ernest Davis, “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About” (2020).
  • “On the Dangers of Stochastic Parrots” by Emily M. Bender, Timnit Gebru, et al., FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021. Association for Computing Machinery, doi: 10.1145/3442188. A blog post from a Critical AI @ Rutgers workshop on the essay summarizes its key arguments, reprises the discussion, and includes links to video-recorded presentations by digital humanist Katherine Bode (ANU) and computer scientist and NLP researcher Matthew Stone (Rutgers).
Ed Webb

'There is no standard': investigation finds AI algorithms objectify women's bodies | Ar... - 0 views

  • AI tools tag photos of women in everyday situations as sexually suggestive, and rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men.
  • “You cannot have one single uncontested definition of raciness.”
  • “Objectification of women seems deeply embedded in the system.”
  • Shadowbanning has been documented for years, but the Guardian journalists may have found a missing link to understand the phenomenon: biased AI algorithms. Social media platforms seem to leverage these algorithms to rate images and limit the reach of content that they consider too racy. The problem seems to be that these AI algorithms have built-in gender bias, rating images of women as racier than comparable images of men.
  • “You are looking at decontextualized information where a bra is being seen as inherently racy rather than a thing that many women wear every day as a basic item of clothing,”
  • suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.
  • the images used to train these algorithms were probably labeled by straight men, who may associate men working out with fitness but may consider an image of a woman working out as racy. It’s also possible that these ratings seem gender-biased in the US and in Europe because the labelers may have come from a place with a more conservative culture
  • “There’s no standard of quality here,”
  • “I will censor as artistically as possible any nipples. I find this so offensive to art, but also to women,” she said. “I almost feel like I’m part of perpetuating that ridiculous cycle that I don’t want to have any part of.”
  • many people, including chronically ill and disabled folks, rely on making money through social media and shadowbanning harms their business
Ed Webb

ChatGPT Is a Blurry JPEG of the Web | The New Yorker - 0 views

  • Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
  • a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large-language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
  • ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
  • large-language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large-language models
  • Even though large-language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory
  • The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
  • Even if it is possible to restrict large-language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large-language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information.
  • If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
  • starting with a blurry copy of unoriginal work isn’t a good way to create original work
  • Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
  • Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large-language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
  • What use is there in having something that rephrases the Web?
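Chiang’s lossless-versus-lossy distinction can be made concrete with a toy sketch (mine, not the article’s): a lossless codec round-trips every bit, while a lossy scheme stores an approximation that comes back plausible but never exact, which is the sense in which ChatGPT paraphrases the Web rather than quoting it.

```python
import zlib

data = list(range(0, 1000, 7))  # the "original" to be stored

# Lossless: zlib reproduces the input bit-for-bit.
blob = zlib.compress(repr(data).encode())
assert zlib.decompress(blob) == repr(data).encode()

# "Lossy": quantize each value to the nearest 10 before storing.
# Reconstruction yields plausible nearby values, never the exact
# original sequence -- a blurry JPEG of the data.
quantized = [round(x, -1) for x in data]
max_err = max(abs(a - b) for a, b in zip(data, quantized))
print(max_err)  # small, but never zero
```

The lossy version is smaller precisely because it discards detail; the errors it introduces are the textual equivalent of compression artifacts, which is the article’s account of “hallucinations.”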
Ed Webb

The trouble with Khan Academy - Casting Out Nines - The Chronicle of Higher Education - 1 views

  • When we say that someone has “learned” a subject, we typically mean that they have shown evidence of mastery not only of basic cognitive processes like factual recall and working mechanical exercises but also higher-level tasks like applying concepts to new problems and judging between two equivalent concepts. A student learning calculus, for instance, needs to demonstrate that s/he can do things like take derivatives of polynomials and use the Chain Rule. But if this is all they can demonstrate, then it’s stretching it to say that the student has “learned calculus”, because calculus is a lot more than just executing mechanical processes correctly and quickly.
  • Even if the student can solve optimization or related rates problems just like the ones in the book and in the lecture — but doesn’t know how to start if the optimization or related rates problem does not match their template — then the student hasn’t really learned calculus. At that point, those “applied” problems are just more mechanical processes. We may say the student has learned about calculus, but when it comes to the uses of the subject that really matter — applying calculus concepts to ambiguous and/or complex problems, choosing the best of equivalent methods or results, creating models to solve novel problems — this student’s calculus knowledge is not of much use.
  • Khan Academy is great for learning about lots of different subjects. But it’s not really adequate for learning those subjects on a level that really makes a difference in the world.
  • mechanical skill is a proper subset of the set of all tasks a student needs to master in order to really learn a subject. And a lecture, when well done, can teach novice learners how to think like expert learners; but in my experience with Khan Academy videos, this isn’t what happens — the videos are demos on how to finish mathematics exercises, with little modeling of the higher-level thinking skills that are so important for using mathematics in the real world.
  • The Khan Academy is a great new resource, and it's a sign of greater things to come... but it's much more akin to a book than a teacher.
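As a concrete instance of the “mechanical process” the post describes, applying the Chain Rule to a composite function is exactly the kind of step a video can demonstrate without teaching when or why to use it:

```latex
\frac{d}{dx}\,\sin\!\left(x^{2}\right)
  = \cos\!\left(x^{2}\right)\cdot\frac{d}{dx}\,x^{2}
  = 2x\cos\!\left(x^{2}\right)
```

Executing this correctly is necessary for learning calculus, but on the post’s argument it is far from sufficient: recognizing that a novel, ambiguous problem calls for this rule is the higher-level skill a demo rarely models.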
Ed Webb

Kindle DX called "poor excuse of an academic tool" in Princeton pilot program - 1 views

  • Most of the criticisms center around the Kindle's weak annotation features, which make things like highlighting and margin notes almost impossible to use, but even a simple thing like the lack of true page numbers has caused problems, since allowing students to cite the Kindle's location numbers in their papers is "meaningless for anyone working from analog books."
Ed Webb

Mind - Research Upends Traditional Thinking on Study Habits - NYTimes.com - 1 views

  • instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing. “We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”
  • The brain makes subtle associations between what it is studying and the background sensations it has at the time, the authors say, regardless of whether those perceptions are conscious. It colors the terms of the Versailles Treaty with the wasted fluorescent glow of the dorm study room, say; or the elements of the Marshall Plan with the jade-curtain shade of the willow tree in the backyard. Forcing the brain to make multiple associations with the same material may, in effect, give that information more neural scaffolding.
  • Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn — it holds its new load for a while, then most everything falls out. “With many students, it’s not like they can’t remember the material” when they move to a more advanced class, said Henry L. Roediger III, a psychologist at Washington University in St. Louis. “It’s like they’ve never seen it before.”
  • An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.
  • “The idea is that forgetting is the friend of learning,” said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”
  • cognitive scientists see testing itself — or practice tests and quizzes — as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.
  • “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.
  • “Testing has such bad connotation; people think of standardized testing or teaching to the test,” Dr. Roediger said. “Maybe we need to call it something else, but this is one of the most powerful learning tools we have.”
  • The harder it is to remember something, the harder it is to later forget. This effect, which researchers call “desirable difficulty,”
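The spacing pattern the article describes (“an hour of study tonight, an hour on the weekend, another session a week from now”) can be sketched as an expanding-interval schedule. The starting gap and doubling rule below are illustrative assumptions of mine, not parameters from the research:

```python
from datetime import date, timedelta

def spaced_schedule(start, first_gap_days=2, sessions=5):
    """Return review dates whose gaps roughly double each time:
    a toy version of the spacing effect, not a validated algorithm."""
    dates, gap = [start], first_gap_days
    for _ in range(sessions - 1):
        dates.append(dates[-1] + timedelta(days=gap))
        gap *= 2  # widen the interval as the material consolidates
    return dates

schedule = spaced_schedule(date(2024, 1, 1))
print([d.isoformat() for d in schedule])  # gaps of 2, 4, 8, 16 days
```

The widening gaps deliberately let some forgetting happen before each review, which is the “forgetting is the friend of learning” idea quoted above.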
Ed Webb

"1945-1998" by Isao Hashimoto: CTBTO Preparatory Commission - 0 views

    • Ed Webb
       
      The retro computer game aesthetic really works for this atompunk artwork
Ed Webb

Professors Find Ways to Keep Heads Above 'Exaflood' of Data - Wired Campus - The Chroni... - 0 views

  • Google, a major source of information overload, can also help manage it, according to Google's chief economist. Hal Varian, who was a professor at the University of California at Berkeley before going to work for the search-engine giant, showed off an analytic tool called Google Insights for Search.
  • accurately tagging data and archiving it
Ed Webb

Bad News : CJR - 0 views

  • Students in Howard Rheingold’s journalism class at Stanford recently teamed up with NewsTrust, a nonprofit Web site that enables people to review and rate news articles for their level of quality, in a search for lousy journalism.
  • the News Hunt is a way of getting young journalists to critically examine the work of professionals. For Rheingold, an influential writer and thinker about the online world and the man credited with coining the phrase “virtual community,” it’s all about teaching them “crap detection.”
  • last year Rheingold wrote an important essay about the topic for the San Francisco Chronicle’s Web site
  • What’s at stake is no less than the quality of the information available in our society, and our collective ability to evaluate its accuracy and value. “Are we going to have a world filled with people who pass along urban legends and hoaxes?” Rheingold said, “or are people going to educate themselves about these tools [for crap detection] so we will have collective intelligence instead of misinformation, spam, urban legends, and hoaxes?”
  • I previously called fact-checking “one of the great American pastimes of the Internet age.” But, as Rheingold noted, the opposite is also true: the manufacture and promotion of bullshit is endemic. One couldn’t exist without the other. That makes Rheingold’s essay, his recent experiment with NewsTrust, and his wiki of online critical-thinking tools” essential reading for journalists. (He’s also writing a book about this topic.)
  • I believe if we want kids to succeed online, the biggest danger is not porn or predators—the biggest danger is them not being able to distinguish truth from carefully manufactured misinformation or bullshit
  •  
    As relevant to general education as to journalism training
Ed Webb

Ian Bogost - Beyond Blogs - 0 views

  • I wish these were the sorts of questions so-called digital humanists considered, rather than figuring out how to pay homage to the latest received web app or to build new tools to do the same old work. But as I recently argued, a real digital humanism isn't one that's digital, but one that's concerned with the present and the future. A part of that concern involves considering the way we want to interact with one another and the world as scholars, and to intervene in that process by making it happen. Such a question is far more interesting and productive than debating the relative merits of blogs or online journals, acts that amount to celebrations of how little has really changed.
  • Perhaps a blog isn't a great tool for (philosophical; videogame) discussion or even for knowledge retention, etc... but a whole *blogosphere*...? If individuals (and individual memory in particular) are included within the scope of "the blogosphere" then surely someone remembers the "important" posts, like you seemed to be asking for...?
Ed Webb

Wired Campus: U. of Richmond Creates a Wikipedia for Undergraduate Scholars -... - 0 views

  • The current model for teaching and learning is based on a relative scarcity of research and writing, not an excess. With that in mind, Mr. Torget and several others have created a Web site called History Engine to help students around the country work together on a shared tool to make sense of history documents online. Students generate brief essays on American history, and the History Engine aggregates the essays and makes them navigable by tags. Call it Wikipedia for students. Except better. First of all, its content is moderated by professors. Second, while Wikipedia still presents information two-dimensionally, History Engine employs mapping technology to organize scholarship by time period, geographic location, and themes.
  • “The challenge of a digital age is that that writing assignment hasn’t changed since the age of the typewriter,” Mr. Torget said. “The digital medium requires us to rethink how we make those assignments.”
Ed Webb

Literature Review: GIS for Conflict Analysis « iRevolution - 0 views

  • The study objective is to represent geographic and territorial concepts with Geographic Information Systems (GIS). The paper describes the challenges and potential opportunities for creating an integrated GIS model of security.
  • The literature review is a good introduction for anyone interested in the application of GIS to the spatial analysis of conflict. As a colleague mentioned, however, the authors of the study do not cite more recent work in this area, which is rather surprising and unfortunate. Perhaps this is due to the fact that the academic peer-review process can seemingly take forever.
Ed Webb

Lafayette College Piloting WPMu at bavatuesdays - 0 views

  • As is often the case, it’s all about an investment in some good people who get excited about the possibilities of teaching and learning with technology. And if the header image for the main blog is any indicator, the instructional technology folks at Lafayette seem to be having an extreme blast. Fine work Courtney, Jason, and Ken! So why is your school afraid to jump? What do you have to lose save the LMS chains that bind you to the 20th century!
  •  
    Another example of how to get away from the closed system.
Ed Webb

Official Google Blog: More books in more places: public domain EPUB downloads on Google... - 0 views

  • Starting today, you'll be able to download these and over one million public domain books from Google Books in an additional format. We're excited to now offer downloads in EPUB format, a free and open industry standard for electronic books. It's supported by a wide variety of applications, so once you download a book, you'll be able to read it on any device or through any reading application that supports the format. That means that people will be able to access public domain works that we've digitized from libraries around the world in more ways, including some that haven't even been built or imagined yet.
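The openness Google highlights is easy to see in practice: an .epub file is a ZIP container whose first entry declares its media type, with XML and XHTML inside. A deliberately incomplete sketch (a real book would also need an OPF package document and content files):

```python
import zipfile

# An EPUB is a ZIP archive whose first entry must be an
# uncompressed "mimetype" file; META-INF/container.xml then
# points readers at the package document.
with zipfile.ZipFile("toy.epub", "w") as z:
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml",
               '<?xml version="1.0"?>\n'
               '<container version="1.0" '
               'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
               '  <rootfiles>\n'
               '    <rootfile full-path="OEBPS/content.opf" '
               'media-type="application/oebps-package+xml"/>\n'
               '  </rootfiles>\n'
               '</container>\n')

with zipfile.ZipFile("toy.epub") as z:
    print(z.read("mimetype").decode())  # application/epub+zip
```

Because the container is plain ZIP plus XML, any tool that can read those formats can get at the book, which is what makes the format usable by "applications that haven't even been built or imagined yet."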
Ed Webb

Social Media is Killing the LMS Star - A Bootleg of Bryan Alexander's Lost Presentation... - 0 views

  • Note that this isn’t just a technological alternate history. It also describes a different set of social and cultural practices.
  • CMSes lumber along like radio, still playing into the air as they continue to gradually shift ever farther away on the margins. In comparison, Web 2.0 is like movies and TV combined, plus printed books and magazines. That’s where the sheer scale, creative ferment, and wide-ranging influence reside. This is the necessary background for discussing how to integrate learning and the digital world.
  • These virtual classes are like musical practice rooms, small chambers where one may try out the instrument in silent isolation. It is not connectivism but disconnectivism.
  • CMSes shift from being merely retrograde to being actively regressive if we consider the broader, subtler changes in the digital teaching landscape. Web 2.0 has rapidly grown an enormous amount of content through what Yochai Benkler calls “peer-based commons production.” One effect of this has been to grow a large area for informal learning, which students (and staff) access without our benign interference. Students (and staff) also contribute to this peering world; more on this later. For now, we can observe that as teachers we grapple with this mechanism of change through many means, but the CMS in its silo’d isolation is not a useful tool.
  • those curious about teaching with social media have easy access to a growing, accessible community of experienced staff by means of those very media. A meta-community of Web 2.0 academic practitioners is now too vast to catalogue. Academics in every discipline blog about their work. Wikis record their efforts and thoughts, as do podcasts. The reverse is true of the CMS, the very architecture of which forbids such peer-to-peer information sharing. For example, the Resource Center for Cyberculture Studies (RCCS) has for many years maintained a descriptive listing of courses about digital culture across the disciplines. During the 1990s that number grew with each semester. But after the explosive growth of CMSes that number dwindled. Not the number of classes taught, but the number of classes which could even be described. According to the RCCS’ founder, David Silver (University of San Francisco), this is due to the isolation of class content in CMS containers.
  • unless we consider the CMS environment to be a sort of corporate intranet simulation, the CMS set of community skills is unusual, rarely applicable to post-graduation examples. In other words, while a CMS might help with privacy concerns, it is at best a partial, not a sufficient, solution, and can even be inappropriate for students who are already online.
  • That experiential, teachable moment of selecting one’s copyright stance is eliminated by the CMS.
  • Another argument in favor of CMSes over Web 2.0 concerns the latter’s open nature. It is too open, goes the thought, constituting a “Wild West” experience of unfettered information flow and unpleasant forms of access. Campuses should run CMSes to create shielded environments, iPhone-style walled gardens that protect the learning process from the Lovecraftian chaos without.
  • social sifting, information literacy, using the wisdom of crowds, and others. Such strategies are widely discussed, easily accessed, and continually revised and honed.
  • at present, radio CMS is the Clear Channel of online learning.
  • For now, the CMS landscape is a multi-institutional dark Web, an invisible, unsearchable, un-mash-up-able archipelago of hidden learning content.
  • Can the practice of using a CMS prepare either teacher or student to think critically about this new shape for information literacy? Moreover, can we use the traditional CMS to share thoughts and practices about this topic?
  • The internet of things refers to a vastly more challenging concept, the association of digital information with the physical world. It covers such diverse instances as RFID chips attached to books or shipping pallets, connecting a product’s scanned UPC code to a Web-based database, assigning unique digital identifiers to physical locations, and the broader enterprise of augmented reality. It includes problems as varied as building search that covers both the World Wide Web and one’s mobile device, revising copyright to include digital content associated with private locations, and trying to salvage what’s left of privacy. How does this connect with our topic? Consider a recent article by Tim O’Reilly and John Battelle, where they argue that the internet of things is actually growing knowledge about itself. The combination of people, networks, and objects is building descriptions about objects, largely in folksonomic form. That is, people are tagging the world, and sharing those tags. It’s worth quoting a passage in full: “It’s also possible to give structure to what appears to be unstructured data by teaching an application how to recognize the connection between the two. For example, You R Here, an iPhone app, neatly combines these two approaches. You use your iPhone camera to take a photo of a map that contains details not found on generic mapping applications such as Google maps – say a trailhead map in a park, or another hiking map. Use the phone’s GPS to set your current location on the map. Walk a distance away, and set a second point. Now your iPhone can track your position on that custom map image as easily as it can on Google maps.” (http://www.web2summit.com/web2009/public/schedule/detail/10194) What world is better placed to connect academia productively with such projects, the open social Web or the CMS?
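  • The two-point registration described in that quoted passage can be sketched in a few lines of code. This is a minimal illustration, not the You R Here app’s actual method: it assumes the photographed map is oriented north-up and that latitude and longitude scale linearly across its small area, so two anchor points suffice to map any GPS fix onto the image.

```python
def register(anchor1, anchor2):
    """Build a GPS-to-pixel mapping from two anchor points.

    Each anchor is ((lat, lon), (px, py)): a GPS fix paired with its
    pixel position on the photographed map. Assumes a north-up map
    with linear lat/lon scaling (a simplification for small areas).
    """
    (lat1, lon1), (px1, py1) = anchor1
    (lat2, lon2), (px2, py2) = anchor2
    sx = (px2 - px1) / (lon2 - lon1)  # pixels per degree of longitude
    sy = (py2 - py1) / (lat2 - lat1)  # pixels per degree of latitude

    def to_pixel(lat, lon):
        # Offset from the first anchor, scaled into pixel space.
        return (px1 + (lon - lon1) * sx, py1 + (lat - lat1) * sy)

    return to_pixel

# Set the first point, walk away, set the second, then track.
# Coordinates here are made up for illustration.
to_pixel = register(((45.000, -122.000), (100, 800)),
                    ((45.010, -121.990), (600, 300)))
print(to_pixel(45.005, -121.995))  # midpoint, approximately (350.0, 550.0)
```

  The same idea generalizes: with more anchor points one could fit an affine transform by least squares, correcting for a rotated or skewed map photo.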
  • imagine the CMS function of every class much like class email, a necessary feature, but not by any means the broadest technological element. Similarly the e-reserves function is of immense practical value. There may be no better way to share copyrighted academic materials with a class, at this point. These logistical functions could well play on.