Instructional & Media Services at Dickinson College / Group items tagged schooling


Ed Webb

9 Ways Online Teaching Should be Different from Face-to-Face | Cult of Pedagogy

  • Resist the temptation to dive right into curriculum at the start of the school year. Things will go more smoothly if you devote the early weeks to building community so students feel connected. Social emotional skills can be woven in during this time. On top of that, students need practice with whatever digital tools you’ll be using. So focus your lessons on those things, intertwining the two when possible. 
  • Online instruction is made up largely of asynchronous instruction, which students can access at any time. This is ideal, because requiring attendance for synchronous instruction puts some students at an immediate disadvantage if they don’t have the same access to technology, reliable internet, or a flexible home schedule. 
  • you’re likely to offer “face-to-face” or synchronous opportunities at some point, and one way to make them happen more easily is to have students meet in small groups. While it’s nearly impossible to arrange for 30 students to attend a meeting at once, assigning four students to meet is much more manageable.
  • What works best, Kitchen says, is to keep direct instruction—things like brief video lectures and readings—in asynchronous form, using checks for understanding like embedded questions or exit slips.  You can then use synchronous meetings for more interactive, engaging work. “If we want students showing up, if we want them to know that this is worth their time,” Kitchen explains, “it really needs to be something active and engaging for them. Any time they can work with the material, categorize it, organize it, share further thoughts on it, have a discussion, all of those are great things to do in small groups.” 
  • The Jigsaw method, where students form expert groups on a particular chunk of content, then teach that content to other students
  • Discussion strategies adapted for virtual settings
  • Using best practices for cooperative learning
  • Visible Thinking routines
  • Gamestorming and other business-related protocols adapted for education, where students take on the role of customers/stakeholders
  • Online instruction is not conducive to covering large amounts of content, so you have to choose wisely, teaching the most important things at a slower pace.
  • What really holds leverage for the students? What has endurance? What knowledge is essential?
  • What knowledge and skills do students need to have before they move to the next grade level or the next class?
  • What practices can be emphasized that transfer across many content areas? Skills like analyzing, constructing arguments, building a strong knowledge base through texts, and speaking can all be taught through many different subjects.
  • What tools can serve multiple purposes? Teaching students to use something like Padlet gives them opportunities to use audio, drawing, writing, and video. Non-digital tools can also work: Students can use things they find around the house, like toilet paper rolls, to fulfill other assignments, and then submit their work with a photo.
  • Provide instructions in a consistent location and at a consistent time. This advice was already given for parents, but it’s worth repeating here through the lens of instructional design: Set up lessons so that students know where to find instructions every time.
  • Make instructions explicit. Read and re-read to make sure these are as clear as possible. Make dogfooding your lessons a regular practice to root out problem areas.
  • Offer multimodal instructions. If possible, provide both written and video instructions for assignments, so students can choose the format that works best for them. You might also offer a synchronous weekly or daily meeting; what’s great about doing these online is that even if you teach several sections of the same class per day, students are no longer restricted to class times and can attend whatever meeting works best for them.
  • put the emphasis on formative feedback as students work through assignments and tasks, rather than simply grading them at the end. 
  • In online learning, Kitchen says, “There are so many ways that students can cheat, so if we’re giving them just the traditional quiz or test, it’s really easy for them to be able to just look up that information.” A great solution to this problem is to have students create things.
  • For assessment, use a detailed rubric that highlights the learning goals the end product will demonstrate. A single-point rubric works well for this.
  • To help students discover tools to work with, this list of tools is organized by the type of product each one creates. Another great source of ideas is the Teacher’s Guide to Tech.
  • When developing the assignment, rather than focusing on the end product, start by getting clear on what you want students to DO with that product.
  • Clear and consistent communication
  • Creating explicit and consistent rituals and routines
  • Using research-based instructional strategies
  • Determining whether to use digital or non-digital tools for an assignment
  • A focus on authentic learning, where authentic products are created and students have voice and choice in assignments
Ed Webb

The Ed-Tech Imaginary

  • We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?
  • "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.
  • if Betsy DeVos is out there "reimagining," then we best be resisting
  • think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."
  • The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.
  • The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version
  • Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention.
  • we must also decolonize the ed-tech imaginary
  • Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.
  • This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century
  • Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner, even though I'd say that most practitioners today would say that they reject his theories; that cognitive science has supplanted behaviorism; and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner any more — which is odd considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.
  • so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest
Ed Webb

Google and Meta moved cautiously on AI. Then came OpenAI's ChatGPT. - The Washington Post

  • The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside
  • Tech giants have been skittish since public debacles like Microsoft’s Tay, which Microsoft took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right and tweet “Jews did 9/11.”
  • Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.
  • Silicon Valley’s sudden willingness to consider taking more reputational risk arrives as tech stocks are tumbling
  • A chatbot that pointed to one answer directly from Google could increase its liability if the response was found to be harmful or plagiarized.
  • AI has been through several hype cycles over the past decade, but the furor over DALL-E and ChatGPT has reached new heights.
  • Soon after OpenAI released ChatGPT, tech influencers on Twitter began to predict that generative AI would spell the demise of Google search. ChatGPT delivered simple answers in an accessible way and didn’t ask users to rifle through blue links. Besides, after a quarter of a century, Google’s search interface had grown bloated with ads and marketers trying to game the system.
  • Inside big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI isn’t as established as privacy or data security. Typically teams of AI researchers and engineers publish papers on their findings, incorporate their technology into the company’s existing infrastructure or develop new products, a process that can sometimes clash with other teams working on responsible AI over pressure to see innovation reach the public sooner.
  • Chatbots like OpenAI’s ChatGPT routinely make factual errors and often switch their answers depending on how a question is asked
  • To Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team doesn’t necessarily signal a shift in power or safety concerns, because those warning of the potential harms were never empowered to begin with. “If we were lucky, we’d get invited to a meeting,” said Gebru, who helped lead Google’s Ethical AI team until she was fired for a paper criticizing large language models.
  • Rumman Chowdhury, who led Twitter’s machine-learning ethics team until Elon Musk disbanded it in November, said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI.“We thought it was going to be China pushing the U.S., but looks like it’s start-ups,” she said.
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical Al development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”