
Instructional & Media Services at Dickinson College: Group items tagged "tech"


Ed Webb

The Ed-Tech Imaginary - 0 views

  • We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?
  • "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.
  • if Betsy DeVos is out there "reimagining," then we best be resisting
  • ...9 more annotations...
  • think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."
  • The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.
  • The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version
  • Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention.
  • we must also decolonize the ed-tech imaginary
  • Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.
  • This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century
  • Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner, even though I'd say that most practitioners today would say that they reject his theories; that cognitive science has supplanted behaviorism; and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner any more — which is odd considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.
  • so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • ...16 more annotations...
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
Ed Webb

Google and Meta moved cautiously on AI. Then came OpenAI's ChatGPT. - The Washington Post - 0 views

  • The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside
  • Tech giants have been skittish since public debacles like Microsoft’s Tay, which it took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right and tweet “Jews did 9/11.”
  • Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.
  • ...8 more annotations...
  • Silicon Valley’s sudden willingness to consider taking more reputational risk arrives as tech stocks are tumbling
  • A chatbot that pointed to one answer directly from Google could increase its liability if the response was found to be harmful or plagiarized.
  • AI has been through several hype cycles over the past decade, but the furor over DALL-E and ChatGPT has reached new heights.
  • Soon after OpenAI released ChatGPT, tech influencers on Twitter began to predict that generative AI would spell the demise of Google search. ChatGPT delivered simple answers in an accessible way and didn’t ask users to rifle through blue links. Besides, after a quarter of a century, Google’s search interface had grown bloated with ads and marketers trying to game the system.
  • Inside big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI isn’t as established as privacy or data security. Typically teams of AI researchers and engineers publish papers on their findings, incorporate their technology into the company’s existing infrastructure or develop new products, a process that can sometimes clash with other teams working on responsible AI over pressure to see innovation reach the public sooner.
  • Chatbots like OpenAI routinely make factual errors and often switch their answers depending on how a question is asked
  • To Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team doesn’t necessarily signal a shift in power or safety concerns, because those warning of the potential harms were never empowered to begin with. “If we were lucky, we’d get invited to a meeting,” said Gebru, who helped lead Google’s Ethical AI team until she was fired for a paper criticizing large language models.
  • Rumman Chowdhury, who led Twitter’s machine-learning ethics team until Elon Musk disbanded it in November, said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI. “We thought it was going to be China pushing the U.S., but looks like it’s start-ups,” she said.
Ed Webb

Please do a bad job of putting your courses online - Rebecca Barrett-Fox - 0 views

  • For my colleagues who are now being instructed to put some or all of the remainder of their semester online, now is a time to do a poor job of it. You are NOT building an online class. You are NOT teaching students who can be expected to be ready to learn online. And, most importantly, your class is NOT the highest priority of their OR your life right now. Release yourself from high expectations right now, because that’s the best way to help your students learn.
  • Remember the following as you move online: Your students know less about technology than you think. Many of them know less than you. Yes, even if they are digital natives and younger than you. They will be accessing the internet on their phones. They have limited data. They need to reserve it for things more important than online lectures. Students who did not sign up for an online course have no obligation to have a computer, high speed wifi, a printer/scanner, or a camera. Do not even survey them to ask if they have it. Even if they do, they are not required to tell you this. And if they do now, that doesn’t mean that they will when something breaks and they can’t afford to fix it because they just lost their job at the ski resort or off-campus bookstore. Students will be sharing their technology with other household members. They may have LESS time to do their schoolwork, not more.
  • ...14 more annotations...
  • Social isolation contributes to mental health problems. Social isolation contributes to domestic violence.
  • Do not require synchronous work. Students should not need to show up at a specific time for anything. REFUSE to do any synchronous work.
  • Do not record lectures unless you need to. (This is fundamentally different from designing an online course, where recorded information is, I think, really important.) They will be a low priority for students, and they take up a lot of resources on your end and on theirs. You have already built a rapport with them, and they don’t need to hear your voice to remember that.
  • Do record lectures if you need to. When information cannot be learned otherwise, include a lecture. Your university already has some kind of tech to record lectures. DO NOT simply record in PowerPoint as the audio quality is low. While many people recommend lectures of only 5 minutes, I find that my students really do listen to longer lectures. Still, remember that your students will be frequently interrupted in their listening, so a good rule is 1 concept per lecture. So, rather than a lecture on ALL of, say, gender inequality in your Intro to Soc course, deliver 5 minutes on pay inequity (or 15 minutes or 20 minutes, if that’s what you need) and then a separate lecture on #MeToo and yet another on domestic violence. Closed caption them using the video recording software your university provides. Note that YouTube also generates closed captions [edited to add: they are not ADA compliant, though]. If you don’t have to include images, skip the video recording and do a podcast instead.
  • Editing is a waste of your time right now.
  • Listen for them asking for help. They may be anxious. They may be tired. Many students are returning to their parents’ home where they may not be welcome. Others will be at home with partners who are violent. School has been a safe place for them, and now it’s not available to them. Your class may matter to them a lot when they are able to focus on it, but it may not matter much now, in contrast to all the other things they have to deal with. Don’t let that hurt your feelings, and don’t hold it against them in future semesters or when they come back to ask for a letter of recommendation.
  • Allow every exam or quiz to be taken at least twice, and tell students that this means that if there is a tech problem on the first attempt, the second attempt is their chance to correct it. This will save you from the work of resetting tests or quizzes when the internet fails or some other tech problem happens. And since it can be very hard to discern when such failures are really failures or students trying to win a second attempt at a quiz or test, you avoid having to deal with cheaters.
  • Do NOT require students to use online proctoring or force them to have themselves recorded during exams or quizzes. This is a fundamental violation of their privacy, and they did NOT sign up for that when they enrolled in your course.
  • Circumvent the need for proctoring by making every exam open-notes, open-book, and open-internet. The best way to avoid them taking tests together or sharing answers is to use a large test bank. (A minimal sketch of drawing randomized quizzes from a bank follows this list.)
  • Remind them of due dates. It might feel like handholding, but be honest: Don’t you appreciate the text reminder from your dentist that you have an appointment tomorrow? Your LMS has an announcement system that allows you to write an announcement now and post it later.
  • Make everything self-grading if you can (yes, multiple choice and T/F on quizzes and tests) or low-stakes (completed/not completed).
  • Don’t do too much. Right now, your students don’t need it. They need time to do the other things they need to do.
  • Make all work due on the same day and time for the rest of the semester. I recommend Sunday night at 11:59 pm.
  • This advice is very different from that which I would share if you were designing an online course. I hope it’s helpful, and for those of you moving your courses online, I hope it helps you understand the labor that is required in building an online course a bit better.
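
The test-bank and retake advice above is easy to picture in code. What follows is a minimal sketch in plain Python, with an invented question bank and invented student IDs, of how per-student, per-attempt random draws from a large bank blunt answer-sharing and make resets painless. LMSs such as Moodle and Canvas offer the same thing natively through question pools, so treat this as an illustration rather than something to build yourself.

```python
# Minimal sketch of randomized quiz generation from a large test bank.
# The bank contents and student IDs are invented; real LMSs (Moodle,
# Canvas) provide this via question pools attached to a quiz.
import random

# A bank of (question, answer) pairs; bigger banks mean less overlap.
bank = [(f"Question {i}", f"Answer {i}") for i in range(1, 201)]

def draw_quiz(bank, student_id, attempt=1, n_items=10):
    """Draw a reproducible random quiz for one student and attempt.

    Seeding on the (student, attempt) pair keeps a given attempt
    stable for regrading while varying across students and attempts.
    """
    rng = random.Random(f"{student_id}-{attempt}")
    return rng.sample(bank, n_items)

# Two students share few questions, so copying answers buys little...
quiz_a = draw_quiz(bank, student_id=1001)
quiz_b = draw_quiz(bank, student_id=1002)
shared = {q for q, _ in quiz_a} & {q for q, _ in quiz_b}
print(f"questions shared by two students: {len(shared)} of 10")

# ...and a second attempt after a tech failure is simply a fresh draw.
retake = draw_quiz(bank, student_id=1001, attempt=2)
repeated = {q for q, _ in quiz_a} & {q for q, _ in retake}
print(f"questions repeated on a retake: {len(repeated)} of 10")
```

With a 200-question bank and 10-item quizzes, two students typically share zero or one question, which also makes the self-grading, two-attempt policy above workable without proctoring.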
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E... - 0 views

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses. (A minimal sketch of this kind of phrase-matching follows this list.)
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • ...13 more annotations...
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on. “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
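
The monitoring products described in this piece are, at their core, pattern matchers. Here is a minimal sketch, with an invented phrase list and invented messages (vendors' actual lists and models are proprietary, which is part of the transparency complaint above), of why a song lyric or literature homework can trip the same wire as a genuine crisis:

```python
# Minimal sketch of phrase-based flagging, the mechanism behind many
# student-monitoring products. The phrase list and messages are invented.
import re

FLAG_PHRASES = ["kill myself", "shoot", "die"]

def flags(text):
    """Return the phrases that match, ignoring case and all context."""
    return [p for p in FLAG_PHRASES
            if re.search(rf"\b{re.escape(p)}\b", text, re.IGNORECASE)]

messages = [
    "I want to kill myself",                      # the intended signal
    "that track goes hard, shoot for the stars",  # song lyric
    "Romeo and Juliet both die at the end",       # literature homework
    "I can't take this anymore",                  # real distress, no keyword
]

for m in messages:
    hit = flags(m)
    label = f"FLAGGED {hit}" if hit else "ok"
    print(f"{label:<40} <- {m}")
```

The lyric and the homework are flagged while the last message sails through: keyword matching produces both false positives and false negatives, and no phrase list fixes both at once. That is the gap between vendor claims and the absence of independent evaluation noted above.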
Ed Webb

High-Tech Cheating on Homework Abounds, and Professors Are Partly to Blame - Technology... - 0 views

  • "I call it 'technological detachment phenomenon,'" he told me recently. "As long as there's some technology between me and the action, then I'm not culpable for the action." By that logic, if someone else posted homework solutions online, what's wrong with downloading them?
  • "The feeling about homework is that it's really just busywork,"
  • professors didn't put much effort into teaching, so students don't put real effort into learning
  • ...5 more annotations...
  • "The current system places too great a burden on individual faculty who would, under the circumstances, appear to have perverse incentives: Pursuing these matters lowers course evaluations, takes their severely limited time away from research for promotion, and unfortunately personalizes the issue when it is not personal at all, but a violation against the university."
  • In the humanities, professors have found technological tools to check for blatant copying on essays, and have caught so many culprits that the practice of running papers through plagiarism-detection services has become routine at many colleges. But that software is not suited to science-class assignments.
  • a "studio" model of teaching
  • The parents paid tuition in cash
  • The idea that students should be working in a shell is so interesting. It never even occurred to me as a student that I shouldn't work with someone else on my homework. How else do you figure it out? I guess that is peer-to-peer teaching. Copying someone else's work and presenting it as your own is clearly wrong (and, as demonstrated above, doesn't do the student any good), but learning from the resources at hand ought to be encouraged. After all, struggling through homework problems in intro physics is how you learn in the first place.
Ed Webb

Wired Campus: Whitman Takes Manhattan - Chronicle.com - 0 views

  • Next fall, some modern New Yorkers — students at City Tech, CUNY’s New York City College of Technology — will explore the Fulton Ferry Landing that Whitman described in the poem and record their investigations on a Web site. Meanwhile, thanks to open-source software, students at three other institutions — New York University, Rutgers University at Camden, and the University of Mary Washington, in Virginia — will be recording their own literary and geographical explorations of Whitman’s work on that same Web site. The project, “Looking for Whitman: The Poetry of Place in the Life and Work of Walt Whitman,” is the brainchild of a group of professors at all four schools led by Matthew K. Gold, an assistant professor of English at City Tech. It received a start-up grant of $25,000 from the National Endowment for the Humanities’ Office of Digital Humanities. James Groom, an instructional-technology specialist at the University of Mary Washington, is the site’s architect.
  • Mr. Gold believes that Whitman would appreciate the openness of the endeavor. The poet was nothing if not open source:
Ed Webb

Review: Windows 7 strong, but don't pay to upgrade by AP: Yahoo! Tech - 0 views

  • Next week, Microsoft is releasing Windows 7, a slick, much improved operating system that should go a long way toward erasing the bad impression left by its previous effort, Vista.
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still... - 0 views

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • ...3 more annotations...
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters. (The sketch after this list illustrates this trade-off on synthetic data.)
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
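
The list of failure modes above can be made concrete in a few lines of code. This is a minimal sketch on synthetic data, with invented group sizes and distributions, showing one of the mechanisms named: when one group dominates the training data and a single decision rule is tuned for overall accuracy, the minority group absorbs most of the errors.

```python
# Minimal sketch: a single threshold tuned for overall accuracy on
# imbalanced synthetic data serves the minority group worse.
# Group sizes and distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Group A is 95% of the data; group B's scores relate to the true
# label differently (its natural decision point sits near 1, not 0).
n_a, n_b = 9500, 500
x_a = rng.normal(0.0, 1.0, n_a)
x_b = rng.normal(1.0, 1.0, n_b)
y_a = (x_a + rng.normal(0, 0.5, n_a) > 0.0).astype(int)
y_b = (x_b + rng.normal(0, 0.5, n_b) > 1.0).astype(int)

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

def accuracy(xs, ys, t):
    return float(((xs > t).astype(int) == ys).mean())

# Pick the one global threshold that maximizes overall accuracy.
thresholds = np.linspace(-2, 3, 501)
t_best = max(thresholds, key=lambda t: accuracy(x, y, t))

print(f"global threshold {t_best:.2f}")
print(f"  group A accuracy: {accuracy(x_a, y_a, t_best):.3f}")
print(f"  group B accuracy: {accuracy(x_b, y_b, t_best):.3f}")

# Group B does far better under its own threshold, but then the rule
# is no longer one-size-fits-all: accuracy gets traded for fairness.
t_b = max(thresholds, key=lambda t: accuracy(x_b, y_b, t))
print(f"  group B accuracy at its own best threshold: {accuracy(x_b, y_b, t_b):.3f}")
```

No one here intended harm: the bias falls out of the data composition and the single objective, which is exactly the article's point about how easily this happens.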
Ed Webb

The Myth Of AI | Edge.org - 0 views

  • The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
  • Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
  • there's no way to tell where the border is between measurement and manipulation in these systems
  • ...8 more annotations...
  • It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
  • What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
  • In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
  • If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
  • To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things.
Ed Webb

12 Reasons to Ditch the Pen - Why it's no longer mightiest against the sword by Lisa Ni... - 1 views

  • Not there yet
Ed Webb

What Bruce Sterling Actually Said About Web 2.0 at Webstock 09 | Beyond the Beyond from... - 0 views

  • things in it that pretended to be ideas, but were not ideas at all: they were attitudes
    • Ed Webb: Like Edupunk
  • A sentence is a verbal construction meant to express a complete thought. This congelation that Tim O'Reilly constructed, that is not a complete thought. It's a network in permanent beta.
  • This chart is five years old now, which is 35 years old in Internet years, but intellectually speaking, it's still new in the world. It's alarming how hard it is to say anything constructive about this from any previous cultural framework.
  • ...20 more annotations...
  • "The cloud as platform." That is insanely great. Right? You can't build a "platform" on a "cloud!" That is a wildly mixed metaphor! A cloud is insubstantial, while a platform is a solid foundation! The platform falls through the cloud and is smashed to earth like a plummeting stock price!
  • luckily, we have computers in banking now. That means Moore's law is gonna save us! Instead of it being really obvious who owes what to whom, we can have a fluid, formless ownership structure that's always in permanent beta. As long as we keep moving forward, adding attractive new features, the situation is booming!
  • Web 2.0 is supposed to be business. This isn't a public utility or a public service, like the old model of an Information Superhighway established for the public good.
  • it's turtles all the way down
  • "Tagging not taxonomy." Okay, I love folksonomy, but I don't think it's gone very far. There have been books written about how ambient searchability through folksonomy destroys the need for any solid taxonomy. Not really. The reality is that we don't have a choice, because we have no conceivable taxonomy that can catalog the avalanche of stuff on the Web.
  • JavaScript is the duct tape of the Web. Why? Because you can do anything with it. It's not the steel girders of the web, it's not the laws of physics of the web. Javascript is beloved of web hackers because it's an ultimate kludge material that can stick anything to anything. It's a cloud, a web, a highway, a platform and a floor wax. Guys with attitude use JavaScript.
  • Before the 1990s, nobody had any "business revolutions." People in trade are supposed to be very into long-term contracts, a stable regulatory environment, risk management, and predictable returns to stockholders. Revolutions don't advance those things. Revolutions annihilate those things. Is that "businesslike"? By whose standards?
  • I just wonder what kind of rattletrap duct-taped mayhem is disguised under a smooth oxymoron like "collective intelligence."
  • the people whose granular bits of input are aggregated by Google are not a "collective." They're not a community. They never talk to each other. They've got basically zero influence on what Google chooses to do with their mouseclicks. What's "collective" about that?
  • I really think it's the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is "intelligence." That is not intelligence. That is rules-based machine behavior. It's code being executed. It's a powerful thing, it's a beautiful thing, but to call that "intelligence" is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.
  • I'd definitely like some better term for "collective intelligence," something a little less streamlined and metaphysical. Maybe something like "primeval meme ooze" or "semi-autonomous data propagation." Even some Kevin Kelly style "neobiological out of control emergent architectures." Because those weird new structures are here, they're growing fast, we depend on them for mission-critical acts, and we're not gonna get rid of them any more than we can get rid of termite mounds.
  • Web 2.0 guys: they've got their laptops with whimsical stickers, the tattoos, the startup T-shirts, the brainy-glasses -- you can tell them from the general population at a glance. They're a true creative subculture, not a counterculture exactly -- but in their number, their relationship to the population, quite like the Arts and Crafts people from a hundred years ago. Arts and Crafts people, they had a lot of bad ideas -- much worse ideas than Tim O'Reilly's ideas. It wouldn't bother me any if Tim O'Reilly was Governor of California -- he couldn't be any weirder than that guy they've got already. Arts and Crafts people gave it their best shot, they were in earnest -- but everything they thought they knew about reality was blown to pieces by the First World War. After that misfortune, there were still plenty of creative people surviving. Futurists, Surrealists, Dadaists -- and man, they all despised Arts and Crafts. Everything about Art Nouveau that was sexy and sensual and liberating and flower-like, man, that stank in their nostrils. They thought that Art Nouveau people were like moronic children.
  • in the past eighteen months, 24 months, we've seen ubiquity initiatives from Nokia, Cisco, General Electric, IBM... Microsoft even, Jesus, Microsoft, the place where innovative ideas go to die.
  • what comes next is a web with big holes blown in it. A spiderweb in a storm. The turtles get knocked out from under it, the platform sinks through the cloud. A lot of the inherent contradictions of the web get revealed, the contradictions in the oxymorons smash into each other. The web has to stop being a meringue frosting on the top of business, this make-do melange of mashups and abstraction layers. Web 2.0 goes away. Its work is done. The thing I always loved best about Web 2.0 was its implicit expiration date. It really took guts to say that: well, we've got a bunch of cool initiatives here, and we know they're not gonna last very long. It's not Utopia, it's not a New World Order, it's just a brave attempt to sweep up the ashes of the burst Internet Bubble and build something big and fast with the small burnt-up bits that were loosely joined. That showed more maturity than Web 1.0. It was visionary, it was inspiring, but there were fewer moon rockets flying out of its head. "Gosh, we're really sorry that we accidentally ruined the NASDAQ." We're Internet business people, but maybe we should spend less of our time stock-kiting. The Web's a communications medium -- how 'bout working on the computer interface, so that people can really communicate? That effort was time well spent. Really.
  • The poorest people in the world love cellphones.
  • Digital culture, I knew it well. It died -- young, fast and pretty. It's all about network culture now.
  • There's gonna be a Transition Web. Your economic system collapses: Eastern Europe, Russia, the Transition Economy, that bracing experience is for everybody now. Except it's not Communism transitioning toward capitalism. It's the whole world into transition toward something we don't even have proper words for.
  • The Transition Web is a culture model. If it's gonna work, it's got to replace things that we used to pay for with things that we just plain use.
  • Not every Internet address was a dotcom. In fact, dotcoms showed up pretty late in the day, and they were not exactly welcome. There were dot-orgs, dot edus, dot nets, dot govs, and dot localities. Once upon a time there were lots of social enterprises that lived outside the market; social movements, political parties, mutual aid societies, philanthropies. Churches, criminal organizations -- you're bound to see plenty of both of those in a transition... Labor unions... not little ones, but big ones like Solidarity in Poland; dissident organizations, not hobby activists, big dissent, like Charter 77 in Czechoslovakia. Armies, national guards. Rescue operations. Global non-governmental organizations. Davos Forums, Bilderberg guys. Retired people. The old people can't hold down jobs in the market. Man, there's a lot of 'em. Billions. What are our old people supposed to do with themselves? Websurf, I'm thinking. They're wise, they're knowledgeable, they're generous by nature; the 21st century is destined to be an old people's century. Even the Chinese, Mexicans, Brazilians will be old. Can't the web make some use of them, all that wisdom and talent, outside the market?
  • I've never seen so much panic around me, but panic is the last thing on my mind. My mood is eager impatience. I want to see our best, most creative, best-intentioned people in world society directly attacking our worst problems. I'm bored with the deceit. I'm tired of obscurantism and cover-ups. I'm disgusted with cynical spin and the culture war for profit. I'm up to here with phony baloney market fundamentalism. I despise a prostituted society where we put a dollar sign in front of our eyes so we could run straight into the ditch. The cure for panic is action. Coherent action is great; for a scatterbrained web society, that may be a bit much to ask. Well, any action is better than whining. We can do better.
Ed Webb

A Few Responses to Criticism of My SXSW-Edu Keynote on Media Literacy - 0 views

  • Can you give me examples of programs that are rooted in, speaking to, and resonant with conservative and religious communities in this country? In particular, I’d love to know about programs that work in conservative white Evangelical and religious black and LatinX communities? I’d love to hear how educators integrate progressive social justice values into conservative cultural logics. Context: To the best that I can tell, every program I’ve seen is rooted in progressive (predominantly white) ways of thinking. I know that communities who define “fake news” as CNN (as well as black communities who see mainstream media as rooted in the history of slavery and white supremacy) have little patience for the logics of progressive white educators. So what does media literacy look like when it starts with religious and/or conservative frameworks? What examples exist?
  • Can you tell me how you teach across gaslighting? How do you stabilize students’ trust in Information, particularly among those whose families are wary of institutions and Information intermediaries? Context: Foreign adversaries (and some domestic groups) are primarily focused on destabilizing people’s trust in information intermediaries. They want people to doubt everything and turn their backs on institutions. We are seeing the impact of this agenda. I’m not finding that teaching someone the source of a piece of content helps build up trust. Instead, it seems to further undermine it. So how do you approach media literacy to build up confidence in institutions and information intermediaries?
  • For what it’s worth, when I try to untangle the threads to actually address the so-called “fake news” problem, I always end in two places: 1) dismantle financialized capitalism (which is also the root cause of some of the most challenging dynamics of tech companies); 2) reknit the social fabric of society by strategically connecting people. But neither of those are recommendations for educators.
Ed Webb

9 Ways Online Teaching Should be Different from Face-to-Face | Cult of Pedagogy - 0 views

  • Resist the temptation to dive right into curriculum at the start of the school year. Things will go more smoothly if you devote the early weeks to building community so students feel connected. Social emotional skills can be woven in during this time. On top of that, students need practice with whatever digital tools you’ll be using. So focus your lessons on those things, intertwining the two when possible. 
  • Online instruction is made up largely of asynchronous instruction, which students can access at any time. This is ideal, because requiring attendance for synchronous instruction puts some students at an immediate disadvantage if they don’t have the same access to technology, reliable internet, or a flexible home schedule. 
  • you’re likely to offer “face-to-face” or synchronous opportunities at some point, and one way to make them happen more easily is to have students meet in small groups. While it’s nearly impossible to arrange for 30 students to attend a meeting at once, assigning four students to meet is much more manageable.
  • ...9 more annotations...
  • What works best, Kitchen says, is to keep direct instruction—things like brief video lectures and readings—in asynchronous form, using checks for understanding like embedded questions or exit slips.  You can then use synchronous meetings for more interactive, engaging work. “If we want students showing up, if we want them to know that this is worth their time,” Kitchen explains, “it really needs to be something active and engaging for them. Any time they can work with the material, categorize it, organize it, share further thoughts on it, have a discussion, all of those are great things to do in small groups.” 
  • The Jigsaw method, where students form expert groups on a particular chunk of content, then teach that content to other students; discussion strategies adapted for virtual settings; using best practices for cooperative learning; Visible Thinking routines; Gamestorming and other business-related protocols adapted for education, where students take on the role of customers/stakeholders
  • What really holds leverage for the students? What has endurance? What knowledge is essential? What knowledge and skills do students need to have before they move to the next grade level or the next class? What practices can be emphasized that transfer across many content areas? Skills like analyzing, constructing arguments, building a strong knowledge base through texts, and speaking can all be taught through many different subjects. What tools can serve multiple purposes? Teaching students to use something like Padlet gives them opportunities to use audio, drawing, writing, and video. Non-digital tools can also work: Students can use things they find around the house, like toilet paper rolls, to fulfill other assignments, and then submit their work with a photo.
  • Online instruction is not conducive to covering large amounts of content, so you have to choose wisely, teaching the most important things at a slower pace.
  • Provide instructions in a consistent location and at a consistent time. This advice was already given for parents, but it’s worth repeating here through the lens of instructional design: Set up lessons so that students know where to find instructions every time. Make instructions explicit. Read and re-read to make sure these are as clear as possible. Make dogfooding your lessons a regular practice to root out problem areas. Offer multimodal instructions. If possible, provide both written and video instructions for assignments, so students can choose the format that works best for them. You might also offer a synchronous weekly or daily meeting; what’s great about doing these online is that even if you teach several sections of the same class per day, students are no longer restricted to class times and can attend whatever meeting works best for them.
  • put the emphasis on formative feedback as students work through assignments and tasks, rather than simply grading them at the end. 
  • In online learning, Kitchen says, “There are so many ways that students can cheat, so if we’re giving them just the traditional quiz or test, it’s really easy for them to be able to just look up that information.” A great solution to this problem is to have students create things.
  • For assessment, use a detailed rubric that highlights the learning goals the end product will demonstrate. A single-point rubric works well for this. To help students discover tools to work with, this list of tools is organized by the type of product each one creates. Another great source of ideas is the Teacher’s Guide to Tech. When developing the assignment, rather than focusing on the end product, start by getting clear on what you want students to DO with that product.
  • Clear and consistent communication; creating explicit and consistent rituals and routines; using research-based instructional strategies; determining whether to use digital or non-digital tools for an assignment; a focus on authentic learning, where authentic products are created and students have voice and choice in assignments
Ed Webb

Waving the Asynchronous Flag - CogDogBlog - 0 views

  • in all the pivot talk, there’s a tinge of favoring the synchronous over the asynchronous
  • it’s not synchronous BAD / asynchronous GOOD
  • In terms of teaching, one of my favorite approaches, seen now through sepia-toned web glasses, is participants/learners creating/writing/publishing in their own spaces, with the class space being a syndication hub. The old gold ds106, as I must remind, is still chugging along after 10 years, while in that span most every Name Your Tech Fad has crested and sunk to the bottom of the Gartner hype trough. (A minimal sketch of such a hub follows this list.)
  • ...2 more annotations...
  • I think we ought to be placing a lot of thought and effort into asynchronous events and activities
  • The whole idea of distributed activity, woven in with daily challenges and assignment banks, was asynchronous beauty. But not without synchronous bits, be it class visits or running live sessions on ds106radio. Twas a mix.
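
The syndication-hub model is mechanically simple, which is part of its durability: each participant publishes on a site they control, and the class hub only aggregates their feeds. ds106 built this on WordPress with feed syndication; the sketch below uses the third-party feedparser library and invented feed URLs to show the whole idea in one screen.

```python
# Minimal sketch of a class syndication hub in the ds106 style: the
# hub owns nothing, it only aggregates feeds students publish elsewhere.
# Needs the third-party feedparser package (pip install feedparser).
# The feed URLs are invented placeholders.
import time
import feedparser

STUDENT_FEEDS = [
    "https://student-one.example.net/feed/",
    "https://student-two.example.net/blog/rss.xml",
]

def collect(feed_urls):
    """Fetch every feed and return all entries, newest first."""
    posts = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            posts.append({
                "author": parsed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "when": entry.get("published_parsed") or time.gmtime(0),
            })
    return sorted(posts, key=lambda p: p["when"], reverse=True)

# The course front page is then just the newest posts across everyone.
for post in collect(STUDENT_FEEDS)[:20]:
    print(time.strftime("%Y-%m-%d", post["when"]), "|",
          post["author"], "|", post["title"])
```

Because the hub only reads feeds, it is asynchronous by construction: students post on their own schedule, in their own spaces, and the class stream assembles itself.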