Instructional & Media Services at Dickinson College
Group items tagged "stories"

Ed Webb

Stories Matter - 0 views

  • Stories Matter will have a second phase wherein independent academics, teachers, and other interested communities will be able to download the software and apply it to their own collections, or interact with already clipped interviews posted by the Life Stories CURA project. The goal is to make Stories Matter an accessible tool for oral historians from all walks of life, and to provide people with an alternative to transcription that will ensure researchers continue interacting with and learning from the interviews they conduct once the interview is completed.
  • Download the software: the application runs on Adobe AIR 1.5.1, on Windows, Mac, or Linux.
  •  
    May be of interest to several on campus
Ed Webb

The Ed-Tech Imaginary - 0 views

  • We can say "Black lives matter," but we must also demonstrate through our actions that Black lives matter, and that means we must radically alter many of our institutions and practices, recognizing their inhumanity and carcerality. And that includes, no doubt, ed-tech. How much of ed-tech is, to use Ruha Benjamin's phrase, "the new Jim Code"? How much of ed-tech is designed by those who imagine students as cheats or criminals, as deficient or negligent?
  • "Reimagining" is a verb that education reformers are quite fond of. And "reimagining" seems too often to mean simply defunding, privatizing, union-busting, dismantling, outsourcing.
  • if Betsy DeVos is out there "reimagining," then we best be resisting
  • [I] think we can view the promotion of ed-tech as a similar sort of process — the stories designed to convince us that the future of teaching and learning will be a technological wonder. The "jobs of the future that don't exist yet." The push for everyone to "learn to code."
  • The Matrix is, after all, a dystopia. So why would Matrix-style learning be desirable? Maybe that's the wrong question. Perhaps it's not so much that it's desirable, but it's just how our imaginations have been constructed, constricted even. We can't imagine any other ideal but speed and efficiency.
  • The first science fiction novel, published over 200 years ago, was in fact an ed-tech story: Mary Shelley's Frankenstein. While the book is commonly interpreted as a tale of bad science, it is also the story of bad education — something we tend to forget if we only know the story through the 1931 film version
  • Teaching machines and robot teachers were part of the Sixties' cultural imaginary — perhaps that's the problem with so many Boomer ed-reform leaders today. But that imaginary — certainly in the case of The Jetsons — was, upon close inspection, not always particularly radical or transformative. The students at Little Dipper Elementary still sat in desks in rows. The teacher still stood at the front of the class, punishing students who weren't paying attention.
  • we must also decolonize the ed-tech imaginary
  • Zuckerberg gave everyone at Facebook a copy of the Ernest Cline novel Ready Player One, for example, to get them excited about building technology for the future — a book that is really just a string of nostalgic references to Eighties white boy culture. And I always think about that New York Times interview with Sal Khan, where he said that "The science fiction books I like tend to relate to what we're doing at Khan Academy, like Orson Scott Card's 'Ender's Game' series." You mean, online math lectures are like a novel that justifies imperialism and genocide?! Wow.
  • This ed-tech imaginary is segregated. There are no Black students at the push-button school. There are no Black people in The Jetsons — no Black people living the American dream of the mid-twenty-first century
  • Part of the argument I make in my book is that much of education technology has been profoundly shaped by Skinner, even though I'd say that most practitioners today would say that they reject his theories; that cognitive science has supplanted behaviorism; and that after Ayn Rand and Noam Chomsky trashed Beyond Freedom and Dignity, no one paid attention to Skinner any more — which is odd considering there are whole academic programs devoted to "behavioral design," bestselling books devoted to the "nudge," and so on.
  • so much of the ed-tech imaginary is wrapped up in narratives about the Hero, the Weapon, the Machine, the Behavior, the Action, the Disruption. And it's so striking because education should be a practice of care, not conquest
Ed Webb

Would You Protect Your Computer's Feelings? Clifford Nass Says Yes. - ProfHacker - The ... - 0 views

  • The Man Who Lied to His Laptop condenses for a popular audience an argument that Nass has been making for at least 15 years: humans do not differentiate between computers and people in their social interactions.
  • At first blush, this sounds absurd. Everyone knows that it's "just a computer," and of course computers don't have feelings. And yet. Nass has a slew of amusing stories—and, crucially, studies based on those stories—indicating that, no matter what "everyone knows," people act as if the computer secretly cares. For example: In one study, users reviewed a software package, either on the same computer they'd used it on, or on a different computer. Consistently, participants gave the software better ratings when they reviewed it on the same computer—as if they didn't want the computer to feel bad. What's more, Nass notes, "every one of the participants insisted that she or he would never bother being polite to a computer" (7).
  • Nass found that users given completely random praise by a computer program liked it more than the same program without praise, even though they knew in advance the praise was meaningless. In fact, they liked it as much as the same program when they were told the praise was accurate. (In other words, flattery was as well received as praise, and both were preferred to no positive comments.) Again, when questioned about the results, users angrily denied any difference at all in their reactions.
  •  
    How do you interact with the computing devices in your life?
Ed Webb

Search Engine Helps Users Connect In Arabic : NPR - 0 views

  • new technology that is revolutionizing the way Arabic-speaking people use the Internet
  • Abdullah says that of her 500 Egyptian students, 78 percent have never typed in Arabic online, a fact that greatly disturbed Habib Haddad, a Boston-based software engineer originally from Lebanon. "I mean imagine [if] 78 percent of French people don't type French," Haddad says. "Imagine how destructive that is online."
  • "The idea is, if you don't have an Arabic keyboard, you can type Arabic by spelling your words out phonetically," Jureidini says. "For example ... when you're writing the word 'falafel,' Yamli will convert that to Arabic in your Web browser. We will go and search not only the Arabic script version of that search query, but also for all the Western variations of that keyword."
  • At a recent "new" technology forum at MIT, Yamli went on to win best of show — a development that did not escape the attention of Google, which recently developed its own search and transliteration engine. "I guess Google recognizes a good idea when it sees it," Jureidini says. He adds, "And the way we counter it is by being better. We live and breathe Yamli every day, and we're constantly in the process of improving how people can use it." Experts in Arabic Web content say that since its release a year ago, Yamli has helped increase Arabic content on the Internet just by its use. They say that bodes well for the Arabic Web and for communication between the Arab and Western worlds.
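Not Yamli's actual engine (which is proprietary), but the mechanism Jureidini describes can be sketched as a greedy longest-match mapping from phonetic Latin spellings to Arabic script. The mapping table below is a toy assumption, not Yamli's real rules:

```python
# Minimal sketch of phonetic Latin-to-Arabic transliteration, in the spirit
# of what Yamli does in the browser. The mapping table is a toy assumption;
# a real engine ranks many candidate spellings statistically and would also
# expand a search to Western variants of the same keyword.

MAPPING = {
    # two-letter digraphs first, then single letters (simplified)
    "sh": "ش", "th": "ث", "kh": "خ", "gh": "غ",
    "a": "ا", "b": "ب", "t": "ت", "f": "ف",
    "l": "ل", "e": "ي", "k": "ك", "s": "س",
}

def transliterate(word: str) -> str:
    """Greedy longest-match conversion of a phonetic Latin spelling."""
    out, i = [], 0
    while i < len(word):
        for size in (2, 1):  # try digraphs before single letters
            chunk = word[i:i + size].lower()
            if chunk in MAPPING:
                out.append(MAPPING[chunk])
                i += size
                break
        else:
            i += 1  # skip characters the toy table cannot map
    return "".join(out)

print(transliterate("falafel"))  # one plausible Arabic rendering
```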
Ed Webb

Dark Social: We Have the Whole History of the Web Wrong - Alexis C. Madrigal - The Atla... - 0 views

  • this vast trove of social traffic is essentially invisible to most analytics programs. I call it DARK SOCIAL. It shows up variously in programs as "direct" or "typed/bookmarked" traffic, which implies to many site owners that you actually have a bookmark or typed www.theatlantic.com into your browser. But that's not actually what's happening a lot of the time. Most of the time, someone Gchatted someone a link, or it came in on a big email distribution list, or your dad sent it to you
  • the idea that "social networks" and "social media" sites created a social web is pervasive. Everyone behaves as if the traffic your stories receive from the social networks (Facebook, Reddit, Twitter, StumbleUpon) is the same as all of your social traffic
  • if you think optimizing your Facebook page and Tweets is "optimizing for social," you're only halfway (or maybe 30 percent) correct. The only real way to optimize for social spread is in the nature of the content itself. There's no way to game email or people's instant messages. There's no power users you can contact. There's no algorithms to understand. This is pure social, uncut
  • Almost 69 percent of social referrals were dark! Facebook came in second at 20 percent. Twitter was down at 6 percent
  • direct social
  • the social sites that arrived in the 2000s did not create the social web, but they did structure it. This is really, really significant. In large part, they made sharing on the Internet an act of publishing (!), with all the attendant changes that come with that switch. Publishing social interactions makes them more visible, searchable, and adds a lot of metadata to your simple link or photo post. There are some great things about this, but social networks also give a novel, permanent identity to your online persona. Your taste can be monetized, by you or (much more likely) the service itself
  • the tradeoff we make on social networks is not the one that we're told we're making. We're not giving our personal data in exchange for the ability to share links with friends. Massive numbers of people -- a larger set than exists on any social network -- already do that outside the social networks. Rather, we're exchanging our personal data in exchange for the ability to publish and archive a record of our sharing. That may be a transaction you want to make, but it might not be the one you've been told you made.
  • "Only about four percent of total traffic is on mobile at all, so, at least as a percentage of total referrals, app referrals must be a tiny percentage,"
  • only 0.3 percent of total traffic has the Facebook mobile site as a referrer and less than 0.1 percent has the Facebook mobile app
  •  
    Heh. Social is really social, not 'social' - who knew?
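Madrigal's accounting can be approximated from an ordinary referrer log: visits arriving from known social sites are the visible social traffic, while referrer-less visits landing on deep article URLs (which almost nobody types by hand) get attributed to dark social. A minimal sketch, with an assumed log format and an assumed domain list:

```python
# Toy classifier for Madrigal's dark-social accounting. The log format and
# the set of social referrer domains are assumptions for illustration.
from urllib.parse import urlparse

SOCIAL_DOMAINS = {"facebook.com", "twitter.com", "reddit.com", "stumbleupon.com"}

def classify(referrer: str, landing_path: str) -> str:
    """Label one visit as 'social', 'dark social', or 'other'."""
    if referrer:
        host = urlparse(referrer).netloc.removeprefix("www.")
        return "social" if host in SOCIAL_DOMAINS else "other"
    # No referrer on a deep article URL: almost nobody types one by hand,
    # so the visit likely came from chat, email, or a native app.
    return "dark social" if landing_path.count("/") > 1 else "other"

visits = [
    ("https://www.facebook.com/", "/technology/archive/2012/10/dark-social/"),
    ("", "/technology/archive/2012/10/dark-social/"),  # pasted into a chat
    ("", "/"),  # a true direct visit to the homepage
]
for ref, path in visits:
    print(classify(ref, path))  # social, dark social, other
```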
Ed Webb

It's just not working out the way we thought it would « Lisa's (Online) Teach... - 0 views

  • Gradually, closed spaces (Facebook, Ning, even Google if you understand what they’re up to) have become the norm, as have monetized sites. The spaces that were free are no longer free, although many of us freely contributed our own work to these sites, providing the basis of their popularity in the first place. Crowdsourcing, celebrated in story and song, has become the exploitation of the work of others in order to make money or provide cheap customer service. The use of personal information for marketing purposes is widespread, and creative people are leaving the platforms that brought everyone into the agora in the first place. Scholars at first enthusiastic about the future now see it as a lonely place. And I see conversations where people who care deeply about the web, education for the 21st century, and learning theories are beginning to back away from proselytizing about academic openness.
  • it’s about users becoming the products in the marketplace and the amusements in the panopticon
  • Where before it might have made sense to say we should make sure everyone is web literate, now such literacy extends beyond critical thinking about websites into a deeper understanding of what using the web means for individual privacy and independence. This time, the enemies of openness and freedom won’t need to argue their philosophical reasons – they’ll argue that they’re protecting people. And the trouble is, they may be right.
  • We need to be the antidote for blind adoption
Ed Webb

StoryMap JS - Telling stories with maps. - 1 views

  •  
    Looks neat. Only in Alpha as yet
Ed Webb

Study Shows Students Are Addicted to Social Media | News | Communications of the ACM - 0 views

  • most college students are not just unwilling, but functionally unable to be without their media links to the world. "I clearly am addicted and the dependency is sickening," says one person in the study. "I feel like most people these days are in a similar situation, for between having a Blackberry, a laptop, a television, and an iPod, people have become unable to shed their media skin."
  • what they wrote at length about was how they hated losing their personal connections. Going without media meant, in their world, going without their friends and family
  • they couldn't connect with friends who lived close by, much less those far away
  • "Texting and IM-ing my friends gives me a constant feeling of comfort," wrote one student. "When I did not have those two luxuries, I felt quite alone and secluded from my life. Although I go to a school with thousands of students, the fact that I was not able to communicate with anyone via technology was almost unbearable."
  • students' lives are wired together in such ways that opting out of that communication pattern would be tantamount to renouncing a social life
  • "Students expressed tremendous anxiety about being cut-off from information,"
  • How did they get the information? In a disaggregated way, and not typically from the news outlet that broke or committed resources to a story.
  • the young adults in this study appeared to be generally oblivious to branded news and information
  • an undifferentiated wave to them via social media
  • 43.3 percent of the students reported that they had a "smart phone"
Ed Webb

Elgan: Why goofing off boosts productivity - 0 views

  • I believe that not only are office slackers more productive than work-only employees, but that people who work from home are more productive than the office crowd -- and for many of the same reasons
  • 2. It gets personal things off your mind.
  • 3. It builds work relationships.
  • 1. The subconscious mind keeps working.
  • 4. It converts real-time interactions into asynchronous ones.
  •  
    That's my story and I'm sticking to it.
Ed Webb

The powerful and mysterious brain circuitry that makes us love Google, Twitter, and tex... - 0 views

  • For humans, this desire to search is not just about fulfilling our physical needs. Panksepp says that humans can get just as excited about abstract rewards as tangible ones. He says that when we get thrilled about the world of ideas, about making intellectual connections, about divining meaning, it is the seeking circuits that are firing.
  • Our internal sense of time is believed to be controlled by the dopamine system. People with hyperactivity disorder have a shortage of dopamine in their brains, which a recent study suggests may be at the root of the problem. For them even small stretches of time seem to drag.
  • When we get the object of our desire (be it a Twinkie or a sexual partner), we engage in consummatory acts that Panksepp says reduce arousal in the brain and temporarily, at least, inhibit our urge to seek.
  • But our brains are designed to more easily be stimulated than satisfied. "The brain seems to be more stingy with mechanisms for pleasure than for desire," Berridge has said. This makes evolutionary sense. Creatures that lack motivation, that find it easy to slip into oblivious rapture, are likely to lead short (if happy) lives. So nature imbued us with an unquenchable drive to discover, to explore. Stanford University neuroscientist Brian Knutson has been putting people in MRI scanners and looking inside their brains as they play an investing game. He has consistently found that the pictures inside our skulls show that the possibility of a payoff is much more stimulating than actually getting one.
  • all our electronic communication devices—e-mail, Facebook feeds, texts, Twitter—are feeding the same drive as our searches. Since we're restless, easily bored creatures, our gadgets give us in abundance qualities the seeking/wanting system finds particularly exciting. Novelty is one. Panksepp says the dopamine system is activated by finding something unexpected or by the anticipation of something new. If the rewards come unpredictably—as e-mail, texts, updates do—we get even more carried away. No wonder we call it a "CrackBerry."
  • If humans are seeking machines, we've now created the perfect machines to allow us to seek endlessly. This perhaps should make us cautious. In Animals in Translation, Temple Grandin writes of driving two indoor cats crazy by flicking a laser pointer around the room. They wouldn't stop stalking and pouncing on this ungraspable dot of light—their dopamine system pumping. She writes that no wild cat would indulge in such useless behavior: "A cat wants to catch the mouse, not chase it in circles forever." She says "mindless chasing" makes an animal less likely to meet its real needs "because it short-circuits intelligent stalking behavior." For those of us chasing flickering bits of information, it's a salutary warning.
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still... - 0 views

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
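One way to see how bias hides behind a model's parameters is a per-group audit of its predictions. The sketch below is purely illustrative bookkeeping with invented labels, not a real evaluation; it shows how a classifier can look fine in aggregate while failing one group:

```python
# Toy fairness audit: per-group accuracy of a classifier's predictions.
# The records are invented to illustrate the bookkeeping, not a finding.

def group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    totals, correct = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),  # group A: 4/4 right
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),  # group B: 2/4 right
]
print(group_accuracy(records))  # {'A': 1.0, 'B': 0.5}, yet 75% overall
```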
Ed Webb

The Generative AI Race Has a Dirty Secret | WIRED - 0 views

  • The race to build high-performance, AI-powered search engines is likely to require a dramatic rise in computing power, and with it a massive increase in the amount of energy that tech companies require and the amount of carbon they emit.
  • Every time we see a step change in online processing, we see significant increases in the power and cooling resources required by large processing centres
  • third-party analysis by researchers estimates that the training of GPT-3, which ChatGPT is partly based on, consumed 1,287 MWh, and led to emissions of more than 550 tons of carbon dioxide equivalent—the same amount as a single person taking 550 roundtrips between New York and San Francisco
  • There’s also a big difference between utilizing ChatGPT—which investment bank UBS estimates has 13 million users a day—as a standalone product, and integrating it into Bing, which handles half a billion searches every day.
  • Data centers already account for around one percent of the world’s greenhouse gas emissions, according to the International Energy Agency. That is expected to rise as demand for cloud computing increases, but the companies running search have promised to reduce their net contribution to global heating. “It’s definitely not as bad as transportation or the textile industry,” Gómez-Rodríguez says. “But [AI] can be a significant contributor to emissions.”
  • The environmental footprint and energy cost of integrating AI into search could be reduced by moving data centers onto cleaner energy sources, and by redesigning neural networks to become more efficient, reducing the so-called “inference time”—the amount of computing power required for an algorithm to work on new data.
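As a sanity check, the two GPT-3 figures quoted above imply an effective grid-carbon intensity, a one-line computation using only the article's numbers:

```python
# Back-of-envelope check on the GPT-3 training estimates quoted above.
energy_mwh = 1287   # estimated training energy, MWh
emissions_t = 550   # estimated emissions, tonnes of CO2-equivalent

kg_per_kwh = (emissions_t * 1000) / (energy_mwh * 1000)
print(f"Implied carbon intensity: {kg_per_kwh:.2f} kg CO2e per kWh")
# ~0.43 kg CO2e/kWh, plausible for a fossil-heavy electricity mix.
```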
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical Al development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”