
Home/ Instructional & Media Services at Dickinson College/ Group items tagged wikipedia


Ed Webb

Wired Campus: U. of Richmond Creates a Wikipedia for Undergraduate Scholars -... - 0 views

  • The current model for teaching and learning is based on a relative scarcity of research and writing, not an excess. With that in mind, Mr. Torget and several others have created a Web site called History Engine to help students around the country work together on a shared tool to make sense of history documents online. Students generate brief essays on American history, and the History Engine aggregates the essays and makes them navigable by tags. Call it Wikipedia for students. Except better. First of all, its content is moderated by professors. Second, while Wikipedia still presents information two-dimensionally, History Engine employs mapping technology to organize scholarship by time period, geographic location, and themes.
  • “The challenge of a digital age is that that writing assignment hasn’t changed since the age of the typewriter,” Mr. Torget said. “The digital medium requires us to rethink how we make those assignments.”
Ed Webb

Keep the 'Research,' Ditch the 'Paper' - Commentary - The Chronicle of Higher Education - 1 views

  • we need to construct meaningful opportunities for students to actually engage in research—to become modest but real contributors to the research on an actual question. When students write up the work they’ve actually performed, they create data and potential contributions to knowledge, contributions that can be digitally published or shared with a target community
  • Schuman’s critique of traditional writing instruction is sadly accurate. The skill it teaches most students is little more than a smash-and-grab assault on the secondary literature. Students open a window onto a search engine or database. They punch through to the first half-dozen items. Snatching random gems that seem to support their preconceived thesis, they change a few words, cobble it all together with class notes in the form of an argument, and call it "proving a thesis."
  • What happens when a newly employed person tries to pass off quote-farmed drivel as professional communication?
  • Generally these papers are just pumped-up versions of the five-paragraph essay, with filler added. Thesis-driven, argumentative, like the newspaper editorials the genre is based on, this "researched writing" promises to solve big questions with little effort: "Reproductive rights resolved in five pages!"
  • Actual writing related to research is modest, qualified, and hesitant
  • our actual model involves elaborately respectful conversation, demonstrating sensitivity to the most nuanced claims of previous researchers
  • Academic, legal, medical, and business writing has easily understandable conventions. We responsibly survey the existing literature, formally or informally creating an annotated bibliography. We write a review of the literature, identifying a "blank" spot ignored by other scholars, or a "bright" spot where we see conflicting evidence. We describe the nature of our research in terms of a contribution to the blank or bright spot in that conversation. We conclude by pointing to further questions.
  • Millions of pieces of research writing that aren’t essays usefully circulate in the profession through any number of sharing technologies, including presentations and posters; grant and experiment proposals; curated, arranged, translated, or visualized data; knowledgeable dialogue in online media with working professionals; independent journalism, arts reviews, and Wikipedia entries; documentary pitches, scripts and storyboards; and informative websites.
  • real researchers don’t write a word unless they have something to contribute. We should teach our students to do the same
Ed Webb

It's just not working out the way we thought it would « Lisa's (Online) Teach... - 0 views

  • Gradually, closed spaces (Facebook, Ning, even Google if you understand what they’re up to) have become the norm, as have monetized sites. The spaces that were free are no longer free, although many of us freely contributed our own work to these sites, providing the basis of their popularity in the first place. Crowdsourcing, celebrated in story and song, has become the exploitation of the work of others in order to make money or provide cheap customer service. The use of personal information for marketing purposes is widespread, and creative people are leaving the platforms that brought everyone into the agora in the first place. Scholars at first enthusiastic about the future now see it as a lonely place. And I see conversations where people who care deeply about the web, education for the 21st century, and learning theories are beginning to back away from proselytizing about academic openness.
  • it’s about users becoming the products in the marketplace and the amusements in the panopticon
  • Where before it might have made sense to say we should make sure everyone is web literate, now such literacy extends beyond critical thinking about websites into a deeper understanding of what using the web means for individual privacy and independence. This time, the enemies of openness and freedom won’t need to argue their philosophical reasons – they’ll argue that they’re protecting people. And the trouble is, they may be right.
  • We need to be the antidote for blind adoption
Ed Webb

Oxford University Press launches the Anti-Google - 0 views

  • The Anti-Google: Oxford Bibliographies Online (OBO)
  • essentially a straightforward, hyperlinked collection of professionally-produced, peer-reviewed bibliographies in different subject areas—sort of a giant, interactive syllabus put together by OUP and teams of scholars in different disciplines
  • "You can't come up with a search filter that solves the problem of information overload," Zucca told Ars. OUP is betting that the solution to the problem lies in content, which is its area of expertise, and not in technology, which is Google's and Microsoft's.
  • at least users can see exactly how the sausage is made. Contrast this to Google or Bing, where the search algorithm that produces results is a closely guarded secret.
  • The word that Zucca used a number of times in our chat was "authority," and OUP is betting that individual and institutional users will value the authority enough that they'll be willing to pay for access to the service
  • This paywall is the only feature of OBO that seems truly unfortunate, given that the competition (search and Wikipedia) is free. High school kids and motivated amateurs will be left slumming it with whatever they can get from the public Internet, and OBO's potential reach and impact will be severely limited.
Ed Webb

The Wired Campus - A Year Later, a Texas University Says Giving Students iPhones Is an ... - 1 views

  • Abilene Christian University says handing out iPhones to its entire first-year class in 2008 has improved interaction between students and faculty members.
  • Does positive feeling mean better teaching and learning? Mr. Schubert adds that it's too early to collect enough data to understand how giving out iPhones improves education. Student testimonials in the report, however, highlight easier access to professors. One savvy student says having an iPhone means he's less confused in class. "My professor will ask a question about something and I don't know what it is, but right here on my phone, with just one touch, I have Dictionary.com, I have a Wikipedia app—I can look it up," said Tyler Sutphen, a marketing major. "I know what they're talking about, because it's right there."
Ed Webb

The Internet Intellectual - 0 views

  • Even Thomas Friedman would be aghast at some of Jarvis’s cheesy sound-bites
  • What does that actually mean?
  • In Jarvis’s universe, all the good things are technologically determined and all the bad things are socially determined
  • Jarvis never broaches such subtleties. His is a simple world:
  • why not consider the possibility that the incumbents may be using the same tools, Jarvis’s revered technologies, to tell us what to think, and far more effectively than before? Internet shelf space may be infinite, but human attention is not. Cheap self-publishing marginally improves one’s chances of being heard, but nothing about this new decentralized public sphere suggests that old power structures—provided they are smart and willing to survive—will not be able to use it to their benefit
  • Jarvis 1.0 was all about celebrating Google, but Jarvis 2.0 has new friends in Facebook and Twitter. (An Internet intellectual always keeps up.) Jarvis 1.0 wrote that “Google’s moral of universal empowerment is the sometimes-forgotten ideal of democracy,” and argued that the company “provides the infrastructure for a culture of choice,” while its “algorithms and its business model work because Google trusts us.” Jarvis 2.0 claims that “by sharing publicly, we people challenge Google’s machines and reclaim our authority on the internet from algorithms.”
  • Jarvis has another reference point, another sacred telos: the equally grand and equally inexorable march of the Internet, which in his view is a technology that generates its own norms, its own laws, its own people. (He likes to speak of “us, people of the Net.”) For the Technology Man, the Internet is the glue that holds our globalized world together and the divine numen that fills it with meaning. If you thought that ethnocentrism was bad, brace yourself for Internet-centrism
  • Why worry about the growing dominance of such digitalism? The reason should be obvious. As Internet-driven explanations crowd out everything else, our entire vocabulary is being re-defined. Collaboration is re-interpreted through the prism of Wikipedia; communication, through the prism of social networking; democratic participation, through the prism of crowd-sourcing; cosmopolitanism, through the prism of reading the blogs of exotic “others”; political upheaval, through the prism of the so-called Twitter revolutions. Even the persecution of dissidents is now seen as an extension of online censorship (rather than the other way around). A recent headline on the blog of the Harvard-based Herdict project—it tracks Internet censorship worldwide—announces that, in Mexico and Morocco, “Online Censorship Goes Offline.” Were activists and dissidents never harassed before Twitter and Facebook?
  • Most Internet intellectuals simply choose a random point in the distant past—the honor almost invariably goes to the invention of the printing press—and proceed to draw a straight line from Gutenberg to Zuckerberg, as if the Counter-Reformation, the Thirty Years’ War, the Reign of Terror, two world wars—and everything else—never happened.
  • even their iPad is of interest to them only as a “platform”—another buzzword of the incurious—and not as an artifact that is assembled in dubious conditions somewhere in East Asian workshops so as to produce cultic devotion in its more fortunate owners. This lack of elementary intellectual curiosity is the defining feature of the Internet intellectual. History, after all, is about details, but no Internet intellectual wants to be accused of thinking small. And so they think big—sloppily, ignorantly, pretentiously, and without the slightest appreciation of the difference between critical thought and market propaganda.
  • In which Evgeny rips Jeff a new one
Ed Webb

ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender - 0 views

  • Please do not conflate word form and meaning. Mind your own credulity.
  • We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
  • A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
  • “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”
  • chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society
  • She began learning from, then amplifying, Black women’s voices critiquing AI, including those of Joy Buolamwini (she founded the Algorithmic Justice League while at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also started publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male field, to get yourself branded as a scold. The idea of intelligence has a white-supremacist history. And besides, “intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
  • Tech-makers assuming their reality accurately represents the world create many different kinds of problems. The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What’s more, we all know what’s out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
  • One fired Google employee told me succeeding in tech depends on “keeping your mouth shut to everything that’s disturbing.” Otherwise, you’re a problem. “Almost every senior woman in computer science has that rep. Now when I hear, ‘Oh, she’s a problem,’ I’m like, Oh, so you’re saying she’s a senior woman?”
  • “We haven’t learned to stop imagining the mind behind it.”
  • In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.
  • “On the Dangers of Stochastic Parrots” is not a write-up of original research. It’s a synthesis of LLM critiques that Bender and others have made: of the biases encoded in the models; the near impossibility of studying what’s in the training data, given the fact they can contain billions of words; the costs to the climate; the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender’s former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to “index an event and a group of authors who got erased.” Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation and brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
  • Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse. “We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.” On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.”
  • “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
  • The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin.
  • “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”
  • what’s tenure for, after all?
  • LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
  • The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.”
  • “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”