Instead of burrowing into a silo or vertical on a single webpage, as our Gen Z digital natives do, fact checkers tended to read laterally, a strategy that sent them zipping off a site to open new tabs across the horizontal axis of their screens. And their first stop was often the site we tell kids they should avoid: Wikipedia. But fact checkers used Wikipedia differently from the way the rest of us often do, skipping the main article to dive straight into the references, where more established sources can be found. They knew that the more controversial the topic, the more likely the entry was to be "protected" through the various locks Wikipedia applies to prevent changes by anyone except high-ranking editors. Further, the fact checkers knew how to use a Wikipedia article's "Talk" page, the tab hiding in plain sight right next to the article—a feature few students even know about, much less consult. It's on the "Talk" page that an article's claims are established, disputed, and, when the evidence merits it, altered.
Fluent in Social Media, Failing in Fake News: Generation Z, Online - Pacific Standard
In the short term, we can do a few useful things. First, let's make sure that kids (and their teachers) possess some basic skills for evaluating digital claims. Some quick advice: When you land on an unfamiliar website, don't get taken in by official-looking logos or snazzy graphics. Open a new tab (better yet, several) and Google the group that's trying to persuade you. Second, don't click on the first result. Take a tip from fact checkers and practice click restraint: Scan the snippets (the brief sentences accompanying each search result) and make a smart first choice.
What if the answer isn't more media literacy, but a different kind of media literacy?
The Digital-Humanities Bust - The Chronicle of Higher Education
To ask about the field is really to ask how or what DH knows, and what it allows us to know. The answer, it turns out, is not much. Let's begin with the tension between promise and product. Any neophyte to digital-humanities literature notices its extravagant rhetoric of exuberance. The field may be "transforming long-established disciplines like history or literary criticism," according to a Stanford Literary Lab email likely unread or disregarded by a majority in those disciplines. Laura Mandell, director of the Initiative for Digital Humanities, Media, and Culture at Texas A&M University, promises to break "the book format" without explaining why one might want to — even as books, against all predictions, doggedly persist, filling the airplane-hangar-sized warehouses of Amazon.com.
A similar shortfall is evident when digital humanists turn to straight literary criticism. "Distant reading," a method of studying novels without reading them, uses computer scanning to search for "units that are much smaller or much larger than the text" (in Franco Moretti's words) — tropes, at one end, genres or systems, at the other. One of the most intelligent examples of the technique is Richard Jean So and Andrew Piper's 2016 Atlantic article, "How Has the MFA Changed the American Novel?" (based on their research for articles published in academic journals). The authors set out to quantify "how similar authors were across a range of literary aspects, including diction, style, theme, setting." But they never say exactly what the computers were asked to quantify. In the real world of novels, after all, style, theme, and character are often achieved relationally — that is, without leaving a trace in words or phrases recognizable as patterns by a program.
Perhaps toward that end, So, an assistant professor of English at the University of Chicago, wrote an elaborate article in Critical Inquiry with Hoyt Long (also of Chicago) on the uses of machine learning and "literary pattern recognition" in the study of modernist haiku poetry. Here they actually do specify what they instructed the programmers to look for, and what the computers actually counted. But the explanation introduces new problems that somehow escape the authors. By their own admission, some of their interpretations derive from what they knew "in advance"; hence the findings do not need the data and, as a result, are somewhat pointless. After 30 pages of highly technical discussion, the payoff is to tell us that haiku have formal features different from those of other short poems. We already knew that.