Well, seeing as I started us off last week, I may as well do it again.
Given the growing influence of the Digital Humanities and its emphasis on multimodal and/or "buildable" projects - for example, Mark Sample advocates doing away with essays entirely (404) - it makes sense that a new method of evaluation for this type of work is required. Another driving factor in this need for a new system is that the existing system is far from perfect; Jonathan Dresner outlines some of the problems with both absolute and relative systems of grading - grade decline and grade inflation, respectively ("Towards a Unified Theory of Grading"). The question then becomes how to create this new framework for evaluation. For the sake of simplicity, the framework I will discuss focuses on the evaluation of students' work, rather than the recognition of scholarly work by faculties and institutions. Based on reading a variety of articles on this subject, I feel that a combination of elements from the conventional system of grading within a framework of peer review provides the most effective means of evaluation. Shannon Christine Mattern openly acknowledges that her own framework is a consolidation of a variety of different models ("Evaluating Multimodal Work, Revisited"), and I am using a similar method for my own model, combining her list of considerations with Cathy Davidson's outline of a peer-review or crowdsourced framework ("How to Crowdsource Grading").
Mattern provides a series of criteria and considerations for evaluating multimodal scholarship that are particularly effective for evaluating work from the position of an instructor. She separates her criteria into sections based on concept, design and technique, documentation, academic integrity and openness, and review and critique (EMWR). Several of her considerations for evaluation stem from pre-existing grading frameworks, such as the presence of a thesis or argument, evidence of research, whether the content of the project is supplemented with documented supporting evidence, and whether it can be said to adhere to a certain level of academic integrity (EMWR). One important question that is often asked in my field in relation to specific research is "so what?" If the project in question can self-reflexively approach and answer this question, then it has already proven its worth to the discipline. In fact, it is a question I have seen asked directly of presenters at Film Studies conferences. All of these elements are necessary for assessing the efficacy of any given project, regardless of its format or medium.
At the same time, acknowledgement must be given to the specifics of digital or multimedia work, whether it is a website, a visualization, an interactive game, a tool, or any other form of project that could fall under the purview of the digital humanities. It is here that some of her other criteria can be used effectively, such as the effectiveness of the form or interface in use, the consideration of who the audience of the project may be, its accessibility, and whether it has been reviewed, either by experts or by being presented at a conference (EMWR). There is one specific criterion that I think is particularly important: she asks, "need this have been a multimedia project, or could it just as easily have been executed on paper?" (EMWR) This gets to the heart of the intention behind a specific project, and of how effectively the technology behind it is being used. For example, in the interactive documentary Welcome to Pine Point, it is a question that the creators engage with directly when they admit that "it could have been a book, but it probably makes more sense that it became this" (The Goggles, "About this Project").
The second part of a successful framework for evaluation lies in the use of a peer-review system. This type of framework certainly has its advantages; when used effectively, it becomes a means of teaching students to think critically (Mattern, EMWR) and to engage with each other's work directly rather than just with the assigned readings. One method of peer review that I have experienced as a student came from a course in which our final project underwent a "draft workshop" phase. All of us submitted our first drafts to the other students in the course and then met in a workshop environment to discuss the strengths and weaknesses of each other's projects. Because this occurred before the completion of the assignment, we were able to acknowledge the comments of our peers and revise the final version accordingly. Structuring the peer-review portion of the system as an ongoing and collaborative process allows students not only to improve their work, but also to surface problems or issues that even the instructor might not notice.
While scholars like Cathy Davidson advocate for a purely peer-review based system (HCG), I feel that using a combination of adapted "conventional" criteria as outlined by Shannon Christine Mattern along with a draft or testing-phase peer review is a much more effective means of evaluating digital humanities scholarship. One important factor that is always present in any evaluative framework is, as the Modern Language Association points out, the need for adaptability; there is no such thing as a universal framework ("Guidelines for Evaluating Work in Digital Humanities and Digital Media"). The key to a successful method is openness and transparency between the students and the instructor. Ensuring that the students understand the criteria by which they are evaluated - both by the instructor and by their peers - can prevent issues or complaints that might otherwise arise, which is something that Davidson mentions directly in the defense of her crowdsourced system (HCG). While I acknowledge that the framework I have outlined may not be perfect, I feel it is a more effective means of evaluating digital humanities scholarship than the existing system allows.
Works Cited:
Davidson, Cathy. "How to Crowdsource Grading." HASTAC, July 26, 2009. http://www.hastac.org/blogs/cathy-davidson/how-crowdsource-grading
Dresner, Jonathan. "Towards a Unified Theory of Grading." Dresner World History, n.d. http://dresnerworld.edublogs.org/about/towards-a-unified-theory-of-grading/
The Goggles. "About this Project." Welcome to Pine Point. NFB Interactive, 2011. http://pinepoint.nfb.ca/#/pinepoint
Mattern, Shannon Christine. "Evaluating Multimodal Work, Revisited." Words in Space, August 28, 2012. http://www.wordsinspace.net/wordpress/2012/08/28/evaluating-multimodal-work-revisited/
Sample, Mark L. "What's Wrong with Writing Essays." Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. 404-405.
The ethos of the contemporary Digital Humanities (DH) community is one that collectively values "collaboration, openness, nonhierarchical relations, and agility" (Kirschenbaum 2011:N.P.). It is also a community whose primary aim is to foster innovation, advance knowledge, and serve the public by creating tools and information that are openly accessible (Spiro 2011:N.P.). Given that some argue DH has the "potential to reshape fundamental aspects of academic practice" (Gold 2011:N.P.), questions arise around how scholarly projects in the digital humanities should be evaluated.
Evaluative criteria for scholarly digital humanities projects are essential if DH seeks to receive recognition and approval from the larger academic community. Considering the diversity of DH projects, a one-size-fits-all approach to evaluation would be inadequate. Nevertheless, commonalities for evaluation among variegated DH practices and projects are necessary in spite of the fact that such pursuits are often complex and context dependent. Accordingly, a proposed series of evaluative criteria is explored in figure 1.1. These criteria reflect a balance between traditional scholarly evaluation and prototypical digital humanities projects. Conventional criteria of assessment are a useful starting point, as DH has been described as a "jumping-off point for the building of a scholarly identity" (Waltzer).
The implications of these evaluative criteria are that contemporary academic institutions will need to place greater emphasis on computing strategies and techniques. Within the field of Sociology, computing does not hold a prominent place as a methodological research tool; instead, most researchers in the field rely on interviews, participation, observation, etc. Accordingly, the field would need to embrace a hybrid approach (e.g. social computing) in order to reflect current scholarly projects and technologies within DH. Finally, given the complexity and diversity of DH projects, current standardized measures and practices within academia may need to be restructured or abandoned altogether (see Sample 2011).
Figure 1.1: Proposed evaluative criteria
* Do the developers of a project connect theory and praxis?
* Does the project adhere to a coherent and logical argument that is supported by empirical evidence (Rallis and Rossman 2012)?
* Does the project succeed in addressing its stated goal?
* How are practices recorded and described throughout the research process?
* To what extent does the project adhere to practices within the DH community?
* How are the evaluators of scholarly projects reviewed?
* What kinds of ethical practices drive the research for the project?
* What role did reflexivity play within the process of developing the project?
* Does the project provide the "greatest good for the greatest amount of people" (ibid:74)?
* Who does the project benefit and what is its contribution to the DH community?
* What are its anticipated effects and how did it meet this projection?
* In what ways is this project interdisciplinary (Manoff 2004:22) and how can it be transferred for use by other academic disciplines?
* How is the developer of the project engaged with the community of practice and the community of discourse?
* How is the project open and accessible for academic and public review?
* How does this tool help students "critically produce, consume, and assess information during their college years and beyond" (Waltzer 2011)?
* Who is being represented in the work of the project?
* Does the final project represent all individuals contributing to the production of knowledge fairly and accurately?
Works Cited:
Gold, Matthew. 2011. "The Digital Humanities Moment" in Debates in the Digital Humanities: xi.
Kirschenbaum, Matthew. 2011. "What is Digital Humanities and What's It Doing in English Departments?" in Debates in the Digital Humanities: 3-11.
Manoff, Marlene. 2004. "Theories of the Archive from Across the Disciplines." Libraries and the Academy 4.1: 9-25.
Rallis, Sharon and Gretchen Rossman. 2012. The Research Journey: Introduction to Inquiry. The Guilford Press.
Sample, Mark. 2011. "What's Wrong with Writing Essays" in Debates in the Digital Humanities: 406-408.
Spiro, Lisa. 2011. "This is Why We Fight: Defining the Values of the Digital Humanities" in Debates in the Digital Humanities: 16-35.
Waltzer, Luke. 2011. "Digital Humanities and the 'Ugly Stepchildren' of American Higher Education" in Debates in the Digital Humanities: 335-349.
Mark Sample notes that "The student essay is a twitch in a void, a compressed outpouring of energy…that means nothing to no one," and, quoting Randy Bass, "…that nowhere but school would we ask somebody to write something that nobody will ever read" (Gold 2012). Now, this is to preface his comments about making student writing public, and it echoes a sentiment that Shawn Graham brought up in one of his talks with our group. I bring it up here as a way to preface my own comments about evaluative criteria in DH - in reviewing criteria discussed by Shannon Christine Mattern, Todd Presner, Geoffrey Rockwell, and the MLA, some common elements arise, and some of these I'll use in suggesting criteria.
I think it fundamentally important that, alongside the evaluation of media, context, and linking (which I'll touch on), sound scholarship, strength and clarity of argument, and academic integrity be cornerstones of evaluative criteria. Of course, if the nature of the scholarly enterprise in question is not the traditional written document, the criteria for its evaluation will be novel. The form itself will have to be subject to evaluation - does the form suit the concept or detract from it somehow, is technology utilized effectively or gratuitously, and so on. Conventional criteria of assessment remain useful to the extent that they apply. Consider that some assessment criteria of written assignments serve to train a student to adhere to a set of standards and formats - somewhat arbitrary - which arguably do one of two things: (i) allow for ease of reading and simplicity in evaluating sources; and (ii) prepare a student for later exercises in academic writing - grant applications, publication, and the like. For as long as that model persists, the conventional criteria of assessment will remain useful. Likewise, as models of publication change (becoming online, open source, collaborative, multimodal), so too must the criteria of assessment change, if only to remain relevant. It would, I think, be equally absurd to evaluate a conventional essay's capacity for linking out and being linked to as it would be to evaluate an online multimedia project on the merits of its page numbers or 12-point Times New Roman.
I see few implications of the criteria I suggest above for my own primary field of study, anthropology - mainly because I propose nothing radical in the above paragraphs. Rather, I suggest that conventional evaluative tools remain in use so long as they suit the medium, and that appropriately suitable means be employed as media change. Academic rigour, for instance, is an underlying quality of a scholarly undertaking - calling for it is not something that I think should be subject to change. In anthropology, still so tied up in ethnographic projects, participant observation, and the lived experience of those with whom the researcher works, there is always the tacit understanding that research will be bound up in subjective experience - criteria are always changing, and though there exist methodological underpinnings for how to conduct oneself in the field, the end result might be so fundamentally different from what the researcher set out to accomplish that there must indeed be a different set of criteria applied to the evaluation of this work.
Of course, ethics and academic integrity remain part of these criteria, even when the media change from what James Clifford calls 'partial truths', these metaphorically 'fictional' ethnographies, to something altogether different (Clifford 1986; Ortner 1995).
Works Cited:
Clifford, J. 1986. "Introduction: Partial Truths," in J. Clifford and G. E. Marcus (eds.), Writing Culture: The Poetics and Politics of Ethnography, 1-26. Berkeley: University of California Press.
Gold, Matthew K., ed. Debates in the Digital Humanities. http://dhdebates.gc.cuny.edu/debates
Ortner, Sherry B. 1995. "Resistance and the Problem of Ethnographic Refusal." Comparative Studies in Society and History 37 (1): 173-193.
Before working through the readings for this week, I jotted down a handful of notes as to what I thought evaluative criteria for scholarly projects in DH would look like. Since reading, I've come to understand that in order to properly engage with the ethos of DH as we've outlined in the course, evaluating projects becomes a much more complex and multifarious activity than I'd initially envisioned. In this short assignment, I will form a pastiche of a few of the most salient - or what I believe to be the most pertinent - issues raised and shared by four texts:
1) The MLA Guidelines for Evaluating Work in Digital Humanities and Digital Media
2) Shannon Christine Mattern's "Evaluating Multimodal Work, Revisited"
3) Geoffrey Rockwell's "A Short Guide to Evaluation of Digital Work"
4) Cheryl E. Ball's "Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach"
CONSIDERATIONS FOR EVALUATION:
A) A conscious, vigorous, and reflective account of how the creator understands the medium as carrying out the academic work
Unlike journal articles and traditional conference presentations, which adhere to well-established, commonly understood genres, the creation of digital, multimodal academic work can evoke confusion or ambiguity. The creator of these kinds of projects, therefore, needs to explain the design and presentation aspects of their work - or, more specifically, how these projects develop and present the "conceptual core" (Kuhn's term) necessary in all academic work (Mattern, 2012; Ball, 2012). I'm thinking, for example, of the case where a novel is submitted as a thesis in an MFA program. A student does not merely submit the novel, but accompanies it with another text which discusses how that novel embodies concepts, theories, objectives, etc. (In some cases, however, the author/creator might wish for the design/form to speak for itself, and evaluators can evaluate it according to its effectiveness at doing so.)
In cases where clarity might wane, digital authors/creators, like MFA students, need to reflect on why the medium used was 'chosen' over traditional (print-based) formats, and how that medium conveys the academic objectives which the study strives to achieve. Perhaps it's a case where the medium is interactive and the 'interactive' element is not merely an interesting feature but central to the conceptual thinking the project hopes to evoke. If the project is attempting to shift perspectives or conceptual frameworks, and is using the medium to do so, an explanation of these objectives should be made available to both evaluators and the public at large. Evaluators can, using the medium and the explanation, then assess its effectiveness or lack thereof.
I believe that in some cases, where the medium presents a genre so foreign to the standard research projects in the field (I'm thinking of projects which distort linearity, or ones which present options for initial engagement), an author would be wise to include a type of instruction manual as to how the work is to be read/interpreted. The goal - even if the work strives to experiment with the paradigmatic parameters - should not be to alienate fellow members of the field. Ultimately, the creator/author should make a rigorous effort to show how the digital work fits into, defines, shapes, and advances the field(s) with which it seeks to communicate (MLA Guidelines).
B) Detailed accounting of the ways the community, or authoritative bodies, have been part of the project's production.
Both Mattern (2012) and Rockwell (2012) stress that academic work in DH needs to make a concerted effort to validate itself, and, as such, evaluative criteria need to take this very 'validation' into account. These topics of authority, author-ity, and information value are ones that we've been discussing throughout our Monday sessions. If everything can be made openly accessible to all, if everyone can contribute, how can we be certain that our information/knowledge is valuable? Is the very definition of value not contingent on scarcity, uniqueness, and/or, in Marxist terms, time-invested labour? If we are to take seriously Foucault's notion (and I'm not entirely sure that we should) that "the author does not precede the works, he is a certain functional principle by which, in our culture, one limits; in short, by which one impedes the free circulation, the free manipulation, the free composition, decomposition, and recomposition of fiction" (p. 899) - and we can replace 'fiction' here with 'text' (used in the most encompassing sense: digital media, print, images, etc.) without altering Foucault's point to any great degree - where does this leave the authority of the author? Where does this leave the need for validation?
Mattern and Rockwell don't address this point, but look at the issue (I'd argue) in terms more associated with praxis as it pertains to the current state of academia. They suggest that the creator/author needs to…
* be accurate and concise with all citations, not only with respect to the conceptual work but also the design and construction of its materialized form, as well as the 'human resources' (Mattern, 2012) that have contributed (credit all collaborators)
* provide links to cited sources/contributors' work as necessary
* make sure the work is accessible to everyone in the field as well as (potentially) the public at large
* make sure it adheres to spec. standards (as applicable), and that it is, or can be, properly maintained over time
* make sure the work has been effectively reviewed by 'experts' (Rockwell, 2012) or those with authority in the field (one could maybe incorporate some of Fitzpatrick's ideas of peer-to-peer reviewing as well)
* provide a history of the project (Did it receive funding? Was it presented at a conference? Has it also been made available in print form?) (Rockwell, 2012)
As should be obvious, online digital publishing, especially online multimodal publishing, needs to be more openly self-reflexive and explicitly explanatory than does traditional scholarly work.
C) Evaluators need to be flexible
Ball (2012) stated this perfectly, so I will simply quote her here:
"Readers may be expecting me to provide a transferable rubric for reading, analyzing, assessing, grading, or evaluating scholarly multimedia-particularly a rubric that could be useful for tenure and promotion purposes. I hope readers keep in mind that each of these interpretive and evaluative verbs (reading, grading, assessing, evaluating) indicates a different audience-randomly and overlapping: pleasure readers, students, scholars, hiring committees, tenure committees, teachers, and authors-each of which has different needs from, and comes to the reading experience with different value expectations of, such a piece of scholarship."
Seeing as I'm already well over the word limit, I just want to state that I can see many innovative and exciting ways for scholars in all fields to begin experimenting with new forms of presenting serious, critical, academic work. All fields, on local, institutional, national, and international levels, will have to work continually and collaboratively on establishing and enacting evaluative criteria. As the media change, so too will the modes of evaluating have to change. Could universities somehow evaluate not just publications, but the traction that publications get? Could reviewing a number of publications count as 'valuable' academic contributions? Could contributing to the platforms/media which are used to present studies also count as academic contributions? How could universities keep track of and assess all of this? Is it a pipe dream to imagine that it's possible to do so, or is it, as suggested by the very presence of these discussions, already a reality that requires urgent attention?
References:
Ball, C. E. (2012) Assessing scholarly multimedia: A rhetorical genre studies approach. Technical Communication Quarterly, 21(1).
Foucault, M. (1998). What is an author? In D. H. Richter (Ed.), The Critical Tradition: Classic Texts and Contemporary Trends. Boston: Bedford Books.
I just want to take this opportunity to address the comments made by Chris about the evaluative method I outlined in this week's assignment (yay, peer review!).
I do agree with your assessment about the "finality" of essays. I think this gets to the heart of the comparison between "conventional" media and digital ones, as we've seen outlined before by Lev Manovich in comparing narrative and database. This references the concept that the internet (and everything involved with it) is ever-changing, flexible, and adaptable. Although not all essays provide a concrete conclusion to their argument, there is certainly an emphasis placed on that as an end goal.
I also agree with you that with the "traditional" writing assignment, the likelihood of active participation has an expiry date - whereas multimodal or digital-based projects have a much more extended shelf-life, as it were. This is something that Mattern references in her criteria by questioning the project's linkability and reviewability. Trevor Owens also mentions something along these lines when he discusses his use of blogs for his courses (411). It was this idea of an ongoing peer engagement with the works in question that I was trying to define, though I admittedly didn't do it as effectively as I would have liked.
I also appreciate that in responding to my assignment you're actively engaging with what I was attempting to advocate for in terms of a peer-review framework. I realize that using the word "phase" as a descriptor for the process I was describing was ill-advised, although it was an accurate description of my own specific experience that I was discussing. I had envisioned the peer-review framework as an ongoing process that could be repeated and adapted as the assignment in question required.
Lastly, in relation to my previous experience in the drafting workshop, I do agree that the motivation to fix mistakes prior to "official" assessment was a factor for all of us involved in the process. It did also encapsulate Davidson's notion of the "redo" as you mentioned, because the workshop was able to bring out some major issues and methodological problems that would have otherwise gone unnoticed until it was too late. As a result of this, our deadline for the assignment was actually extended, to allow us to fix our projects as needed based on the input of our peers in the course. Admittedly, the assignment was a conventional written essay, but it is a process that I feel can work effectively for other types of projects.
Owens, Trevor. "The Public Course Blog: The Required Reading We Write Ourselves for the Course That Never Ends" in Debates in the Digital Humanities. Ed. Matthew K. Gold. 409-411.
I think the finality of essays is a really interesting topic, and something I think all students need to be thinking about seriously. Coming from the perspective of an English student, I would argue that about half of our program is centered around essay writing, and when we think about the incredibly short life-cycle of our essays, it comes as no surprise that we have difficulty finding jobs once we graduate. If we are not taught how to apply our work outside of the classroom, how can we ever know what we are capable of when we leave the security of our schools? Schools give us an audience (even if it is only an audience of one - the professor who grades our work), but once we graduate, it becomes very difficult to know who will have interest in reading our work, where to publish our work, and most importantly - what to do with our work. So I think that thinking critically about this topic goes beyond our understanding of how our work is different in its digital format (i.e. how essays can function as databases for topic mining or distant reading through tools, etc.) to what we can (and have to) do with this new format.
That being said, do you (or anyone else from the class) think that the finality of essays will be "fixed" in the way we will begin to read essays differently (by posting them online and allowing discussion and discourse to be created around them), or do you think that we can (or should) even change the way that they are written in the first place? Just as Mark Sample wants to do away with essays entirely, do you think that we can counter finality by re-inventing the way we communicate our work? Would an English or Film Studies student's work garner a bigger audience by vlogging their analysis? Kelly Schrum, in her article "A Tale of Two Goldfish Bowls . . . Or What's Right with Digital Storytelling," states that "Several students adapted this approach to weekly assignments, submitting vlogs in place of blog postings. The blog discussion on copyright was thoughtful and lively, but [one student]'s vlog on the topic accomplished what a text-blog could not". Tad Suiter's video discusses how vlogs can be used effectively (this link will make the video start right where he starts talking about it): http://youtu.be/rpe9c7BVPfo?t=4m19s. I was thinking about this in relation to our own blog posts here on Diigo.
As you can see, I've only responded to your first point (the finality of essays), partially because it was the topic I was the most interested in, but also because I wanted to demonstrate the problem I think online discussion is going to have (for our class and all classes in general). As much as I want to reply to everybody's assignments, the problem is that I simply don't have time - and I doubt anybody else in this class does either. I think online discussion can work very effectively, but when we all have a bunch of readings to do, assignments to mark (for those of us who are TAs) and blog posts of our own to write, it's hard to read through everyone else's assignments and respond with thoughtful critiques of them all. While I really liked Cathy Davidson's idea of crowdsourcing grading to her students, I think that assignments would need to be very short in order for us to be able to carefully read through all of them.
To really exemplify how problematic online discussion can be, I have to admit that it's probably not very likely that many (if any) will respond to this post and my question. As I mentioned, it's hard to find time to read through everyone's posts, and I think our own discussion will help us to both experience and observe these issues as busy students. Although I think online discussion is certainly the answer to eradicating the "finality" of essays, I don't think it's going to be possible within the format we're using. Maybe shorter posts would help, but I wonder if the answer is something like a vlog (as Schrum suggested)?
I think it is also important to think about whether our work will continue to be read and discussed outside of our class, and even after this semester is over. Although theoretically, posting our work online does eradicate its finality (since it has the possibility of always being read and commented on), in practice, I think that most of our work will cease to be read once the class it was written for is over. I'm certain that none of us will continue to comment on these blog posts after this semester, and this is a problem. I think that our work needs to always continue to grow and change (to mimic the nature of the medium it is posted on - the internet), and we don't yet have a framework for online assignments to allow this (or, we're just not ready for this type of never-ending discourse and continual growth).
So, I wonder what everyone thinks about these questions: Would continual online discussion work best if we had a word limit for our posts (like 150-200 words)? Do we need to adopt vlogs or something that will allow us to grab the attention of other readers/viewers more easily? Does the work for our classes need to transcend the audience of our classmates? Is it important for our work to never stop growing and changing (and thus, important to ensure that our work is never really "finalized")?
I like the idea of having some discussions (but not all) circumscribed by a word limit. You know, we should try it as an experiment. Maybe for the week after reading week, or the week after that, someone could put forth an argument related to the readings and we could debate it (reflect on it) in no more than 125-word posts (or 200, or whatever) and see what it's like. Later, in class we could have a discussion as to its benefits and drawbacks. It would be interesting to see how it works pedagogically, and the ways in which it might be applied to other learning scenarios. Maybe some other 'rules' or 'restrictions' would have to be put in place as well. Anyone game to try it?
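For what it's worth, enforcing a cap like this is mechanically trivial. Here is a minimal Python sketch of the kind of check a discussion platform could run before accepting a post; the function names and the 125-word figure are placeholders taken from the suggestion above, not features of any platform we actually use.

# Minimal sketch of a word-limit check for discussion posts. The 125-word
# cap is just the number floated above; nothing here refers to a real
# blogging platform or plugin.

def word_count(post: str) -> int:
    """Count whitespace-separated words in a post."""
    return len(post.split())

def check_word_limit(post: str, limit: int = 125) -> str:
    """Tell the author whether the draft fits under the agreed cap."""
    count = word_count(post)
    if count <= limit:
        return f"OK: {count}/{limit} words."
    return f"Over limit: {count}/{limit} words - trim {count - limit} before posting."

if __name__ == "__main__":
    draft = "I like the idea of starting some discussions with a word limit."
    print(check_word_limit(draft))  # -> OK: 12/125 words.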
I think you've raised many interesting points here, Chris, and I just have a couple thoughts to share. We can't assume everyone will be interested in our far-from-enlightening second-year paper on fill-in-the-blank. We can assume, however, that a certain body of individuals will relate to our project - namely, the rest of the students in the same class/seminar. While part (a large one) of the point of the essay-writing task (despite what many language/communication theorists would argue) is simply the individual's working-through something with language, the last link - the social action or social function - of the process is almost always lost. In all my time in undergrad I received grades and comments on papers, but never had space allocated in class to discuss my paper with my peers, to go over the comments with them and evaluate how effectively they carried out their project in comparison to mine. If I were made a prof tomorrow (god, and the sense of all things sane, forbid), I would make space for these open, critical reflections. Students could help one another gain a better understanding of the materials discussed, of the educational process in general, and, more importantly, gain insight into how it is that they are or are not fitting into what the discipline requires. Exchanges like this, which necessitate face-to-face, tangible interaction, might really help build more of a community feel in seminars (something glaringly missing from on-line courses) as well as provide opportunities for learning about oneself and one's peers.
Definitely agree with carrying out debates / a blog post with a prescribed word limit. Short, clear, and concise arguments. This would allow others to read entries in full and readily engage with others while drawing up the most pertinent information.
You definitely bring up some interesting points, Chris, but for the sake of brevity I'll just respond to those questions you outline at the end.
Like Jordon and Alessandro, I think the word limit is a great exercise. It forces us (and the hypothetical students) to really engage with the information we're trying to convey, in order to make it as concise as possible. I think having a start date for this after reading week works best, but maybe we should see what Brian thinks of it (since he is running the course, after all).
As a film student, I obviously love the idea of doing vlogs, but it seems to be one of those things that often works better in theory than in practice (at least in my experience). This is a concept that was brought up at our graduate conference last year, and the discussion around it became very heated. I guess it just boils down to how the technology itself is being used. As we discussed in class, if you're using a vlog format, you should be using the technology involved to its full potential, not just recording yourself saying the words you could just as easily use in an essay.
I think this ties into the final question I'll engage with, the notion of our work transcending the audience of our classmates. This is where using alternative mediums and formats would come in handy. I think, for the sake of the humanities, it is important that we're able to prove our importance (dare I say relevance) to those beyond the walls of the ivory tower. As I'm sure many of you have experienced, there's currently a perception that most degrees in the humanities are impractical (the dreaded question of "what will you do with THAT?"). Having a larger online presence is beginning to change this somewhat; however, we need to be willing to move beyond the physical written essay if we want the content of our research to be more widely disseminated and understood. It's much easier for a layman to engage with a vlog or even a blog than it is to engage with a 30-page article on obscure text x. I guess the question then becomes who we want our audience to be: should we be limiting or excluding certain audiences, and how should we articulate our research in order to engage with these audiences?
Alessandro, I certainly agree - I think having a week to test brief on-line discussion would be a great idea, especially if we can then discuss the benefits/drawbacks of our experience in-class as you suggested.
I also agree that we certainly can't assume everyone will be interested in our undergrad papers, and I've been thinking about what kind of platform our work could/should be published on (with this in mind). I think that what we need is a database that functions similarly to Jstor, where students who have an interest in a certain topic can search any work that has been done on it (and this would allow much more work to be available - and much more quickly - since the peer review process would not stand in the way). The homepage for this site could feature the latest essays/vlogs/etc. posted, along with a voting system (similar to Reddit's) that allows both comments and reviews to be written for each paper, as well as allowing the most voted works to appear at the top of a search. The reason I'm suggesting something like this is because you're exactly right - not everyone will be interested in what a 2nd or 3rd year student has to say, but some might, and those that are will be searching for essays related to their topic of interest anyway. I think this type of database would be the best way to ensure our essays have continued relevance long after they are graded, are open for discourse and criticism, are available for those interested in their topic rather than promoted blindly to users who have no interest in them, and can help us to establish - as an academic community - what work is relevant, useful, and well-written and what is not. (A very rough sketch of what such a platform might look like follows at the end of this post.) A platform like this could even give students with top-rated essays something to put on their CVs (which is important, because a large number of undergrad students graduate without anything published or worth mentioning on their CVs).
And Devin, not to worry - I only responded to your first point in your last post, so it's not a problem at all. If anything, I guess we're getting good experience in brief discussion and being able to focus on what's important! I'll email Professor Greenspan and see what he thinks - I hope he won't mind? I find that, as more and more (and more) essays, dissertations, and books are being published each year, it's becoming impossible for any single student to read and engage with every source related to a given topic, and I think that the same logic applies to online class discussion. It's simply impossible for any one student to read and respond to everything their other classmates post, so this allows discourse to form around what is most important without burdening us too much amidst our other responsibilities.
I also definitely agree with you - I think vlogs would be useless unless they're used the right way (which means that they have to be both entertaining and do more than they would in the form of written text). In order for us to understand how vlogs function, how to make them interesting, and how to employ their unique capabilities, I think we would need to be trained a bit to ensure that we would be using them effectively. For our purposes, that makes it really difficult because (to my knowledge) we don't have any professors who could give us a crash-course on vlogging, but at the same time, if no one is available to teach us, maybe it's best to just try it and (possibly) fail? After all, isn't DH all about trying new things and being open to failure and criticism?
To respond to your last point, I think this question definitely needs to be answered before creating anything like the platform I suggested, and I'm not quite sure where I stand on this. On the one hand, I think it has to be open to everyone, as potential future employers (who want to see examples of our work) and scholars (who could use our work) need to have access to it. At the same time, if we make access open to everybody, I feel that the comment section would succumb very quickly to the trolls from YouTube and 4chan. That being said, I think it would make the most sense to allow everyone access to the essays/multimedia work themselves, but only allow certain people the ability to contribute (i.e. the ability to vote, review, and comment). Who gets the ability to contribute? I'm not sure. I guess students and faculty from around the world could log in (the same way we log in to Jstor), but I think that would exclude a lot of college/university alumni who may want to contribute as well, so I'm not sure what sort of access system would make the most sense.
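To make the platform idea a little more concrete, here is a very rough Python sketch of what its data model and access rules might look like. Everything in it is hypothetical - the class names, the contributor roles, and the vote-based ranking are just one way of expressing "open to read, restricted to contribute, sorted by community votes," not a description of any existing system.

# Hypothetical sketch of the platform discussed above: anyone can read and
# search, only verified academic users can vote or comment, and search
# results are ordered by community votes. All names and rules are illustrative.

from dataclasses import dataclass, field
from typing import List

CONTRIBUTOR_ROLES = {"student", "faculty", "alumni"}   # who may vote/review/comment

@dataclass
class User:
    name: str
    role: str   # e.g. "student", "faculty", "alumni", or "public"

    def can_contribute(self) -> bool:
        return self.role in CONTRIBUTOR_ROLES

@dataclass
class Submission:
    title: str
    author: str
    kind: str                        # "essay", "vlog", "multimedia", ...
    topics: List[str]
    votes: int = 0
    comments: List[str] = field(default_factory=list)

def vote(user: User, submission: Submission) -> None:
    """Record an upvote, but only from users allowed to contribute."""
    if user.can_contribute():
        submission.votes += 1

def search(submissions: List[Submission], topic: str) -> List[Submission]:
    """Open to everyone, no login required; most-voted work appears first."""
    hits = [s for s in submissions if topic in s.topics]
    return sorted(hits, key=lambda s: s.votes, reverse=True)

# Example: a second-year essay surfaces for anyone searching its topic.
essays = [Submission("Montage in early Soviet cinema", "2nd-year student",
                     "essay", ["film", "montage"])]
vote(User("Prof X", "faculty"), essays[0])
vote(User("Anonymous reader", "public"), essays[0])    # ignored: no contributor role
print([s.title for s in search(essays, "film")], essays[0].votes)

The design choice worth noticing is that reading and searching never check a role, while voting and commenting do - which is exactly the split between open access and restricted contribution discussed above.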
I think some really interesting ideas have been raised here, and I certainly agree that it would be a worthwhile experiment to try implementing a word limit on our blog posts. It would, as you all have said, make it easier for us, as busy students, to engage with each other in an online discussion, as concise, straight-to-the-point posts are much more user-friendly. (I say this quite confidently after having spent quite a bit of time trying to catch up on everything I've missed in this discussion!) Beyond that, though, this experiment could help us address a few issues that frequently come up in our class discussions. We have talked about Twitter quite a bit, and how useful the 140-character limit can be in training people to clearly and efficiently communicate what they have to say; limiting our writing in this way could have the pedagogical benefit of refining our abilities to communicate through writing. Just to be clear, I am not suggesting that we go so far as to limit our posts to 140 characters - I simply wanted to highlight that a lot of our class discussion regarding Twitter backs up the arguments posted here.
We have also acknowledged a few times now in class that humanities students frequently struggle to talk about their own work in a concise, accurate, and interesting way. I think that experience with short, online posts like those that Chris and Alessandro have described would really help to alleviate this struggle. Not only would we, as humanities students, gain experience expressing ourselves with concision and clarity, but also in learning how to present our work. In such a short piece of writing that is open to the public, a student must present their ideas in a way that is both brief and intriguing, or no one will read it (and we cycle back to the problem of audience which Sample addresses). This is a skill that would be extremely beneficial to us outside of the online discussion as well. Clearly I could use some practice in brevity, because this is all to say that I think that this experiment could not only produce more fruitful online discussion, but also have positive pedagogical effects outside of the blog.
I would also like to address the idea that our audience (particularly as undergrads) is limited to our peers. I am conflicted about this idea: I can't help but agree, yet I think that this attitude is a big problem. Why shouldn't people want to read those papers? Undergrads have good ideas that are worth sharing. These ideas, and the ways that they are expressed, could probably use some polishing, but that's true of everyone. I wonder if the idea that our undergrad papers would hold little interest for those outside of our classmates is simply a product of the system. Typically, our papers are intended for a very limited audience of one, and as a result we fail to see the value of our own work. This is why Mark Sample feels the need to "instill in [his] students the sense that what they think and what they say and what they write matters - to [him]; to them; to their classmates; and, through open access blogs and wikis, to the world." The current essay-writing system of evaluation has not instilled this sense of value in us as students, and so we fail to see why people outside of our limited peer group would be interested in our work - what a sad position to be in! Then, as we continue on in our educations, we extend this way of thinking - why should anyone outside of our field want to read our grad work? Perhaps the only reason that we think our potential audience is limited is that it has been; if we remove these limitations through the open, online posting of our work, maybe our way of thinking would change with our audience.
I just wanted to post a summary of what I submitted for last week's assignment. I believe that there are various possibilities for developing evaluative criteria for scholarly projects in the digital humanities. If we assume that one of the main criteria DH concerns itself with involves promoting the spirit of collaboration, then perhaps the collaborative participation of scholars in any and all research activities could be taken into consideration for assessment. I suggest that collaborative efforts could be archived in digital portfolios where compilations of literature reviews, seminar contributions, peer reviews, and blog postings or other publications could be kept. The portfolio could serve as a resource for doing further research, or perhaps eventually develop into a popular blog where openly accessible new ideas could be collaborated on further through digital networking. Moreover, the portfolio could be an interdisciplinary resource for researchers to develop in their collaborative programs.
Conventional criteria for DH scholarship must take into consideration that digital scholarship happens within complex networks of human production (Nowviskie, 2012). Because of the interdisciplinary nature of DH, where scholars from different disciplines might develop research essays or theses, conventional methods of assessment that focus on promotion and tenure of DH scholars alone might not always be suitable. For graduate students from different departments, any compilation of works made and ideas shared that can assist in advancing research should be considered for possible evaluative criteria. Furthermore, DH professors could contribute to the evaluation process by assisting in supervising and/or assessing research essays and theses done in home departments (where, say, a collaborative program such as ours is the case). Currently, DH seems to put more focus on research and digital projects than pedagogy and assessment, so any research initiatives by students could perhaps be given consideration for evaluation purposes.
In the area of applied linguistics, digital innovations in second language (L2) pedagogy could be promoted in L2 curriculums, and could also serve as part of the L2 evaluative criteria. In my research, I would like to promote collaborative learning so as to encourage L2 teachers to use digital texts, digital tools, and digital media while exploring the possibility of open access through discourse practices that might involve, say, a blog community for writing activities and peer review. Perhaps an interesting research study might be to investigate whether blogging and/or digital writing portfolios would be suitable for evaluating the intercultural communicative competence of L2 learners. Of course, the digital integration of learning activities would be unique to each L2 curricular context, and might include activities such as webquests, critical reflection journals, VLC projects, and perhaps telecollaborative assignments. All of these components have the capacity to be incorporated into an L2 portfolio and used as part of the L2 evaluation criteria. I am currently drafting a proposal for my ALDS academic writing course to investigate the effectiveness of using blogs in developing academic writing. Interestingly, I have just discovered that this idea has been in use since 2008 in CUNY's WAC program with its incorporation of Blogs@Baruch (Gold, 2012).
Using a portfolio for assessment is by no means the only option, but because its function can be multimodal and even multidisciplinary, it could make for a useful tool in both L2 and DH scholarly projects as part of the evaluative criteria.
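As a purely illustrative aside, the kind of digital portfolio described above could be modelled as a very simple data structure. The sketch below is a rough assumption of how it might work - the entry kinds, field names, and summary function are all hypothetical - showing how contributions of different kinds might be collected and summarized for an evaluator.

# Rough sketch of the digital portfolio described above: a running,
# multimodal record of a student's collaborative contributions that can be
# filtered and summarized for evaluation. Entry kinds and field names are
# purely illustrative.

from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class Entry:
    kind: str            # "literature review", "peer review", "blog post", ...
    title: str
    added: date
    collaborators: List[str] = field(default_factory=list)

@dataclass
class Portfolio:
    owner: str
    entries: List[Entry] = field(default_factory=list)

    def add(self, entry: Entry) -> None:
        self.entries.append(entry)

    def summary(self) -> Dict[str, int]:
        """Count entries by kind - one possible input to an evaluation."""
        counts: Dict[str, int] = {}
        for e in self.entries:
            counts[e.kind] = counts.get(e.kind, 0) + 1
        return counts

portfolio = Portfolio("L2 student")
portfolio.add(Entry("blog post", "Blogging and academic writing", date(2013, 2, 1)))
portfolio.add(Entry("peer review", "Review of a classmate's draft", date(2013, 2, 8),
                    collaborators=["classmate"]))
print(portfolio.summary())   # {'blog post': 1, 'peer review': 1}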
If we've all homed in on anything from this week's readings, it's that the essay is hugely problematic. Coming from English, which relies heavily on the essay format, it really is depressing to think that my work has only ever been read by one professor at a time. And while a grade was usually assigned, it's hard to know just how good or bad it was - to my one-person audience, just how influential was my piece? Did it change them in any way? It's hard to say. Either way it doesn't matter, because all those essays are just past exercises with no more life to them. This system really makes me question how relevant my studies have been. Grading is good for a sense of satisfaction, but it really doesn't say anything about how my work has affected an audience. Second opinions would be nice, even if they aren't "official" through evaluation.
The discussion so far has been leading towards some kind of forum for us to post our work. I really like this idea, because it allows us all to really get ideas out there and grouped together in a collective way. I like the idea of open access so that it isn't just limited to academia as well - after all, isn't the point to have our ideas reach a broader audience? But like you guys said, there are problems with this idea, like unwanted trolls or poor ideas. Some kind of upvote system could be helpful for a quick glimpse of how the community feels overall about a comment or project. I suppose this is where evaluative criteria would really come into play, as a mere like/dislike system really doesn't convey that much information. Maybe some kind of rating system on the effectiveness of a variety of aspects of the project could be implemented instead? Users could express whether the ideas being raised are innovative, whether these ideas were portrayed effectively, how outside sources complemented the ideas, etc. Of course there are more criteria that could be raised, but by categorizing a communal form of evaluation, users can get more of a breakdown of how the project has been perceived. To put projects, comments, and evaluations in perspective, users of such a site could say where they stand in the academic world (third-year undergrad, first-year master's, high school diploma) to help others filter through the sources of ideas on the site. Of course, verifying this kind of thing could be problematic… but I guess good or bad ideas could give them away?
As for the limited word count idea, I think it sounds really helpful for all of us. In the end I guess it will save us all time and act as an exercise in trying new things. If we're trying to break away from an essay style of writing, concise styles may be just what we need to start changing things. At the very least, this could give us help in writing abstracts - not that I'm necessarily advocating keeping to our old publishing methods. But even if we do reach the stage of an open forum like the site we're envisioning, a concise summary of a project can be helpful. This will help us to quickly get the important information out there before anyone commits reading time.
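To illustrate the kind of breakdown suggested above, here is a rough sketch of how per-criterion ratings might be averaged into a profile instead of a single like/dislike count. The criteria names are only the examples mentioned in the post, and nothing here describes a real system.

# Illustrative sketch: instead of a single up/down vote, readers rate a project
# on several aspects and everyone sees the averaged breakdown per criterion.
# The criteria listed are just the examples from the post above.

from collections import defaultdict
from typing import Dict, List

CRITERIA = ["innovative ideas", "effective presentation", "use of sources"]

def rating_profile(ratings: List[Dict[str, int]]) -> Dict[str, float]:
    """Average each criterion (on a 1-5 scale) across all submitted ratings."""
    totals: Dict[str, List[int]] = defaultdict(list)
    for rating in ratings:
        for criterion, score in rating.items():
            if criterion in CRITERIA:
                totals[criterion].append(score)
    return {c: round(sum(scores) / len(scores), 2) for c, scores in totals.items()}

# Two hypothetical readers rate the same project.
ratings = [
    {"innovative ideas": 5, "effective presentation": 3, "use of sources": 4},
    {"innovative ideas": 4, "effective presentation": 4, "use of sources": 2},
]
print(rating_profile(ratings))
# -> {'innovative ideas': 4.5, 'effective presentation': 3.5, 'use of sources': 3.0}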
As I proposed in class, an alternative way to discuss ideas and assignments is to exchange assignments and have another student present another person's work to the class. This would limit the amount of reading we would have to do while still giving us a chance to receive critical peer review. The one disadvantage of this idea is not having the opportunity to present our own work to the class as we have for past assignments. There is value in being able to present your own ideas to the class and receive feedback. However, as we have seen before, with limited time in seminar this does not allow enough time for ample peer review of our work. In this new form of presenting, we get a chance to have one person besides the professor provide critical feedback. To get a better sense of the opinion of the class, I was wondering which students would prefer to present their own work over having someone else present their work to the class?
I'm down for giving it a shot if other people are as well. It would be interesting to hear how someone else interpreted the project we had attempted to explain, and in turn it could lead to deeper discussions.
I think it's a good idea too, however I'm not sure how we would implement it this late in the semester. It might be a bit tight to do this kind of peer-review presentation for the visual analysis assignment (unless we postpone the presentations for another week). I think there is one last written assignment due after the group project, though? I can't remember what that involves right now. Thoughts?
Given the growing influence of the Digital Humanities and its emphasis on multimodal and/or "buildable" projects - for example, Mark Sample advocates in doing away with essays entirely (404) - it makes sense that a new method of evaluation for this type of work would be required. Another driving factor in this need for a new system is the fact that the existing system is far from perfect; Jonathan Dresner outlines some of the problems with both absolute and relative systems of grading - grade decline and grade inflation, respectively ("Towards a Unified Theory of Grading"). The question then becomes how to create this new framework for evaluation. For the sake of simplicity, the framework I will discuss will focus on the evaluation of students' work, rather than the recognition of scholarly work by faculties and institutions. Based on reading a variety of articles discussing this subject, I feel that a combination of elements from the conventional system of grading within a framework of peer-review provides the most effective means of evaluation. Shannon Christine Mattern openly acknowledges that her own framework is a consolidation of a variety of different models ("Evaluating Multimodal Work, Revisited"), and it is a similar method that I am using for my own model, in combining her list of considerations along with Cathy Davidson's outlining of a peer-review or crowdsourced framework ("How to Crowdsource Grading").
Mattern provides a series of criteria and considerations for evaluating multimodal scholarship that is a particularly effective means of evaluating work from the position of an instructor. She separates her criteria into sections based on concept, design and technique, documentation, academic integrity and openness, and review and critique (EMWR). Several of her considerations for evaluation stem from pre-existing grading frameworks, such as the presence of a thesis or argument, evidence of research, whether the content of the project is supplemented with documented supporting evidence, and whether it can be said to adhere to a certain level of academic integrity (EMWR). One important question that is often asked in my field in relation to specific research is the question "so what?" If the project in question can self-reflexively approach and answer this question, then it has already proven its worth to the discipline. In fact, it is a question I have seen directly asked to presenters at Film Studies conferences. All of these elements are necessary for the consideration of the efficacy of any given project, regardless of its format or medium.
At the same time, there must be acknowledgement given to the specifics of digital or multimedia work, whether it is a website, a visualization, an interactive game, a tool, or any other form of project that could fall under the purview of the digital humanities. It is here that some of her other criteria can be used effectively, such as the effectiveness of the form or interface in use, the consideration of who the audience of the project may be, its accessibility, and whether it has been reviewed, either by experts or by presenting it at a conference (EMWR). There is one specific criterion that I think is particularly important, when she questions "need this have been a multimedia project, or could it just as easily have been executed on paper?" (EMWR) This gets to the heart of the intention behind a specific project, and to how effectively the technology behind it is being used. For example, in the interactive documentary Welcome to Pine Point, it is a question that the creators engage with directly when they admit that "it could have been a book, but it probably makes more sense that it became this" (The Goggles, "About this Project").
The second part of a successful framework for evaluation lies in the use of a peer-review system. This type of framework certainly has its advantages; when used effectively, it becomes a means of teaching students to think critically (Mattern, EMWR) and to engage with each other's work directly rather than just with the assigned readings. One method of peer-review that I have experienced as a student came from a course in which our final project underwent a "draft workshop" phase. All of us submitted our first draft copies to the other students in the course and then met in a workshop environment to discuss the strengths and weaknesses of each other's projects. Because it occurred before the completion of the assignment, we were able to revise and acknowledge the comments of our peers in the final version. Structuring the peer-review portion of the system as an ongoing and collaborative process allows the students to not only improve their work, but also may point out problems or issues that even the instructor may not notice.
While scholars like Cathy Davidson advocate for a purely peer-review-based system (HCG), I feel that combining adapted "conventional" criteria, as outlined by Shannon Christine Mattern, with a draft or testing-phase peer review is a much more effective means of evaluating digital humanities scholarship. One factor that must be present in any evaluative framework is, as the Modern Language Association points out, adaptability; there is no such thing as a universal framework ("Guidelines for Evaluating Work in Digital Humanities and Digital Media"). The key to a successful method is openness and transparency between the students and the instructor. Ensuring that students understand the criteria by which they are evaluated - both by the instructor and by their peers - can prevent issues or complaints from arising, something Davidson mentions directly in defending her crowdsourced system (HCG). While I acknowledge that the framework I have outlined may not be perfect, I feel it is a more effective means of evaluating digital humanities scholarship than the existing system allows.
Works Cited:
Davidson, Cathy "How to Crowdsource Grading" HASTAC. July 26, 2009.
http://www.hastac.org/blogs/cathy-davidson/how-crowdsource-grading
Dresner, Jonathan "Towards a Unified Theory of Grading" Dresner World History. N.d.
http://dresnerworld.edublogs.org/about/towards-a-unified-theory-of-grading/
The Goggles, "About this Project" Welcome to Pine Point, NFB Interactive, 2011.
http://pinepoint.nfb.ca/#/pinepoint
Mattern, Shannon Christine "Evaluating Multimodal Work Revisited" Words in Space, August 28, 2012.
http://www.wordsinspace.net/wordpress/2012/08/28/evaluating-multimodal-work-revisited/
Sample, Mark L. "What's Wrong with Writing Essays." Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. 404-405.
Evaluative criteria for scholarly digital humanities projects are essential if DH seeks recognition and approval from the larger academic community. Considering the diversity of DH projects, a one-size-fits-all approach to evaluation would be inadequate. Nevertheless, common grounds for evaluation across varied DH practices and projects are necessary, even though such pursuits are often complex and context dependent. Accordingly, a proposed series of evaluative criteria is set out in figure 1.1. These criteria reflect a balance between traditional scholarly evaluation and the typical shape of digital humanities projects. Conventional criteria of assessment are a useful starting point, as DH has been described as a "jumping-off point for the building of a scholarly identity" (Waltzer).
The implication of these evaluative criteria is that contemporary academic institutions will need to place greater emphasis on computing strategies and techniques. Within Sociology, computing is not a prominent methodological research tool; most researchers in the field rely instead on interviews, participation, and observation. Accordingly, the field would need to embrace a hybrid approach (e.g. social computing) in order to reflect current scholarly projects and technologies within DH. Finally, given the complexity and diversity of DH projects, current standardized measures and practices within academia may need to be restructured or abandoned altogether (see Sample 2011).
Figure 1.1: Proposed evaluative criteria
* Do the developers of a project connect theory and praxis?
* Does the project adhere to a coherent and logical argument that is supported by empirical evidence (Rallis and Rossman 2012)?
* Does the project succeed in addressing its stated goal?
* How are practices recorded and described throughout the research process?
* To what extent does the project adhere to practices within the DH community?
* How are the evaluators of scholarly projects reviewed?
* What kinds of ethical practices drive the research for the project?
* What role did reflexivity play within the process of developing the project?
* Does the project provide the "greatest good for the greatest amount of people" (ibid:74)?
* Who does the project benefit and what is its contribution to the DH community?
* What are its anticipated effects and how did it meet this projection?
* In what ways is this project interdisciplinary (Manoff 2004:22) and how can it be transferred for use by other academic disciplines?
* How is the developer of the project engaged with the community of practice and the community of discourse?
* How is the project open and accessible for academic and public review?
* How does this tool help students "critically produce, consume, and assess information during their college years and beyond" (Waltzer 2011)?
* Who is being represented in the work of the project?
* Does the final project represent all individuals contributing to the production of knowledge fairly and accurately?
Works Cited:
Gold, Matthew. 2011. "The Digital Humanities Moment" in Debates in the Digital Humanities: xi.
Kirschenbaum, Matthew. 2011. "What Is Digital Humanities and What's It Doing in English Departments?" in Debates in the Digital Humanities: 3-11.
Manoff, Marlene. 2004. "Theories of the Archive from Across the Disciplines." Libraries and the Academy 4.1: 9-25.
Rallis, Sharon and Gretchen Rossman. 2012. The Research Journey: Introduction to Inquiry. The Guilford Press.
Sample, Mark. 2011. "What's Wrong with Writing Essays" in Debates in the Digital Humanities: 406-408.
Spiro, Lisa. 2011. "This Is Why We Fight: Defining the Values of the Digital Humanities" in Debates in the Digital Humanities: 16-35.
Waltzer, Luke. 2011. "Digital Humanities and the 'Ugly Stepchildren' of American Higher Education" in Debates in the Digital Humanities: 335-349.
Conventional criteria of assessment remain useful to the extent that they apply. Consider that some assessment criteria for written assignments serve to train a student to adhere to a set of standards and formats - somewhat arbitrary ones - which arguably do one of two things: (i) allow for ease of reading and simplicity in evaluating sources; and (ii) prepare a student for later exercises in academic writing - grant applications, publication, and the like. For as long as that model persists, the conventional criteria of assessment will remain useful. Likewise, as models of publication change (becoming online, open source, collaborative, multimodal), so too must the criteria of assessment change, if only to remain relevant. It would, I think, be as absurd to evaluate a conventional essay's capacity for linking out and being linked to as it would be to evaluate an online multimedia project on the merits of its page numbers or 12-point Times New Roman.
I see few implications of the criteria I suggest above for my own primary field of study, anthropology - mainly because I propose nothing radical in the above paragraphs. Rather, I suggest that conventional evaluative tools remain in use so long as they suit the medium, and that suitable alternatives be employed as media change. Academic rigour, for instance, is an underlying quality of any scholarly undertaking - the call for it is not something that I think should be subject to change. In anthropology, still so tied up in ethnographic projects, participant observation, and the lived experience of those with whom the researcher works, there is always a tacit understanding that research will be bound up in subjective experience. The criteria are always changing, and though there exist methodological underpinnings for how to conduct oneself in the field, the end result might be so fundamentally different from what the researcher set out to accomplish that a different set of criteria must indeed be applied to the evaluation of the work. Of course, ethics and academic integrity remain part of these criteria, even when the medium changes from what James Clifford calls 'partial truths', these metaphorically 'fictional' ethnographies, to something altogether different (Clifford 1986; Ortner 1995).
Works Cited
Clifford, J. 1986. 'Introduction: Partial Truths', in J. Clifford and G. E. Marcus (eds.), Writing Culture: The Poetics and Politics of Ethnography, 1-26. Berkeley: University of California Press.
Gold, Matthew K., ed. Debates in the Digital Humanities. http://dhdebates.gc.cuny.edu/debates
Ortner, Sherry B. 1995. 'Resistance and the Problem of Ethnographic Refusal'. Comparative Studies in Society and History 37 (1): 173-193.
1) The MLA Guidelines for Evaluating Work in Digital Humanities and Digital Media
2) Shannon Christine Mattern's "Evaluating Multimodal Work, Revisited"
3) Geoffrey Rockwell's "A Short Guide to Evaluation of Digital Work"
4) Cheryl E. Ball's "Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach"
CONSIDERATIONS FOR EVALUATION:
A) A conscious, vigorous, and reflective demonstration of how the creator understands the medium to be carrying out the academic work
Unlike journal articles and traditional conference presentations, which adhere to well-established, commonly understood genres, digital, multimodal academic work can leave readers feeling confused or uncertain. The creator of these kinds of projects therefore needs to make explicit the design and presentation aspects of their work - more specifically, how these projects develop and present the "conceptual core" (Kuhn's term) necessary in all academic work (Mattern, 2012; Ball, 2012). I'm thinking, for example, of the case where a novel is submitted as a thesis in an MFA program. A student does not merely submit the novel, but accompanies it with another text which discusses how that novel embodies concepts, theories, objectives, etc. (In some cases, however, the author/creator might wish for the design/form to speak for itself, and evaluators can assess according to its effectiveness at doing so.)
In cases where clarity might wane, digital authors/creators, like MFA students, need to reflect on why the medium was 'chosen' over traditional (print-based) formats, and how that medium conveys the academic objectives the study strives to achieve. Perhaps the medium is interactive, and the 'interactive' element is not merely an interesting feature but central to the conceptual thinking the project hopes to evoke. If the project is attempting to shift perspectives or conceptual frameworks, and is using the medium to do so, an explanation of these objectives should be made available to both evaluators and the public at large. Evaluators can then, using the medium and the explanation, assess its effectiveness or lack thereof.
I believe that in some cases, where the medium presents a genre foreign to the standard research projects of the field (I'm thinking of projects which distort linearity, or ones which present multiple options for initial engagement), an author would be wise to include a kind of instruction manual explaining how the work is to be read and interpreted. The goal - even if the work strives to experiment with paradigmatic parameters - should not be to alienate fellow members of the field. Ultimately, the creator/author should make a rigorous effort to show how the digital work fits into, defines, shapes, and advances the field(s) with which it seeks to communicate (MLA Guidelines).
B) Detailed accounting of the ways the community, or authoritative bodies, have been part of the project's production.
Both Mattern (2012) and Rockwell (2012) stress that academic work in DH needs to make a concerted effort to validate itself, and, as such, evaluative criteria need to take this very 'validation' into account. These topics of authority, author-ity, and information value are ones we've been discussing throughout our Monday sessions. If everything can be made openly accessible to all, if everyone can contribute, how can we be certain that our information/knowledge is valuable? Is the very definition of value not contingent on scarcity, uniqueness, and/or, in Marxist terms, time-invested labour? If we are to take seriously Foucault's notion (and I'm not entirely sure that we should) that "the author does not precede the works, he is a certain functional principle by which, in our culture, one limits; in short, by which one impedes the free circulation, the free manipulation, the free composition, decomposition, and recomposition of fiction" (p. 899) - and we can replace 'fiction' here with 'text', used in the most encompassing sense of digital media, print, images, etc., without greatly altering Foucault's point - where does this leave the authority of the author? Where does this leave the need for validation?
Mattern and Rockwell don't address this point, but look at the issue (I'd argue) in terms more associated with praxis as it pertains to the current state of academia. They suggest that the creator/author needs to…
* be accurate and concise with all citations not only with respect to the conceptual work but also the design and construction of its materialized form as well as the 'human resources' (Mattern, 2012) that have contributed (credit all collaborators)
* provide links to cited sources/contributors' work as necessary
* make sure the work is accessible to everyone in the field as well as (potentially) the public at large
* make sure it adheres to specified standards (as applicable), and that it is, or can be, properly maintained over time
* make sure the work has been effectively reviewed by 'experts' (Rockwell, 2012) or those with authority in the field. (One could maybe incorporate some of Fitzpatrick's ideas of peer-to-peer reviewing as well)
* provide a history of the project (Did it receive funding? Was it presented at a conference? Has it also been made available in print form?) (Rockwell, 2012)
As should be obvious, online digital publishing, especially online multimodal publishing, needs to be more openly self-reflexive and explicitly explanatory than does traditional scholarly work.
C) Evaluators need to be flexible
Ball (2012) stated this perfectly, so I will simply quote her here:
"Readers may be expecting me to provide a transferable rubric for reading, analyzing, assessing, grading, or evaluating scholarly multimedia-particularly a rubric that could be useful for tenure and promotion purposes. I hope readers keep in mind that each of these interpretive and evaluative verbs (reading, grading, assessing, evaluating) indicates a different audience-randomly and overlapping: pleasure readers, students, scholars, hiring committees, tenure committees, teachers, and authors-each of which has different needs from, and comes to the reading experience with different value expectations of, such a piece of scholarship."
Seeing as I'm already well over the word limit, I just want to state that I can see many innovative and exciting ways for scholars in all fields to begin experimenting with new forms of presenting serious, critical, academic work. All fields, at local, institutional, national, and international levels, will have to work continually and collaboratively on establishing and enacting evaluative criteria. As the media change, the modes of evaluating will have to change as well. Could universities somehow evaluate not just publications, but the traction that publications get? Could reviewing a number of publications count as a 'valuable' academic contribution? Could contributing to the platforms/media used to present studies also count as an academic contribution? How could universities keep track of and assess all of this? Is it a pipe dream to imagine that it's possible to do so, or is it, as suggested by the very presence of these discussions, already a reality that requires urgent attention?
References:
Ball, C. E. (2012) Assessing scholarly multimedia: A rhetorical genre studies approach. Technical Communication Quarterly, 21(1).
Foucault, M. (1998). What is an author? In D. H. Richter (Ed.), The Critical Tradition: Classic Texts and Contemporary Trends. New York: Bedford Books.
MLA Guidelines for Evaluating Work in Digital Humanities and Digital Media. Retrieved from http://www.mla.org/guidelines_evaluation_digital.
Rockwell, G. (2012). Short guide to the evaluation of digital work. Journal of Digital Humanities, 1 (4). Retrieved from http://journalofdigitalhumanities.org/1-4/short-guide-to-evaluation-of-digital-work-by-geoffrey-rockwell/
Mattern, S. C. (2012). "Evaluating Multimodal Work, Revisited." Retrieved from
http://www.wordsinspace.net/wordpress/2012/08/28/evaluating-multimodal-work-revisited/
I do agree with your assessment about the "finality" of essays. I think this gets to the heart of the comparison between "conventional" media and digital ones, as we've seen outlined before by Lev Manovich in comparing narrative and database. This references the concept that the internet (and everything involved with it) is ever-changing, flexible, and adaptable. Although not all essays provide a concrete conclusion to their argument, there is certainly an emphasis placed on that as an end goal.
I also agree with you that with the "traditional" writing assignment, the likelihood of active participation has an expiry date - whereas multimodal or digital-based projects have a much more extended shelf-life, as it were. This is something that Mattern references in her criteria by questioning a project's linkability and reviewability. Trevor Owens also mentions something along these lines when he discusses his use of blogs for his courses (411). It was this idea of ongoing peer engagement with the works in question that I was trying to define, though I admittedly didn't do it as effectively as I would have liked.
I also appreciate that in responding to my assignment you're actively engaging with what I was attempting to advocate for in terms of a peer-review framework. I realize that using the word "phase" as a descriptor for the process I was describing was ill-advised, although it was an accurate description of my own specific experience that I was discussing. I had envisioned the peer-review framework as an ongoing process that could be repeated and adapted as the assignment in question required.
Lastly, in relation to my previous experience in the drafting workshop, I do agree that the motivation to fix mistakes prior to "official" assessment was a factor for all of us involved in the process. It did also encapsulate Davidson's notion of the "redo" as you mentioned, because the workshop was able to bring out some major issues and methodological problems that would have otherwise gone unnoticed until it was too late. As a result of this, our deadline for the assignment was actually extended, to allow us to fix our projects as needed based on the input of our peers in the course. Admittedly, the assignment was a conventional written essay, but it is a process that I feel can work effectively for other types of projects.
Works Cited:
Davidson, Cathy. "How to Crowdsource Grading"
http://www.hastac.org/blogs/cathy-davidson/how-crowdsource-grading
Owens, Trevor. "The Public Course Blog: The Required Reading We Write Ourselves for the Course That Never Ends" in Debates in the Digital Humanities. Ed. Matthew K. Gold. 409-411.
Manovich, Lev. "Database as Symbolic Form"
http://transcriptions.english.ucsb.edu/archive/courses/warner/english197/Schedule_files/Manovich/Database_as_symbolic_form.htm
Mattern, Shannon Christine. "Evaluating Multimodal Work, Revisited"
http://www.wordsinspace.net/wordpress/2012/08/28/evaluating-multimodal-work-revisited/
So I think that thinking critically about this topic goes beyond understanding how our work is different in its digital format (i.e. how it can function as a database for topic mining or distant reading through tools, etc.) to asking what we can (and have to) do with this new format.
That being said, do you (or anyone else from the class) think that the finality of essays will be "fixed" by changing the way we read them (by posting them online and allowing discussion and discourse to be created around them), or do you think that we can (or should) change the way they are written in the first place?
Just as Mark Sample wants to do away with essays entirely, do you think that we can counter finality by re-inventing the way we communicate our work? Would an English or Film Studies student's work garner a bigger audience by vlogging their analysis? Kelly Schrum, in her article "A Tale of Two Goldfish Bowls . . . Or What's Right with Digital Storytelling" states that "Several students adapted this approach to weekly assignments, submitting vlogs in place of blog postings. The blog discussion on copyright was thoughtful and lively, but [one student]'s vlog on the topic accomplished what a text-blog could not".
Tad Suiter's video discusses how vlogs can be used effectively (this link will make the video start right where he starts talking about it) http://youtu.be/rpe9c7BVPfo?t=4m19s, and I was thinking about this in relation to our own blog posts here on Diigo.
As you can see, I've only responded to your first point (the finality of essays), partially because it was the topic I was the most interested in, but also because I wanted to demonstrate the problem I think online discussion is going to have (for our class and all classes in general).
As much as I want to reply to everybody's Assignments, the problem is that I simply don't have time - and I doubt anybody else in this class does either. I think online discussion can work very effectively, but when we all have a bunch of readings to do, assignments to mark (for those of us who are TAs) and blog posts of our own to write, it's hard to read through everyone else's assignments and respond with thoughtful critiques of them all.
While I really liked Cathy Davidson's idea of crowdsourcing grading to her students, I think that assignments would need to be very short in order for us to be able to carefully read through all of them.
To really exemplify how problematic online discussion can be, I have to admit, it's probably not very likely that many (if any) will respond to this post and my question. As I mentioned, it's hard to find time to read through everyone's posts, and I think our own discussion will help us to both experience and observe these issues for busy students. Although I think online discussion is certainly the answer to eradicating the "finality" of essays, I don't think it's going to be possible within the format we're using. Maybe shorter posts would help, but I wonder if the answer is something like a vlog (as Schrum suggested)?
I think it is also important to think about whether our work will continue to be read and discussed outside of our class, and even after this semester is over. Although theoretically, posting our work online does eradicate its finality (since it has the possibility of always being read and commented on), in practice, I think that most of our work will cease to be read once the class it was written for is over. I'm certain that none of us will continue to comment on these blog posts after this semester, and this is a problem.
I think that our work needs to always continue to grow and change (to mimic the nature of the medium it is posted on - the internet), and we don't yet have a framework for online assignments to allow this (or, we're just not ready for this type of never-ending discourse and continual growth).
So, I wonder what everyone thinks about these questions:
Would continual online discussion work best if we had a word limit for our posts (like 150-200 words)?
Do we need to adopt vlogs or something that will allow us to grab the attention of other readers/viewers more easily?
Does the work for our classes need to transcend the audience of our classmates?
Is it important for our work to never stop growing and changing (and thus, important to ensure that our work is never really "finalized")?
I think you've raised many interesting points here, Chris, and I just have a couple of thoughts to share. We can't assume everyone will be interested in our far-from-enlightening second-year paper on fill-in-the-blank. We can assume, however, that a certain body of individuals will relate to our project - namely, the rest of the students in the same class/seminar. While part (a large one) of the point of the essay-writing task (despite what many language/communication theorists would argue) is simply the individual's working-through something with language, the last link - the social action or social function - of the process is almost always lost. In all my time in undergrad I received grades and comments on papers, but never had space allocated in class to discuss my paper with my peers, to go over the comments with them and evaluate how effectively they carried out their project in comparison to mine. If I were made a prof tomorrow (god, and the sense of all things sane, forbid), I would make space for these open, critical reflections. Students could help one another gain a better understanding of the materials discussed, of the educational process in general, and, more importantly, gain insight into how it is that they are or are not fitting into what the discipline requires. Exchanges like this, which necessitate face-to-face, tangible interaction, might really help build more of a community feel in seminars (something glaringly missing from online courses) as well as provide opportunities for learning about oneself and one's peers.
Like Jordon and Alessandro, I think the word limit is a great exercise. It forces us (and the hypothetical students) to really engage with the information we're trying to convey, in order to make it as concise as possible. I think having a start date for this after reading week works best, but maybe we should see what Brian thinks of it (since he is running the course, after all).
As a film student, I obviously love the idea of doing vlogs, but it seems to be one of those things that often work better in theory than in practice (at least in my experience). This is a concept that was brought up at our graduate conference last year, and the discussion around it became very heated. I guess it just boils down to the idea of how the technology itself is being used. As we discussed in class, if you're using a vlog format, you should be using the technology involved to its full potential, not just recording yourself saying the words you could just as easily use in an essay.
I think this ties into the final question I'll engage with, the notion of our work transcending the audience of our classmates. This is where using alternative media and formats would come in handy. I think, for the sake of the humanities, it is important that we're able to prove our importance (dare I say relevance) to those beyond the walls of the ivory tower. As I'm sure many of you have experienced, there's currently a perception that most degrees in the humanities are impractical (the dreaded question of "what will you do with THAT?"). Having a larger online presence is beginning to change this somewhat; however, we need to be willing to move beyond the physical written essay if we want the content of our research to be more widely disseminated and understood. It's much easier for a layperson to engage with a vlog or even a blog than with a 30-page article on obscure text x. I guess the question then becomes who we want our audience to be: should we be limiting or excluding certain audiences, and how should we articulate our research in order to engage with those audiences?
I also agree that we certainly can't assume everyone will be interested in our undergrad papers, and I've been thinking about what kind of platform our work could/should be published on (with this in mind).
I think that what we need is a database that functions similarly to Jstor, where students who have an interest in a certain topic can search any work that has been done on it (and this would allow much more work to be available - and much more quickly - since the peer review process would not stand in the way).
The homepage for this site could feature the latest essays/vlogs/etc. posted, and a voting system (similar to Reddit's) that allows both comments and reviews to be written for each paper, as well as allowing the most voted works to appear at the top of a search.
The reason I'm suggesting something like this is that you're exactly right - not everyone will be interested in what a 2nd or 3rd year student has to say, but some might, and those that are will be searching for essays related to their topic of interest anyway. I think this type of database would be the best way to ensure our essays have continued relevance long after they are graded, are open for discourse and criticism, are available to those interested in their topic rather than promoted blindly to users who have no interest in them, and can help us to establish - as an academic community - what work is relevant, useful, and well-written and what is not.
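Just to make the idea a bit more concrete, here's a rough sketch in Python of how a voting and ranking mechanism like the one described above might work. Everything in it is hypothetical - the Submission class, the score and rank functions, and the sample entries are my own placeholders, not a design for any real platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Submission:
    """One piece of posted student work (essay, vlog, visualization, ...)."""
    title: str
    author: str
    media_type: str
    posted: datetime
    upvotes: int = 0
    downvotes: int = 0
    comments: list[str] = field(default_factory=list)

    def score(self) -> int:
        # A simple net score, in the spirit of a Reddit-style vote count.
        return self.upvotes - self.downvotes

def rank(submissions: list[Submission]) -> list[Submission]:
    # The most-voted work floats to the top of a search or the homepage,
    # with newer posts breaking ties.
    return sorted(submissions, key=lambda s: (s.score(), s.posted), reverse=True)

if __name__ == "__main__":
    demo = [
        Submission("Database as Narrative", "student_a", "essay",
                   datetime(2013, 2, 1, tzinfo=timezone.utc), upvotes=12, downvotes=3),
        Submission("Copyright Vlog", "student_b", "vlog",
                   datetime(2013, 2, 8, tzinfo=timezone.utc), upvotes=9, downvotes=1),
    ]
    for s in rank(demo):
        print(s.score(), s.title)
```

The point of the sketch is only that a net score plus a tie-breaker is enough to surface work for readers searching a topic; the harder questions (who gets to vote, how a review differs from a comment) are the ones raised in the discussion itself.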
A platform like this could even provide students with top-rated essays something to put on their CVs (which is important, because a large number of undergrad students graduate without anything published or worth mentioning on their CVs).
I'll email Professor Greenspan and see what he thinks - I hope he won't mind?
I find that, as more and more (and more) essays, dissertations and books are being published each year, it's becoming impossible for any single student to read and engage with every source related to that topic, and I think that the same logic applies for online class discussion. It's simply impossible for any one student to read and respond to everything their other classmates post, so this allows discourse to form around what is most important without burdening us too much amidst our other responsibilities.
I also definitely agree with you - I think vlogs would be useless unless they're used the right way (which means that they have to be both entertaining and do more than they would in the form of written text). In order for us to understand how vlogs function, how to make them interesting and how to employ their unique capabilities, I think we would need to be trained a bit to ensure that we would be using them effectively.
For our purposes, that makes it really difficult because (to my knowledge) we don't have any professors who could give us a crash-course on vlogging, but at the same time, if no one is available to teach us, maybe it's best to just try it and (possibly) fail?
After all, isn't DH all about trying new things and being open to failure and criticism?
To respond to your last point, I think this question definitely needs to be answered before creating anything like the platform I suggested, and I'm not quite sure where I stand on this.
On the one hand, I think it has to be open to everyone, as potential future employers (who want to see examples of our work) and scholars (who could use our work) need to have access to it. At the same time, if we make access open to everybody, I feel that the comment section would succumb very quickly to the trolls from Youtube and 4chan.
That being said, I think it would make the most sense to allow everyone access to the essays/multimedia work themselves, but only allow certain people the ability to contribute (i.e. to vote, review, and comment).
Who gets the ability to contribute? I'm not sure. I guess students and faculty from around the world could log in (the same way we log in to Jstor), but I think that would exclude a lot of college/university alumni who may want to contribute as well, so I'm not sure what sort of access system would make the most sense.
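To picture that split, here's a tiny, purely hypothetical sketch of read-for-everyone, contribute-for-verified-members permissions; the role names and helper functions are my own placeholders, and the alumni question is deliberately left open, as above:

```python
from enum import Enum, auto

class Role(Enum):
    ANONYMOUS = auto()         # anyone on the open web
    VERIFIED_STUDENT = auto()  # logs in through an institution, like Jstor access
    FACULTY = auto()
    ALUMNI = auto()            # undecided in the discussion above

# Everyone may read; only verified members may vote, review, or comment.
# ALUMNI are left out here purely for illustration, not as a recommendation.
CONTRIBUTOR_ROLES = {Role.VERIFIED_STUDENT, Role.FACULTY}

def can_read(role: Role) -> bool:
    return True  # essays and multimedia work stay open to all

def can_contribute(role: Role) -> bool:
    return role in CONTRIBUTOR_ROLES

if __name__ == "__main__":
    for role in Role:
        print(role.name, "read:", can_read(role), "contribute:", can_contribute(role))
```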
We have also acknowledged a few times now in class that humanities students frequently struggle to talk about their own work in a concise, accurate, and interesting way. I think that experience with short online posts, like those that Chris and Alessandro have described, would really help to alleviate this struggle. Not only would we, as humanities students, gain experience expressing ourselves with concision and clarity, but we would also learn how to present our work. In such a short piece of writing that is open to the public, a student must present their ideas in a way that is both brief and intriguing, or no one will read it (and we cycle back to the problem of audience which Sample addresses). This is a skill that would be extremely beneficial to us outside of the online discussion as well. Clearly I could use some practice in brevity, because this is all to say that I think this experiment could not only produce more fruitful online discussion, but also have positive pedagogical effects outside of the blog.
I would also like to address the idea that our audience (particularly as undergrads) is limited to our peers. I am conflicted about this idea, because I can't help but agree, and yet I think this attitude is a big problem. Why shouldn't people want to read those papers? Undergrads have good ideas that are worth sharing. These ideas, and the ways that they are expressed, could probably use some polishing, but that's true of everyone. I wonder if the idea that our undergrad papers would hold little interest for those outside of our classmates is simply a product of the system. Typically, our papers are intended for a very limited audience of one, and as a result we fail to see the value of our own work. This is why Mark Sample feels the need to "instill in [his] students the sense that what they think and what they say and what they write matters-to [him]; to them; to their classmates; and, through open access blogs and wikis, to the world." The current essay-writing system of evaluation has not instilled this sense of value in us as students, and so we fail to see why people outside of our limited peer group would be interested in our work - what a sad position to be in! Then, as we continue on in our educations we extend this way of thinking - why should anyone outside of our field want to read our grad work? Perhaps the only reason we think our potential audience is limited is that it has been; if we remove these limitations through the open, online posting of our work, maybe our way of thinking would change along with our audience.
Conventional criteria for DH scholarship must take into consideration that digital scholarship happens within complex networks of human production (Nowviskie, 2012). Because of the interdisciplinary nature of DH, where scholars from different disciplines might develop research essays or theses, conventional methods of assessment that focus on the promotion and tenure of DH scholars alone might not always be suitable. For graduate students from different departments, any compilation of work produced and ideas shared that can assist in advancing research should be considered as possible evaluative criteria. Furthermore, DH professors could contribute to the evaluation process by assisting in supervising and/or assessing research essays and theses done in home departments (where, say, a collaborative program such as ours is the case). Currently, DH seems to put more focus on research and digital projects than on pedagogy and assessment, so any research initiatives by students could perhaps be given consideration for evaluation purposes.
In the area of applied linguistics, digital innovations in second language (L2) pedagogy could be promoted in L2 curricula and could also serve as part of the L2 evaluative criteria. In my research, I would like to promote collaborative learning so as to encourage L2 teachers to use digital texts, tools, and media while exploring the possibility of open access through discourse practices that might involve, say, a blog community for writing activities and peer review. An interesting research study might be to investigate whether blogging and/or digital writing portfolios would be suitable for evaluating the intercultural communicative competence of L2 learners. Of course, the digital integration of learning activities would be unique to each L2 curricular context, and might include activities such as webquests, critical reflection journals, VLC projects, and perhaps telecollaborative assignments. All of these components can be incorporated into an L2 portfolio and used as part of the L2 evaluation criteria. I am currently drafting a proposal for my ALDS academic writing course to investigate the effectiveness of using blogs in developing academic writing. Interestingly, I have just discovered that this idea has been in use since 2008 in CUNY's WAC program with its incorporation of Blog@Baruch (Gold, 2012). Using a portfolio for assessment is by no means exclusive to any one discipline, but because its function can be multimodal and even multidisciplinary, it could make for a useful tool in both L2 and DH scholarly projects as part of the evaluative criteria.
The discussion so far has been leading towards some kind of forum for us to post our work. I really like this idea, because it allows us all to get ideas out there and grouped together in a collective way. I like the idea of open access, so that it isn't just limited to academia - after all, isn't the point to have our ideas reach a broader audience? But like you guys said, there are problems with this idea, like unwanted trolls or poor ideas. Some kind of upvote system could be helpful for a quick glimpse of how the community feels overall about a comment or project. I suppose this is where evaluative criteria would really come into play, as a mere like/dislike system doesn't convey that much information. Maybe some kind of rating system on the effectiveness of a variety of aspects of the project could be implemented instead? Users could express whether the ideas being raised are innovative, whether these ideas were conveyed effectively, how outside sources complemented the ideas, etc. Of course there are more criteria that could be raised, but by categorizing a communal form of evaluation, users can get more of a breakdown of how the project has been perceived. To put projects, comments, and evaluations in perspective, users of such a site could say where they stand in the academic world (third-year undergrad, first-year masters, high school diploma) to help others filter through the sources of ideas on the site. Of course, verifying this kind of thing could be problematic… but I guess good or bad ideas could give them away?
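As a rough illustration of that breakdown idea, here's a small hypothetical sketch in Python; the criteria names and the aggregate function are placeholders of my own, just to show how per-criterion ratings could replace a single like/dislike tally:

```python
from statistics import mean

# Hypothetical criteria, echoing the aspects mentioned above.
CRITERIA = ["innovation", "clarity_of_ideas", "use_of_sources"]

def aggregate(ratings: list[dict[str, int]]) -> dict[str, float | None]:
    """Average each criterion (rated 1-5) across all readers,
    giving a per-aspect breakdown rather than one overall score."""
    breakdown: dict[str, float | None] = {}
    for criterion in CRITERIA:
        values = [r[criterion] for r in ratings if criterion in r]
        breakdown[criterion] = round(mean(values), 2) if values else None
    return breakdown

if __name__ == "__main__":
    sample_ratings = [
        {"innovation": 4, "clarity_of_ideas": 3, "use_of_sources": 5},
        {"innovation": 5, "clarity_of_ideas": 4},  # a reader may skip a criterion
    ]
    print(aggregate(sample_ratings))
    # e.g. {'innovation': 4.5, 'clarity_of_ideas': 3.5, 'use_of_sources': 5}
```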
As for the limited word count idea, I think it sounds really helpful for all of us. In the end I guess it will save us all time and act as an exercise in trying new things. If we're trying to break away from an essay style of writing, concise styles may be just what we need to start changing things. At the very least, this could help us with writing abstracts - not that I'm necessarily advocating keeping to our old publishing methods. But even if we do reach the stage of an open forum like the site we're envisioning, a concise summary of a project can be helpful. It will let us quickly get the important information out there before anyone commits reading time.