Contents contributed and discussions participated by Chris Milando

Matt Bastin-Millar

A Playful Multitude...Redditing again. - 16 views

started by Matt Bastin-Millar on 11 Mar 14 no follow-up yet
  • Chris Milando
     
    Thanks for sharing this Matt! I really wish we had a chance to talk more about de Peuter and Dyer-Witheford's article, but it didn't necessarily fit in with the topic of video games and the study of history. However, I think this article deserved its own class to be discussed with other readings about the same issue.

    I was going to show a clip from the bonus features of God of War III - after you beat the game, you get to see a mini-documentary about the process of creating the game and what it was like to work at Santa Monica Studio.
    You can watch a clip of it here: http://www.youtube.com/watch?v=uBHGs8HoYuo
    It's strange because I can't tell if they actually have as much fun working as the video makes it seem, or if the higher-ups cut out negative interviews or comments by members of the team. There is a point where they begin discussing crunch time (http://youtu.be/uBHGs8HoYuo?t=23m21s), but we don't get to hear any real complaints from the team, which I thought was kind of odd.
    Brian and I discussed the video a few days ago and he mentioned that it wouldn't really make sense for the company to make the gamer feel bad about what they're playing, and I realized that if we knew about all of the labour involved in making the games we play, we probably wouldn't play them as much (just as we might stop eating most of our favourite foods if we knew how they were made).
    I'm not very well versed in Marxist theory, but I'm pretty sure I read something about how this works with Capitalism - the labour that goes into a commodity is erased or made as invisible as possible so that the nicely packaged item you buy looks as if it hasn't been touched by human hands. This way, we don't have to think about what we're buying.
    If anyone knows where I might have read this, let me know - I'd like to get a refresher on Marxist theory.

    The reddit link really helps to contextualize the idea of immaterial labour outside of the gaming industry, and it got me wondering if it exists outside the working world as well.
    Do you think that schools operate in this manner to a degree? Can we consider homework as a sort of immaterial labour? There has already been scholarship and debates on whether students necessarily need to do work and readings outside of school, and although I'm not going to get into the topic much, these are a couple of neat things to look at: http://www.telegraph.co.uk/education/educationnews/10101361/School-bans-homework-to-give-pupils-more-family-time.html,
    http://www.debate.org/opinions/should-schools-give-homework,
    http://www.thedailyriff.com/articles/the-finland-phenomenon-inside-the-worlds-most-surprising-school-system-588.php,
    and http://cdn.slowrobot.com/3420131500195.jpg.
    (This isn't a topic that needs to be discussed, I just thought it was interesting to see how the concept of immaterial labour can be discussed outside the video game industry - and maybe could help us think of other ways immaterial labour affects our lives).
Chris Milando

March 10th Presentation Powerpoint and Notes - 3 views

digh5000 video games history digital dh education
started by Chris Milando on 12 Mar 14 no follow-up yet
Chris Milando

Highlights for Morris': The Future of the Civil War through Gaming: Morgan's Raid Video... - 2 views

  • Because elementary teachers have limited amounts of time for social studies, if they teach it at all, they need powerful direct experiences that allow them to introduce a topic and provide some context for it, a game or activity that helps the students learn a concept
  • All of this must occur in a block of about thirty minutes or the teacher will determine that it takes too much time
  • Fact checking is, of course, a historian’s greatest role to insure that the design team gets the story correct
  • ...11 more annotations...
  • Since games are best at helping the player to understand processes, the historian has an important role on the design team in identifying the process to be explored through the game
  • The rest of the team brings talents from art, computer, education, music, and telecommunication backgrounds, but they may not share the understanding of the historical events or the commitment to getting details accurate.
  • Historians contribute to the process of designing a gaming experience by providing three important aspects of the design sequence: fact checking, process identification, and curriculum collaboration.
  • Visitors to museums and historic sites and school students can engage in interactive history-themed games to learn historical, geographic, economic, or civic processes
  • As a marketing tool these games can help a historical site establish followers who become interested in the content prior to making a pilgrimage to the site
  • When teachers select educational games for the classroom, the games must be keyed into academic standards, and they must have measurable knowledge outcomes in subjects such as history, geography, economics, or civics.
  • Narrative-driven solutions are not appropriate for games
  • Video is very narrative-driven and allows the flexibility of looking at multiple perspectives from a single phone by referring to multiple quotations from primary sources and examining different characters present on a battlefield at the same time
  • A more interactive approach for classroom students would be for them to gather evidence from the battlefield to provide evidence of their construction of knowledge
  • A virtual experience like a game or video never takes the place of a direct experience, but it does allow a greater audience to become attracted to the site.
  • A good game never tells a story even if it is a simple adventure story; a good game always examines a process and engages the player in decision making.
  •  
    Quick Summary: Morris argues that video games can provide accurate historical information - as long as they are not presented as narratives. He argues that video games can (and should) be used for educational purposes, and can be effective tools in teaching history in a quick, engaging and fun way. He also advocates for the role of the historian and a general sense of collaboration in creating educational video games.
Chris Milando

Highlights for Gibbs and Owens': Writing History in the Digital Age » Hermene... - 0 views

  • historical scholarship increasingly depends on our interactions with data, from battling the hidden algorithms of Google Book Search to text mining a hand-curated set of full-text documents.
  • Even though methods for exploring and interacting with data have begun to permeate historical research, historians’ writing has largely remained mired in traditional forms and conventions
  • In this essay we consider data as computer-processable information.
  • ...69 more annotations...
  • Examples include discussions of data queries, workflows with particular tools, and the production and interpretation of data visualizations
  • At a minimum, historians need to embrace new priorities for research publications that explicate their process of interfacing with, exploring, and then making sense of historical sources in a fundamentally digital form—that is, the hermeneutics of data.
  • This may mean de-emphasizing narrative in favor of illustrating the rich complexities between an argument and the data that supports it
  • This is especially true in terms of the sheer quantity of data now available that can be gathered in a short time and thus guide humanistic inquiry
  • We must also point out that, while data certainly can be employed as evidence for a historical argument, data are not necessarily evidence in themselves
  • we argue that the creation of, interaction with, and interpretation of data must become more integral to historical writing.
  • Use of data in the humanities has recently attracted considerable attention, and no project more so than Culturomics, a quantitative study of culture using Google Books
  • the nature of data and the way it has been used by historians in the past differs in several important respects from contemporary uses of data
  • This chapter discusses some new ways in which historians might rethink the nature of historical writing as both a product and a process of understanding.
  • The process of guiding should be a greater part of our historical writing.
  • As humanists continue to prove that data manipulation and machine learning can confirm existing knowledge, such techniques come closer to telling us something we don’t already know
  • However, even these projects generally focus on research (or research potential) rather than on making their methodology accessible to a broader humanities audience
  • The processes for working with the vast amounts of easily accessible and diverse large sets of data suggest a need for historians to formulate, articulate, and propagate ideas about how data should be approached in historical research
  • What does it mean to “use” data in historical work?
  • For one, it does not refer only to historical analysis via complex statistical methods to create knowledge.
  • We should be clear about what using data does not imply.
  • Perhaps such a potential dependence on numbers became even more unpalatable to non-numerical historians after an embrace of the cultural turn, the importance of subjectivity
  • Even as data become more readily available and as historians begin to acquire data manipulation skills as part of their training, rigorous mathematics is not necessarily essential for using data efficiently and effectively
  • work with data can be exploratory and deliberately without the mathematical rigor that social scientists must use to support their epistemological claims.
  • historians need not treat and interpret data only for rigorous hypothesis testing
  • To some extent, historians have always collected, analyzed, and written about data. But having access to vastly greater quantities of data, markedly different kinds of datasets, and a variety of complex tools and methodologies for exploring it means that “using” signifies a much broader range of activities than it has previously.
  • data does not always have to be used as evidence
  • knowledge from visualizations as not simply “transferred, revealed, or perceived, but…created through a dynamic process.
  • Data in a variety of forms can provoke new questions and explorations, just as visualizations themselves have been recently described as “generative and iterative, capable of producing new knowledge through the aesthetic provocation
  • It can also help with discovering and framing research questions.
  • using large amounts of data for research should not be considered opposed to more traditional use of historical sources.
  • humanists will find it useful to pivot between distant and close readings
  • More often than not, distant reading will involve (if not require) creative and reusable techniques to re-imagine and re-present the past—at least more so than traditional humanist texts do.
  • we need more explicit and careful (if not playful) ways of writing about them
  • Stephen Ramsay has suggested that there is a new kind of role for searching to play in the hermeneutic process of understanding, especially in the value of ‘screwing around’ and embracing the serendipitous discovery that our recent abundance of data makes possible
  • historical writing has been largely confined by linear narratives, usually in the form of journal articles and monographs
  • easier than ever for historians to combine different kinds of datasets—and thus provide an exciting new way to triangulate historical knowledge
  • The insistence on creating a narrative in static form, even if online, is particularly troubling because it obscures the methods for discovery that underlie the hermeneutic research process.
  • Although relatively simple text searches or charts that aid in our historical analysis are perhaps not worth including in a book
  • While these can present new perspectives on the past, they can only do so to the extent that other historians feel comfortable with the methodologies that are used.
  • This means using appropriate platforms to explain our methods.
  • It is clear that a new relationship between text and data has begun to unfold.13 This relationship must inform our approach to writing as well as research.
  • We need history writing that interfaces with, explains, and makes accessible the data that historians use
  • the reasons why many historians remain skeptical about data are not all that different from the reasons they can be skeptical about text.
  • We need history writing that will foreground the new historical methods to manipulate text/data coming online, including data queries and manipulation, and the production and interpretation of visualizations.
  • Beyond explicit tutorials, there are several key advantages in foregrounding our work with data:
  • It allows others to verify historical claims;
  • In addition to accelerating research, foregrounding methodology and (access to) data gives rise to a constellation of questions that are becoming increasingly relevant for historians.
  • 2) It is instructive as part of teaching and exposing historical research practices; 3) It allows us to keep pace with changing tools and ways of using them.
  • Dave Perry in his blog post “Be Online or Be Irrelevant” suggests that academic blogging can encourage “a digital humanism which takes down those walls and claims a new space for scholarship and public intellectualism.”14 This cannot happen unless our methodologies with data remain transparent.
  • we should embrace more public modes of writing and thinking as a way to challenge the kind of work that scholars do.
  • Google’s data is proprietary and exactly what comprises it is unclear
  • Perhaps more importantly, this graph does not indicate anything interesting about why the term “user” spiked as it did—the real question that historians want to answer.
  • But these are not reasons to discard the tool or to avoid writing about it
  • Historians might well start framing research questions this way, with quick uses of the Ngram viewer or other tools
  • But going beyond the data—making sense of it—can be facilitated by additional expertise in ways that our usually much more naturally circumscribed historical data has generally not required.
  • Owens blogged about this research while it was in progress, describing what he was interested in, how he got his data, how he was working with it, along with a link for others to explore and download the data.
  • Owens received several substantive comments from scholars and researchers.
  • These ranged from encouraging the exploration of technical guides, learning from scholarship on the notion of the reader in the context of the history of the book, and suggestions for different prepositions that could further elucidate semantic relationships about “users.”
  • Sharing preliminary representations of data, providing some preliminary interpretations of them, and inviting others to consider how best to make sense of the data at hand, quickly sparked a substantive scholarly conversation
  • this chart is not historical evidence of sufficient (if any) rigor to support historical knowledge claims about what is or isn’t a user.
  • How far, for example, can expressions of data like Google’s Ngram viewer be used in historical work?
  • how does one cite data without black-boxy mathematical reductions, and bring the data itself into the realm of scholarly discourse?
  • How does one show, for example, that references to “sinful” in the nineteenth century appear predominantly in sermon and other exegetical literature in the early part of the century, but become overshadowed by more secular references later in the century? Typically, this would be illustrated with pithy, anecdotal examples taken to be representative of the phenomenon. But does this adequately represent the research methodology? Does it allow anyone to investigate for themselves? Or learn from the methodology?
  • Far better would be to explain the steps used to collect and reformat the data; ideally, the data would be available for download
  • Exposed data allow us to approach interesting questions from multiple and interdisciplinary points of view in the way that citations to textual sources do not
  • As it becomes easier and easier for historians to explore and play with data it becomes essential for us to reflect on how we should incorporate this as part of our research and writing practices.
  • Overall, there has been no aversion to using data in historical research. But historians have started to use data on new scales, and to combine different kinds of data that range widely over typical disciplinary boundaries
  • The ease and increasing presence of data, in terms of both digitized and increasingly born digital research materials, mean that—irrelevant of historical field—the historian faces new methodological challenges.
  • Approaching these materials in a context sensitive way requires substantial amounts of time and energy devoted to how exactly we can interpret data
  • we have argued that historians should deliberately and explicitly share examples of how they are finding and manipulating data in their research with greater methodological transparency in order to promote the spirit of humanistic inquiry and interpretation.
  • Historical data might require little more than simple frequency counts, simple correlations, or reformatting to make it useful to the historian looking for anomalies, trends, or unusual but meaningful coincidences.
  • To argue against the necessity of mathematical complexity is also to suggest that it is a mistake to treat data as self-evident or that data implicitly constitute historical argument or proof.
  • Working with data can be playful and exploratory, and useful techniques should be shared as readily as research discoveries
  •  
    Gibbs and Owens explain that data and information need to be played with. "Data does not always have to be used as evidence" in itself - it can also be used as a springboard for questions and further discovery (data is "generative").
Chris Milando

» Highlights for McCall's: Historical Simulations as Problem Spaces: Criticis... - 0 views

  • The concept of problem space is a highly useful tool for studying historical simulations, teaching history, and using the former to help in the latter.
  • Simulation games are interpretations of the past designed as problem spaces
  • In the field of educational and cognitive research a problem space is a mental map of the options one has to try to reach a goal, the various states.
  • ...54 more annotations...
  • There is no implication of physical space. In contrast work by some scholars of video games, most notably Jenkins and Squire, discuss video games as contested spaces: here there are certainly problems, but the space itself (or rather the representation of it) becomes critical.
  • concepts a historical problem space has the following features:
  • Players, or in the physical world, agents, with roles and goals generally contextualized in space
  • Choices and strategies the players can implement in an effort to achieve their goals
  • That simulation games represent problem-spaces is in some respects just a more sophisticated articulation of the basic core of game-ness. By most definitions games require players, conflict, and a quantifiable outcome.
  • What a historical simulation game does beyond this basic game-ness, however, is craft a virtual problem space that represents to some degree a real-world one.
  • As expansive as a game might be in its treatments, it will impose arbitrary limits on its subject. These limits begin with the roles and goals of the player, decisions that shape the entire design
  • The quantifiable gameplay elements and mechanics all, in a tightly designed game any way, factor directly into whether the player achieves their goals.
  • There has been excellent discussion on Play the Past about the appropriateness of, and methods for critiquing simulations historically.
  • they are interpretations in the form of quantifiable problem spaces
  • It suggests considerations for rigorous and meaningful criticism that is holistic and sensitive to the medium
  • why Colonization codes native peoples the way it does, why Civilization does not deal with social issues in cities, or why East India Company does not represent the tensions between English and Indian customs—one needs to consider holistically the problem space selected by the designers
  • Generally speaking however—and I welcome examples where this is not the case—simulation games, especially pleasurable and/or commercially successful ones must commit to a very small set of roles and goals, often one role and one goal. Even where roles and goals differ and conflict, they tend to be set up as binary opposites or at least draw from the same well of constraints and affordances.
  • This is in large part, again, because games must be closed functioning systems: each part must connect to every other part. So a game cannot represent roles and goals well that do not fit into the core choices, affordances, and constraints of the chosen problem space.
  • Slaves in the game become a commodity, a valuable source of cheap labor and it is not unreasonable at all for players to initiate battles in the hopes of gaining more slaves for mines and building projects.
  • Suppose, however, one wanted to criticize formally this historical representation of slaves. One might start by noting that these slaves have very little agency.
  • slaves become nothing more than affordances, resources for the player to exploit in the game
  • Why does the game not portray the agency of slaves? How Longbow defined the primary problem space, the human player’s problem space, is a critical answer. For the player Philip king of Macedon is the role with a goal of uniting Macedonia and building a Balkan empire. With this role and goal driving the articulation of the problem space, depicting slaves in the game as affordances is fully understandable.
  • It is important to note, however, that saying a portrayal of ancient slaves, native Americans, Hessian mercenaries, railroad barons or any other agent or aspect of the past, takes the form it does because of the problem space is not meant to be a tactic for ending discussion or defending an implementation (one could imagine such a chilling effect: “why are they portrayed this way? Because the problem space demanded it. Oh … okay, so what’s for lunch?”). It is meant to focus criticism on a game holistically and consider how the affordances and constraints of the simulation game medium and the interests and goals of a game’s creators (their concerns, assumptions, hopes, attitudes, what have you) shape a game’s interpretation of the past.
  • These designers have their own goals, and they are generally different from those of the historian.
  • At no point in the process of identifying problems of historical interpretation in a simulation game should the goal be to blame a game designer for somehow failing to get “the facts straight” (whatever that means) or for intentionally misrepresenting the past.
  • I suggest, as historians, that sentiment also applies to understanding why a historical game takes the form it does. The goal should not be to assign blame but to understand how the past is represented in games that suggest they are about historical topics and why it is represented in the ways they are. This requires understanding the medium and its constraints and affordances, the audience and its expectations, the designers and their goals, and the ways these and other factors shape how knowledge of the past is transmitted from that past to our living rooms
  • So, what kinds of questions might one ask of a simulation game as a problem space and what kinds of meaningful criticisms/evaluations can be made? A few, necessarily incomplete suggestions:
  • One might meaningfully question why the particular main roles and goals for the game were selected in the same way one can meaningfully question why certain generations of historians privileged one set of topics and questions over another. Indeed meaningful answers to such questions can be given based on careful research of prevailing ideas at the time. Simulation games, for example, tend to be inclined to issues of domination whether in political, military, or economic forms – discussing why this is continues to be a lively debate.
  • One absolutely should question whether the roles and goals selected for the players are historically legitimate. In other words, do they reflect what our evidence suggests were some important roles and goals in the past?
  • A thorough critique of why slaves are mere tools in Hegemony, happiness is the defining metric for success in CivCity: Rome, Indian culture is not represented in East India Company, or any other element in any game, should consider the goals set out for the game and the supporting game mechanics to be compelling.
  • So, suppose that one accepts the roles and goals of a game as historically valid goals, i.e. goals that reasonably represent what good evidence suggests motivated some peoples of the past. That might well mean that a thorough challenge to the portrayal of some historical agents in the game could only be made by suggesting:
    the agents could not reasonably be conceived to play that role in the problem space from the point of view of the player, the primary agent
    • Chris Milando
       
      So it's not just the experience. The spatial context also helps us to understand what caused specific decisions and events in history. While books can accurately tell us what happened, games can make us understand /why/.
  • So challenging the portrayal of slaves in Hegemony, if one accepts the historical validity of the role and goals (which I do), would require suggesting how slaves could have been portrayed more complexly and validly within the defined problem space, how they could have had a greater portrayal of agency through expanded roles and goals.
  • It becomes necessary to move outside the game design itself and consider what external factors (modern cultural assumptions and misunderstandings, design deadlines, demands of game-ness) shaped the inaccuracies.
  • simulation games are human interpretations of the past subject to certain constraints, as sources and media they should be considered holistically, and this can be done by thinking in terms of problem spaces.
  • When it comes to the history class, there is significant educational value to studying the past in terms of historical problem spaces. This is not to say that students should come to view the past exclusively or mostly in terms of problem spaces. It is simply to suggest that problem spaces provide an excellent framework for achieving certain goals in a 21st century history education.
  • Players and actions in physical space: One of the points I made in Gaming the Past[3] is that teachers and students too easily and often forget that humans in the past (and present) operated in physical, spatial contexts. Even the most intellectual/emotional/spiritual of goals is embodied in a physical and spatial context. Understanding that context helps understand agents’ roles, goals, choices, affordances, and constraints.
  • what more legitimate roles the agent could have played in the game that would mesh with the system incorporating the player’s roles and goals in the problem space
  • Players with choices and strategies
  • Affordances and constraints: Agents in the past (and present) have opportunities and roadblocks, abundances and scarcities, talents and weaknesses, access and exclusion. These affordances and constraints shape their choices, goals, and roles.
  • Spatial context: it is worth repeating. Human motives, goals, and actions are physically contextualized as are many of the affordances and constraints that influence these things. The psychological, the emotional, the spiritual, and the intellectual play critical roles, to be sure. Human goals and actions, however, cannot be severed from their environments and remain fully comprehensible.
  • Why use the idea of problem space as a framework for studying, teaching, and learning about the past?
  • One of the goals of history education should be for students to understand how factors shape and promote certain actions and outcomes over others, how everything is hardly ever equal, and how everything is contextualized.
  • It teaches to contextualize actions within space rather than divorcing choices from their real-world context. Humans in the past and present do not make decisions in vacuums. Learning to consider the context for decisions and actions before considering the decisions and actions is critical to studying human behavior.
    • Chris Milando
       
      This is what it is all about. We can learn from history books what happened, but games allow us to feel and understand /why/. Information on its own is insufficient - we need context to get a proper understanding.
  • It fosters flexible problem solving and critical inquiry as students consider why actors made the choices they did, what else they could have chosen, and what the likely results of those other choices might have been (all of which is important counter-factual reasoning). It undermines the perennial problem of viewing the past as pre-determined. Training flexible problem solvers like this should be a goal high on the list for history teachers. These are the thinkers that can see many sides of a problem, analyze different possibilities, and, hopefully, come up with excellent solutions.
  • make notes on the following: role, goals, geographical setting, types of choices available, affordances (I didn’t call them that at first, but got there quickly), and constraints.
  • I indicated I would start the class off by giving some background biographical information on Pliny.
  • Comprehension: even those who sometimes struggled with the challenge of making sense of primary sources and organizing a variety of historical evidence reported their sense that they understood Pliny and his world better than they normally understood many topics we explored
  • It is too easy for evidence and facts (such as they are) to get divorced from one another and appear meaningless, particularly when one lacks a deep background in a subject.
  • Engagement: problem solving is inherently engaging
  • Flexibility and Creativity: Historical imagination requires individuals not only to understand the evidence for what did happen but also to use that evidence to consider what could have happened. To be able to reconstruct a world of possibilities requires creativity and flexibility far beyond that fostered by the rote examination of what did happen and the simple acceptance of standard explanations for why it had to be that way. Again, this is the kind of powerful thinking a 21st century history education should foster: ending not with how things are but considering how they can be.
  • what reasonably valid simulation games offer most of all to students of the past is the ability to explore problem spaces from the strategic, if not emotional and intellectual, perspective of a player/agent in the space.
  • Simulation games are particularly good at modeling choice in problem spaces.
  • When students play and critique simulation games, they can actually make choices within a problem space and see how they are resolved.
  • potentially a much closer analogy to the reality of the past problem than regular classroom media
  • Of course we must be very careful when using simulation games to help students study problem spaces. The games will tend to focus on one set of roles and goals in the problem space and it is essential to remind students that there are many roles and goals.
  •  
    Quick Summary: McCall explores problem spaces, self-contained historical situations in which a player can relive a specific moment in time and come to understand how history came to be through the decisions they are forced to make.
Chris Milando

Highlights for Chapman's article: Privileging Form Over Content: Analysing Historica... - 0 views

  • At this early stage in the serious study of historical videogames, we must be sure to adopt an approach that privileges understanding the videogame form (and the varying structures this entails) and its integral role in the production and reception of historical meaning, rather than solely, or even primarily, on the content of specific products as historical narratives.
  • In essence, when we play we may well be “reading” (i.e. interpreting and negotiating historical signifiers and narrative) but we are also “doing” (i.e. playing).
  • Content cannot be separated from its form, just as history cannot be understood separately from the modes in which it is written, coded, filmed, played, read, or viewed.
  • This last concern is integral to understanding games because, unlike the majority of historical forms, videogames have an additional layer of meaning negotiation because they are actively configured by their audiences
    • Chris Milando
       
      This is super important: it defines form (the experience) as what we look for in a video game, and what the medium will be used for in learning about history.
  • To do so requires an analytical approach that fuses Salen and Zimmerman’s three schemas of games: play, rules, and culture, while allowing the consideration of the player’s role in the negotiation and fusion of this triad.
  • This article calls for academic work on historical videogames to move beyond the examination of the particular historical content of each game (i.e., historical accuracy or what a game ‘says’ about a particular period it depicts) and to adopt an analytical framework that privileges analysis of form (i.e., how the particular audio-visual-ludic structures of the game operate to produce meaning and allow the player to playfully explore/configure discourse about the past).
  • Simply focusing on the accuracy of the game often re-informs us about popular history rather than recognizing the opportunities for engaging with discourse about the past (and the nature of this discourse) that this new historical form can offer
  • Critiques of particular historical films were assumed to be indicative of some kind of basic structural inability of film to function as a mode of historical expression. Many scholars concluded that film could not constitute “proper history.”
  • the notion of “accuracy” or “truth” is collapsed with and thus taken to mean, “in alignment with the narratives of book-history.”
  • historical videogames mostly relinquish the telling of the experiences of specific historical agents, and favour instead typical historical environments, characters, scenarios, and experiences.
  • Obviously the aim of the developers of historical videogames like Civilization or Brothers in Arms (in addition to create an entertaining game), is to create history, not as it can be represented in a book but as it can be represented in a videogame.
  • Analysis on the basis of content alone almost invariably involves comparisons with historical narratives constructed and received in book form, which is often problematically understood as the only form capable of producing “proper” history
  • Most often these narratives are used as the benchmark for establishing truth or accuracy and thus, the examination of content
  • These written interpretations are taken to be history (or more accurately, the past) itself, rather than history as it can be written, which naturally cannot be bluntly compared to history as it can be played
  • history on film must be considered on its own terms.
  • Games will likely never produce the same opportunities for discourse as a book, but then why should they?
  • Each form utilizes different structures that, considered alongside one another as part of a larger transmedia meta-discourse, create much more interesting collaborative opportunities for establishing historical understanding than one or the other alone.
  • Examining only content also necessarily involves asking questions about what is included or left out of a particular videogame’s representation. This is rarely a useful question beyond the basis of a general common sense. Historical videogames are, like all histories, mimetic cultural products
  • The benefit will be more than just increased knowledge of a particular historical representation, but also insights about form (a particular game-structure’s operations) that are transferable to an understanding of games with similar ludic (and audio-visual) elements.
  • how much is to be actually gained by knowing, for instance, that certain shoes were not genuinely available until the 1490s rather than the 1470s, or that a particular character, though historically typical, did not truly exist? Relatively little, compared to the “feel” of a period or location, the life, colour, action, and processes (with which the book can struggle) and which can be easily communicated in games.
  • It is only by focusing on form that we can understand how the game can produce meaning in these, arguably, new ways, that neither book nor cinema can effectively utilize, whilst still remaining engaged with a larger historical discourse.
  • Historical videogames must be understood on their own terms, without relinquishing our understanding of the basic tenets of historical theory as they universally apply to history as a practice within any form (e.g. history is referential and representational).
  • Accepting this challenge requires a new approach to historical videogames, one that involves analysing the structures that produce meaning.
  • These are structures which create opportunities for players to negotiate meaning in the ways that we are familiar with from other more “passive” media but also allow them to actively configure their own historical experience through play.
  • the agency which the player wields and the challenges they confront, which allow a somewhat unique form of engagement with historical discourse.
  • though written logically, are still subjective aesthetics that attempt to represent historical experience through reactively producing signs to be read and responses to be acted upon.
  • In short, in any historical videogame, the aesthetics of historical description also function at a ludic level, producing a form of “procedural rhetoric” that, depending on a particular game’s (or genre’s) structures, can influence virtually all of the other historical signifiers through which the game produces meaning.
  • Having identified combinations of these audio-visual-ludic structures, we can then approach other games that operate similarly with an understanding of what opportunities for historical meaning-making they are likely to offer
  • When we look at the videogame form in this way we can, I hope, begin to create a cohesive understanding of how games represent the past and what structures create particular playful opportunities for players to explore, understand, and interact with these representations.
  •  
    Quick Summary: Do we need to look to games for historical accuracy? Chapman argues that we don't really - instead, we need to look to them for a historically accurate experience. This is what helps us to understand the context behind the information we get from books.
Chris Milando

Highlights for de Peuter and Dyer-Witherford article: Mobilising and Counter-Mobilising... - 0 views

  • This article is a preliminary portrait of work in the video and computer game development industry, a sector of creative, cognitive labour that exemplifies the allure of new media work
  • there are promising signs of game designers and audiences creatively reorienting their playful dispositions and intellectual capacities toward the subversion of the very logics of expropriation, commodification, and corporatisation that sustain the digital play industry in particular and global capital in general.
  • this article examines the conditions of digital game labour, this cultural industry’s “work as play” mantra, the pleasures and potentialities of game production, the blemishes that mar this attractive vista, and the new infractions these tensions provoke
  • Drawing on interviews we conducted with game developers in Canada
  • In addition to looking at how game labour is mobilised in commercial game development, we also consider in this article how game labour is counter-mobilised – dissident directions that are emerging in the subjectivities, organisation, and creations of this form of new media labour.
  • tactical games created in the context of political activism
  • a job making virtual games seems employment nirvana – a promise of being paid to play
  • experiments in open-source game development
  • our inquiry into the composition of game labour is part of a longer study of computer and video games. Our study proposes that interactive games are the paradigmatic media of “Empire”, using that term in the inflection given to it by Michael Hardt and Antonio Negri in their companion books, Empire (2000) and Multitude (2004).
  • Our hypothesis is that digital games are produced by and productive of the multi-layered arrangement of military, economic, and subjective forces associated with the form of imperial power theorised by Hardt and Negri
  • the five reasons we think digital games are exemplary creations of Empire
  • The largest game firms and markets are located in the United States, Japan, and Europe, though South Korea and China are quickly becoming burgeoning regions of game-capital’s expansion
  • the corporate organisation of the game industry spans the “world market” (Hardt and Negri, 2000: 254-256). Game companies roam the entire planet in search of workers and consumers, establishing a globe-girdling network of production and consumption.
  • Early digital games were created during the Cold War by hackers and hobbyists within the military-academic complex. The creations of this autonomous invention power were only later harnessed by entrepreneurs – the act of capture that set in motion a multi-billion dollar cultural industry (Kline et al., 2003: 86-88). Since its inception, the digital play industry has continually discovered profitable new strategies by capturing counter-play.
  • The concept of immaterial labour invites us to assess the multitudinous potentialities of the new forms of work
  • The storylines, missions, and emotionality of countless video and computer games express and reinforce the military, economic, and political logics of Empire
  • America’s Army, with its recruitment and training goals, The Sims, with its simulation of extreme consumerism, Impossible Creatures, with its bio-engineering experiments, and Vice City: Grand Theft Auto, with its cynicism and violence are virtualities produced by and productive of Empire
  • digital games are a paradigmatic media of Empire
  • Digital games exemplify Empire’s mobilisation of “immaterial labour”
  • non-commercial, dissident applications of digital play have emerged in the context of the counter-globalisation movement – from feminist game art to game-inspired experiments in distributed counter-planning
  • Immaterial game labour also reveals the blurring of work and non-work time
  • The activity of making and playing games combines the range of qualitative features of immaterial labour: scientific know-how, hi-tech proficiency, cultural creativity, human sociability, and cooperative interactivity
  • discontented game workers have recently ignited controversy around exploitive practices, like excessive hours, that are common in “cool” media industries.
  • The transnational architecture of game production reminds us that the world market may be a “smooth” space but it’s far from level
  • A concept such as immaterial labour, for example, enables us to defamiliarise interactive play, reconceive it, and glimpse aspects that are often occluded
  • The concept of Empire and the discussions surrounding it, provide, we argue, a rich and coherent – although also eminently debatable – depiction of post-Fordist, transnationalised capitalism
  • By examining game work in terms of immaterial labour we can start to show how it relates to other aspects of this social field – participating, for instance, both in the structures of “networked power” that uphold contemporary sovereignty, and the insurrections of the “multitude” that challenge it.
  • To draw an analogy to the music industry, the game publisher is like the record label, the developer like the band. Developers make games, while publishers finance, distribute, and market them.
  • Publishers include the colossal video game console-makers (Microsoft, with its Xbox, Nintendo, with its GameCube, and Sony, with its PlayStation II), a collection of transnational publishing conglomerates (e.g., Electronic Arts, THQ, UbiSoft), and a number of smaller but still powerful publishers.
  • Publishers exert massive influence over what games are made and when, largely because of their control of financing and marketing levers
    • Chris Milando
       
      Super Important!
  • tremendous control consolidating in the hands of one company in particular, Electronic Arts (EA).
  • Publishing is the site for strategic control in the games sector because their marketing campaigns – which today account for as much as a third of a game’s total costs – command the all-important distribution bottleneck, influencing what games actually make it to a store shelf.
  • it is not uncommon for publishers to cancel a development contract mid-way.
  • At the top are a handful of mammoth developers with between five hundred and over two thousand employees, releasing dozens of titles each year.
  • Below these is a stratum of mid-sized studios that enjoy an established record with one or more publishers, have more than one hundred employees, and release a couple of games each year.
  • In addition to operating their own in-house development studios, publishers contract “third-party” development studios to make games for their publishing label
  • Finally, there are innumerable start-ups – typically digital “garage” operations developing prototype games in the hope of getting a publishing deal. Many, perhaps most, perish.
  • Developers are significantly disadvantaged in relation to publishers, to whom all but the largest or most famous studios relinquish creative control and intellectual property rights
  • One studio manager describes the power relationship of a developer to a publisher as ‘indentured servitude’.
  • developers are ‘the David; the publisher is the Goliath’
  • Then there is an echelon of small studios with less than one hundred employees, producing one game every eighteen months or so, often scrambling from one contract to the next.
  • game development is an exemplary site of “immaterial labour”
  • This term is used by autonomist theorists to designate the ‘distinctive quality’ of work in ‘the epoch in which information and communication play an essential role in each stage of the process of production’
  • Hardt and Negri (2000: 289-294) distinguish various sub-categories of immaterial labour, including work with computers and networks, work manipulating and managing emotion, work involving communication and images, and work entailing high levels of coordination and cooperation
  • The net of immaterial labour is cast widely: bio-tech lab technicians and game designers – as much as call centre operators, childcare providers, and even virtual game players – are engaged in immaterial labour
  • Programmers, or engineers, develop “game engines” and write the code on which a game’s functionality is based
  • We want to stress that digital games are “immaterial” commodities and they may be designed by “immaterial” labourers – but at some point in the production chain some unmistakably material labour is required, producing a tangible good, whether that be a game cartridge or a game console.
  • we focus our attention on the labour at the “high” immaterial end of the game value chain in the North, and the unique forms of incentive and discipline it incites
  • the day of the lone-wolf commercial game developer is definitively over
  • Intensely cooperative, the labour of developing a single game can evolve over a period of between six and twenty-four months, and involve teams of between twenty-five and one hundred people.
  • Most big games cost $5-10 million to produce, and $25 million budgets are ‘around the corner’
  • Designers establish the basic game concept, characters, and play mechanics.
  • The main job types in game development include design, production, art, programming, and testing
  • Artists develop characters, virtual worlds, animation, special effects, and sound.
  • Today’s most visible immaterial workers are those in high-tech milieu and in cultural industries
  • Producers have a “leadership role” in administering the budget, coordinating the project, and managing the development team; they are charged with maintaining a coherent vision of the game’s design, facilitating communication among the sub-teams, and addressing “personnel and motivational issues”. Finally, testers play a game to evaluate it for “bugs” and playability
  • Game development typically involves four stages. In “pre-production” the conceptual infrastructure for the game is designed, its look mapped, schedules created, and resources assigned. In “prototyping” programmers create the tools that build the game, and the rendering tools which iterate animation or special effects, permitting artists to design, review, and edit their creations. Artists are working on two- and three-dimensional models, developing textures, and animation for characters and the game world, while software engineers code the game mechanics and the story. The third stage is “production”, with its sub-stages of alpha, beta, and final. Game engines are now complete, and characters and animation are embedded in a working game. At “alpha” the game isn’t fully stable, but all the art, code, and features are present. Testers are evaluating levels, and returning them for correction to the development team. At “beta” the game should be full and stable, adapted to the “platform” it will play on, and it is undergoing play testing. At “final” the product is shipped to the publisher, who will run its own tests before approving a game for release
  • The empirical basis for the analysis that follows is a three-year study of the Canadian video and computer game industry. We conducted personal interviews with about forty games workers, including producers, artists, programmers, designers, testers, studio executives, and owners
  • Comparable to the Australian situation, the Canadian game industry is a small but significant node in the global digital play business
  • Canada hosts a number of renowned developers, like BioWare Corp. and Relic Entertainment; several multinational mega-publishers, like EA and Rockstar Games, operate (and buy out) studios in Canada; and the cities of Montréal and Vancouver are internationally recognised metropolitan hubs of game creation
  • The Canadian game sector mainly services publishers based in the United States and Europe who want to take advantage of a lower-valued Canadian currency, a skilled labour force, and, in Montréal especially, attractive government subsidies
  • a preliminary profile can be drawn: relatively young, generally well paid but unevenly precarious, and overwhelmingly male
  • The largest proportion of the game workforce is between their late-teens and early-thirties, a generational bracket that jives with the twenty-nine year-old age of the “average” gamer
  • many new recruits to game jobs hold college or university degrees in areas such as computer science, physics, and fine arts
  • There are few university-level specialised game programs, though in 2004 EA – a company that sees ‘universities as the next-generation of talent’ – donated US$8 million to help launch just such a program (Rueff cited in Delaney, 2004)
  • “Celebrity” designers can earn $500,000 or more; programmers and artists, about $60,000; and game-testers are often paid minimum wage
  • There are, however, a growing number of developers setting up in smaller towns, due to the growing supply of skilled labour and the lure of reduced overhead costs – including lower wages.
  • The game workforce is, by and large, male.
  • Even if there has been a shift in the gender of game players, ‘there’s not much of a change in hiring numbers
  • The verdict of most women insiders is scathing: ‘It’s a total old boys club’
  • Projects to explore paths beyond the gender clichés in virtual game content ‘do not get support in the industry at all…. [Y]ou have a really dominant gender leading and they’re the ones who have the purse strings’.
  • Creative expression, cooperative activity, and a “playful” environment arose again and again in our interviews as prime sources of enjoyment in game work.
  • the prospect of achieving this independence – let alone actually realising an “original” idea – is increasingly difficult in a risk-averse industry that prefers formula to experimentation. Yet the possibility of that creative autonomy arrests the imagination and secures the loyalty of countless aspiring developers.
  • There’s nobody telling you how to do something. There’s no paperwork getting in your way. There are no set rules that you have to follow – rules that you don’t feel are necessary. There’s no formal way that you are supposed to do a technical design.
  • Game developers often talked about space for creative freedom in relation to their studio’s “flat” organisational structure, which seems to be most common in small to mid-size studios. ‘There’s little bureaucracy. It’s just people doing their thing to make good games’, explains one programmer. Others stress the self-organised character of the collaborative process:
  • We have very little hierarchy, very little formal structure, very little “understood” ways of doing things…. In a situation where everyone more or less knows their role, it works out well: everyone just divides the work, you work on your bit, and everyone knows what to do. It just works out.
  • To function smoothly, though, a smooth, open play of communication is required.
  • Cooperation within and among the sub-groups of a development team is cited by game workers as a most gratifying aspect of their work
  • Software is a very dynamic, huge system: this is something I find attractive about the games industry. You have all these different components: artists, programmers, legal, production, data. You’ve got all these people that don’t understand each other’s jobs. And (yet) you have to make that all come together as one cohesive piece of art.
  • A third pleasure of game development is what we call the work as play ethos – a central strategy deployed by game-capital to mobilise immaterial game labour.
  • The work as play milieu of contemporary game studios spans a varying range of perks and promises: flexible hours, lax dress code, free food, fitness facilities, parties, and funky interior design; and it also encompasses a host of intangible qualities, from “rebelliousness” to twisted humour to self-expression.
  • Generally, when you go to work, it’s not, “Ah, I gotta go to work”. It’s, “I’m going to work, cool!” You come in, you see your friends, you get to make video games, and you get to play some. It’s pretty cool. It’s really not even so much like work here.
  • studios bend to a work as play model in part because singularity and openness is understood to facilitate the flow of creativity
    • Chris Milando
       
      Super important! Theme: Creativity requires play
  • Creation is only possible when there’s a certain type of confidence, of friendliness and cooperation between the people who are participating in the work
  • ‘industrialization of bohemia’ – in particular the idea that games corporations aren’t actually part of the ‘corporate world’
  • To tap this “jaded intelligence” game studios tend to elaborate a work as play ethos that promises great ‘leeway to express yourself’: ‘People have to be entirely comfortable to be who they are to come up with anything spontaneously, to have that real dynamic’
  • The “anti-corporate” culture of many game studios would seem to be exemplary of McRobbie’s (2002: 109) incisive critique that, in creative workplaces, ‘[w]hen the individual is most free to be chasing his or her dreams of self-expression, so also is postmodern power at its most effective’.
  • Another reason studios bend to a work as play model is because many companies have a recruitment and retention problem.
  • various disciplinary mechanisms are employed so to say to staff: ‘Oh my God, you don’t want to leave here!’ The campus-like Vancouver-area studio of EA provides a striking example. Employing 1000 people, the sleekly designed complex features a gym, pool tables, basketball courts, a soccer field, subsidised gourmet food, and snowboarding fieldtrips, among other “bonuses”.
  • Studio executives are anxious to not only attract new youthful employees but also prevent current team-members from leaving midway through the production schedule, or defecting to a competing studio or another industry.
  • The above-discussed dimensions of the labour of game development – the capture of human creativity, the high level of cooperation, the re-making of work as play – resonate strongly with the hypothesis of Paolo Virno (2004: 110) that post-Fordist production is, in a profound paradox, the ‘communism of capital’.
  • Our interviews showed that developers initially delighted by their “work as play” jobs often found that the very factors that first appear so attractive – individual autonomy, flexibility, “cool” corporate culture, and even playing games – had a dark side. We turn now to instances where the logic of work as play breaks down, revealing varieties of play slaves and a ratcheting of corporate drone
  • The length of the working day in game studios varies widely depending on company, rank, and stage of development
  • studios open their doors to extreme hours of digital drudgery: ‘forced workaholism’ is the diagnosis in IGDA’s recent study of Quality of Life in the Game Industry
  • Most North American developers are salaried, so the extraordinary overtime put in at game studios is unpaid labour.
  • The personal accounts we received give every indication that studio workplaces are, with varying degrees of intensity, obsessively hard-driving and punishingly disassociated from domesticity, sleep, and nourishment.
  • EA employees report that ‘work inside the company more resembles a fast-moving, round-the-clock auto assembly line’
  • In Canada, EA has been an active lobbyist against attempts to regulate hours in high-tech industry.
  • One computer science professor who spent a semester-long “residency” at EA reports that the game giant – which he describes as a ‘ruthless meritocracy’ – prefers to hire young students directly from university not only because of their up-to-date technological know-how but also because of their discounted salaries and heightened ‘idealism’
  • At least one manager we talked to was deeply critical of studios that get these young guys that come out of film school, game programming school, or art school and get them to work their asses off….If I had a dime for all the people I knew who are sort of resentful of their experience at their first or subsequent game industry job because the corporate culture was very subtly coercive: “You should be working here at 8:00 at night and, if you aren’t, then you’re slacking off!”
  • interest of game companies in extracting more labour for less from their workers. Another factor is the nature of the revenue model that keeps most third-party development studios afloat: a developer receives a payment when they meet a “milestone” set with their publisher, normally triggered when a developer dispatches a component part of the final game product. Developers with a hit game behind them may be able to negotiate tolerable deadlines, but vulnerable start-ups and small studios – in a deeply competitive business – often can’t. ‘Sometimes companies are just so intent on getting that contract that they’ll promise anything – at the expense of these poor programmers who have to make the bloody thing’.
  • This ruthless work regimen reflects and reinforces divisions based on age, gender, and parenthood. Those in long-term relationships, those who have children or want to start a family, or those who simply don’t want to reduce the time of life to time spent at work, are ostensibly excluded from the game sector, or will find it tremendously difficult to commit to the ludicrous hours that can be expected of them
  • Enduring excessive hours without complaint is tied to the game industry’s ‘hard work ethic’ (IGDA, 2004a: 31), which we would add has a machismo quality to it that joins the other manifestations of sexism that have functioned to exclude women from working in game studios
  • Here we catch a view of the demanding practices of self-regulation in game studios, an aspect of Empire’s search for ways of realising ‘unmediated command over subjectivity itself’ (Lazzarato, 1996: 135; see also McRobbie, 2004). Consider this developer’s remarks: When you work in this industry you are judged for what you’ve done. So you want to make a good name for yourself. You want people to consider you a hard worker, a good worker – a guy that can do a bit more than what’s expected. Because the thing with the game industry is that it is, really, a small business.
  • Stress is a major problem in development studios. Referring to the exhausting rhythm of work, one game artist comments: ‘I don’t think it’s good for you to work like that, that often. And to be creative all the time without a break – it just isn’t good for your brain, or for your creativity, potentially’
  • The turnover rate in the game industry is described as ‘nothing short of catastrophic’: over 50% plan to leave the industry within ten years, 35% within five years
  • ‘Normally, you sign a contract of employment with a company and any idea you have becomes theirs’
  • Many studios are rife with quiet suspicion about ideas being ‘stolen’
  • The five had signed “non-compete” agreements and this legally blocked them from working for another North American games company for one year after terminating their employment. A court judged in favour of UbiSoft.
  • It seems that UbiSoft thinks of Montréal as a plantation – any worker who dares to escape will be hunted down by lawyers and forced out of business’
  • ‘No one is doing any original games’. Another start-up developer remarks: ‘the industry is making so much money selling established product, there seems to be very little incentive to break out of it and try new stuff’.
  • [p]retty much everyone would rather be working on their own project, some original and creative game’.
  • game workers’ disenchantment with the effects of corporate rationalisation on creativity is often what causes developers to leave their employer – often to launch a start-up.
  • In terms of precarity, one segment of game labour that stands out is “bug catchers”. Game testers, or Quality Assurance (QA) employees, are notorious for being the lowest paid and worst treated workers in the studio system:
  • Many testers make a minimum wage, and at larger developers, are in temporary, contract-based employment. ‘We’re treated no differently than the janitorial or the cafeteria staff, who make more money than us anyway. I’m not belittling other jobs but…’, one tester explains
  • A lot of the QA testers are very angry, because they’ll hear that their bug count isn’t as “high”. But it isn’t fair because certain areas of the game just don’t have any bugs’. One tester says his department is filled with ‘really angry people, because you work fourteen-hour days and we save each game probably millions of dollars
  • game studios begin to experiment in “outsourcing”.
  • as high-technology capitalism rips its course round the world in search of new markets and “cheap” labour, “talent” begins to incubate in the Global South, giving game-capital increased mobility
  • EA, for example, outsources development work to India (Overby, 2003); and EA, Nintendo, and Microsoft, among others, have outsourced game work to a Vietnamese firm
  • a Vietnamese programmer could make about $4000 a year, whereas ‘comparable US talent would earn $70,000-$100,000’
  • But the diffusion of game labour is presenting game corporations with not only discounted but also free labour.
  • Over the last decade or so, “authoring tools” have been increasingly packaged with computer games, helping to foster a vibrant participatory culture of game “modding”, or modification. “Modders” deploy a range of techniques, from changing characters’ appearances – “skins” – and weapons, to designing new scenarios, levels, or missions, up to radical departures that amount to building a whole new game – a “total conversion” – using various authoring tools
  • when young “hardcore” gamers spend their evenings modding a level of a computer game, or sculpting an avatar for a multiplayer virtual world – or, for that matter, contributing to their favourite developer’s online “community” forum – the boundaries between “play” and “content provision” subtly dissolve. They join the legion of ‘free labor’
  • a major source of value creation in the networked economy, as capital learns to digitally tap, outside all boundaries of work-time or -place, a diffuse “collective intelligence”
  • Now development companies often “buy back” successful mods, and hire the teams that created them en masse
  • What’s more, best-selling games like Counter-Strike have been developed by remote modding teams, establishing a profitable precedent of a “virtual studio” model of game development. In that aspect, and in the modes of distributed content provision evidenced by the mod community, free networked labour in the gaming sector is perhaps prototypical of work in what has been dubbed the coming ‘firms without factories’
  • ‘eighty-five hour’ work weeks at EA; of the normalisation of hyper-extended ‘crunch’ time; of the absence of compensation in the form of either ‘overtime’ pay or ‘compensation time’; of the ‘put up or shut up and leave human resources policy’ of EA; of the allegedly ‘illegal’ failure of EA to pay overtime; and of the rapid concentration of ownership in the game development industry.
  • The reverberations of ‘EA: The Human Story’ are only beginning to register as we write this article. At minimum, as one industry commentator put it, ‘the general perception of EA’s overall sliminess has increased exponentially’
  • More substantially, the game industry’s “work as play” mantra is suffering a devastating blow of truth, and game workers have started to rethink their conditions of labour.
  • managers regularly ‘falsified timesheets to avoid paying overtime’
  • IGDA’s gambit, for example, is that studios that reduce their hours will get ‘more productive and creative workers’
  • And only because it is being forced to, EA is promising workplace ‘reform’
  • Generally, though, game studios in North America are very far, culturally and politically, and often geographically, from the traditions of trade unionism
  • unionisation might provide game workers with a form of self-management that extends to greater control over game content, thereby responding to the desire – expressed so often by game workers – to work on ‘more creative projects’
  • four lines of counter-mobilisation involving the immaterial subjectivities that make and play virtual games: digital piracy, autonomous production, tactical games, and simulated counter-planning.
  • These resonate in many respects with the counter-globalisation movement, for example, in an opposition to the commodification of life forms, in a commitment to experiment in alternative modes of human cooperation, and in the elaboration of non-commercial applications of new media
  • The counter-mobilisation of immaterial labour that currently causes industry managers most anxiety is the growing network of game pirates
  • games have a piracy rate of nearly five times that of the music industry
  • pirated product often springs from within development studios themselves.
  • Empire sets in motion potentialities it cannot contain
  • Modders often import content for an altered game from some other pop culture artefact – either from another game, perhaps owned by a company other than the one that made the original game, or from another media, such as a film. In doing this, these modders are constructing a “commons” of images, characters, and themes, in violation of the corporate enclosures that divide them up into carefully policed proprietary domains.
  • Modding, like piracy, carries both potentialities and limitations. Usurping the corporate control over the direction of game development, modders are intriguing figures of autonomous production
  • In other words, modifications don’t necessarily modify much, often only amplifying the spirit of the original game
  • The wide diffusion of game-making know-how, and the availability of easy to use authoring devices, such as Flash, has led to a spate of alternative games that contribute to the circulation and provocation of struggles associated with feminist, counter-globalisation, and anti-war movements
  • But is it possible to envisage more radical horizons for interactive games where they might make a contribution to an “escape option” that would build another, more just and equitable, society? Perhaps.
  • As military training camps and management schools constantly demonstrate, networked simulation is not just a matter of entertainment.
  • But how might capacities for virtual rehearsal and planning be linked to radical social agendas?
  • ‘agoraXchange’, a collaborative open-source game development and art project, has the goal of creating a massively multiplayer online game simulating a future where there has been a radical change in political institutions.
  • The experiments of this playful multitude, as modest and preliminary as some may be, flow into the wider currents of tactical media, hacktivism, free and open-source software, and distributed computing generating tumults throughout the circuits of Empire
  • The ideology of work as fun has given game-capital an effective but increasingly brittle formula for containing and channeling the biopolitical powers of its immaterial workforce
  • but there may yet prove to be more “play” in the system than game-capital ever imagined.
    Quick summary: de Peuter and Dyer-Witheford explore labour issues in the video game industry, explaining how capitalism controls what gets made and limits game developers' creativity. 
Devin Hartley

Small Assignment #2 - 74 views

digh5000 smallassignment2 evaluation
  • Chris Milando
     
    I think the finality of essays is a really interesting topic, and something I think all students need to be thinking about seriously. Coming from the perspective of an English student, I would argue that about half of our program is centered on essay writing, and when we think about the incredibly short life-cycle of our essays, it comes as no surprise that we have difficulty finding jobs once we graduate. If we are not taught how to apply our work outside of the classroom, how can we ever know what we are capable of when we leave the security of our schools? Schools give us an audience (even if it is only an audience of one - the professor who grades our work), but once we graduate, it becomes very difficult to know who will have interest in reading our work, where to publish our work, and most importantly - what to do with our work.
    So I think that thinking critically about this topic goes beyond our understanding of how our work is different in its digital format (i.e. how they can function as databases for topic mining or distant reading through tools, etc.), but what we can (and have to) do with this new format.
    That being said, do you (or anyone else from the class) think that the finality of essays will be "fixed" in the way we will begin to read essays differently (by posting them online and allowing discussion and discourse to be created around them), or do you think that we can (or should) even change the way that they are written in the first place?
    Just as Mark Sample wants to do away with essays entirely, do you think that we can counter finality by re-inventing the way we communicate our work? Would an English or Film Studies student's work garner a bigger audience by vlogging their analysis? Kelly Schrum, in her article "A Tale of Two Goldfish Bowls . . . Or What's Right with Digital Storytelling" states that "Several students adapted this approach to weekly assignments, submitting vlogs in place of blog postings. The blog discussion on copyright was thoughtful and lively, but [one student]'s vlog on the topic accomplished what a text-blog could not".
    Tad Suiter's video discusses how vlogs can be used effectively (this link will make the video start right where he starts talking about it) http://youtu.be/rpe9c7BVPfo?t=4m19s, and I was thinking about this in relation to our own blog posts here on Diigo.
    As you can see, I've only responded to your first point (the finality of essays), partially because it was the topic I was the most interested in, but also because I wanted to demonstrate the problem I think online discussion is going to have (for our class and all classes in general).
    As much as I want to reply to everybody's Assignments, the problem is that I simply don't have time - and I doubt anybody else in this class does either. I think online discussion can work very effectively, but when we all have a bunch of readings to do, assignments to mark (for those of us who are TAs) and blog posts of our own to write, it's hard to read through everyone else's assignments and respond with thoughtful critiques of them all.
    While I really liked Cathy Davidson's idea of crowdsourcing grading to her students, I think that assignments would need to be very short in order for us to be able to carefully read through all of them.

    To really exemplify how problematic online discussion can be, I have to admit that it's probably not very likely that many of you (if any) will respond to this post and my question. As I mentioned, it's hard to find time to read through everyone's posts, and I think our own discussion will help us to both experience and observe the issues busy students face. Although I think online discussion is certainly the answer to eradicating the "finality" of essays, I don't think it's going to be possible within the format we're using. Maybe shorter posts would help, but I wonder if the answer is something like a vlog (as Schrum suggested)?

    I think it is also important to think about whether our work will continue to be read and discussed outside of our class, and even after this semester is over. Although theoretically, posting our work online does eradicate its finality (since it has the possibility of always being read and commented on), in practice, I think that most of our work will cease to be read once the class it was written for is over. I'm certain that none of us will continue to comment on these blog posts after this semester, and this is a problem.
    I think that our work needs to always continue to grow and change (to mimic the nature of the medium it is posted on - the internet), and we don't yet have a framework for online assignments to allow this (or, we're just not ready for this type of never-ending discourse and continual growth).

    So, I wonder what everyone thinks about these questions:
    Would continual online discussion work best if we had a word limit for our posts (like 150-200 words)?
    Do we need to adopt vlogs or something that will allow us to grab the attention of other readers/viewers more easily?
    Does the work for our classes need to transcend the audience of our classmates?
    Is it important for our work to never stop growing and changing (and thus, important to ensure that our work is never really "finalized")?
  • Chris Milando
     
    Alessandro, I certainly agree - I think having a week to test brief on-line discussion would be a great idea, especially if we can then discuss the benefits/drawbacks of our experience in-class as you suggested.

    I also agree that we certainly can't assume everyone will be interested in our undergrad papers, and I've been thinking about what kind of platform our work could/should be published on (with this in mind).
    I think that what we need is a database that functions similarly to Jstor, where students who have an interest in a certain topic can search any work that has been done on it (and this would allow much more work to be available - and much more quickly - since the peer review process would not stand in the way).
    The homepage for this site could feature the latest essays/vlogs/etc. posted, and a voting system (similar to Reddit's) that allows both comments and reviews to be written for each paper, as well as allowing the most voted works to appear at the top of a search.
    The reason I'm suggesting something like this is because you're exactly right - not everyone will be interested in what a 2nd or 3rd year student has to say, but some might, and those that are will be searching for essays related to their topic of interest anyway. I think this type of database would be the best way to ensure our essays have continued relevance long after they are graded: they would be open for discourse and criticism, available to those interested in their topic rather than promoted blindly to users who have no interest in them, and they could help us establish - as an academic community - what work is relevant, useful and well-written and what is not.
    A platform like this could even provide students with top-rated essays something to put on their CVs (which is important, because a large number of undergrad students graduate without anything published or worth mentioning on their CVs).
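    Purely as an illustration of the platform sketched above, here is a minimal Python mock-up of a Reddit-style voting and topic search. Everything here (the `Essay` class, the sample titles and topics, the net-score ranking) is hypothetical, invented only to show how most-voted works could "appear at the top of a search":

    ```python
    from dataclasses import dataclass

    @dataclass
    class Essay:
        """A hypothetical record in the imagined student-essay database."""
        title: str
        topics: set
        upvotes: int = 0
        downvotes: int = 0

        @property
        def score(self) -> int:
            # Net score, in the spirit of Reddit-style voting.
            return self.upvotes - self.downvotes

    def search(essays, topic):
        """Return essays tagged with `topic`, highest-scored first."""
        matches = [e for e in essays if topic in e.topics]
        return sorted(matches, key=lambda e: e.score, reverse=True)

    # Invented sample data for demonstration only.
    essays = [
        Essay("Close Reading in the Digital Age", {"digital humanities"}, 12, 3),
        Essay("Games as Historical Argument", {"video games", "history"}, 8, 1),
        Essay("Modding and Free Labour", {"video games"}, 20, 4),
    ]

    for e in search(essays, "video games"):
        print(e.title, e.score)
    ```

    A real platform would of course need accounts, access control (as discussed below in this thread) and a less gameable ranking, but the core idea - filter by topic, order by community votes - is this simple.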
  • Chris Milando
     
    And Devin, not to worry - I only responded to your first point in your last post, so it's not a problem at all. If anything, I guess we're getting good experience in brief discussion and being able to focus on what's important!
    I'll email Professor Greenspan and see what he thinks - I hope he won't mind?
    I find that, as more and more (and more) essays, dissertations and books are published each year, it's becoming impossible for any single student to read and engage with every source related to their topic, and I think the same logic applies to online class discussion. It's simply impossible for any one student to read and respond to everything their classmates post, so brief discussion allows discourse to form around what is most important without burdening us too much amidst our other responsibilities.

    I also definitely agree with you - I think vlogs would be useless unless they're used the right way (which means they have to be both entertaining and able to do more than they could as written text). In order for us to understand how vlogs function, how to make them interesting and how to employ their unique capabilities, I think we would need some training to ensure we use them effectively.
    For our purposes, that makes it really difficult because (to my knowledge) we don't have any professors who could give us a crash-course on vlogging, but at the same time, if no one is available to teach us, maybe it's best to just try it and (possibly) fail?
    After all, isn't DH all about trying new things and being open to failure and criticism?

    To respond to your last point, I think this question definitely needs to be answered before creating anything like the platform I suggested, and I'm not quite sure where I stand on this.
    On the one hand, I think it has to be open to everyone, as potential future employers (who want to see examples of our work) and scholars (who could use our work) need to have access to it. At the same time, if we make access open to everybody, I feel that the comment section would succumb very quickly to the trolls from YouTube and 4chan.
    That being said, I think it would make the most sense to allow everyone access to the essays/multimedia work themselves, but only allow certain people the ability to contribute (i.e. the ability to vote, review and comment).
    Who gets the ability to contribute? I'm not sure. I guess students and faculty from around the world could log in (the same way we log in to Jstor), but I think that would exclude a lot of college/university alumni who may want to contribute as well, so I'm not sure what sort of access system would make the most sense.
Chris Milando

Debates in the Digital Humanities - 3 views

  • The alternativeness of careers in digital humanities has in fact been a subject of long debate and much concern; many early researchers in what was then termed “humanities computing” were located in liminal and academically precarious institutional spaces
  • how and whether this domain could become a discipline, with its own faculty positions and academic legitimation.
  • And although those faculty positions and degree programs are starting to appear, many jobs in what is now called “digital humanities” are still para-academic, though their funding and institutional position has been consolidated somewhat
  • The phrase “alternate careers” is thus remarkable at second glance not for suggesting that there are alternatives but for the centrality it still accords to those academic careers that are not alternate. This centrality is not just an effect of graduate study and not only perceptible within the academy; it shapes the way universities are understood as workplaces even by those who stand outside them.
  • strongly defined intellectual and professional career trajectory that, as Alan Liu astutely observes in The Laws of Cool, may no longer be characteristic of modern knowledge work: “to be a professional-managerial-technical worker now is to stake one’s authority on an even more precarious knowledge that has to be re-earned with every new technological change.
  • These “alternative” or “para-academic” jobs within the academy have a great deal to teach us about how academic labor is quantified, about different models of work and work product, and about the ways that aptitude, skill, expertise, and productivity are weighed in assessing different kinds of work.
  • the significant parameters were essentially these. My pretax income for the academic year was $12,500, and my formal work responsibilities were to prepare and teach two undergraduate writing courses of my own design. The time commitment for my teaching responsibilities was assumed to be approximately twenty hours per week. In addition, it was assumed that I would undertake my own research and make progress toward my PhD.
  • the research I conducted as a student (preparing for professional advancement through field exams, writing conference papers, and participating in the intellectual life of the department by attending public lectures and university seminars) was not considered work, or at least not compensable work.
  • Students are positioned as net gainers from, rather than contributors to, the reservoir of knowledge the university contains, and the fellowship stipends they receive are characterized as “aid” rather than as compensation
  • I was accountable for all my time to the PhD program I was in, not just for my paid duties or even for a standard forty-hour work week, but potentially all the hours not devoted to sleeping and eating.
  • this erosion of a boundary between the professional and personal space is a familiar and very common effect of graduate study, and (even more anecdotally) I would observe that the people who typically enter a graduate program are likely to have the kind of personality that lends itself to this erosion: highly motivated with a strong sense of duty and an established habit of hard work and deferral of personal pleasure (or an ability to experience hard work as pleasure)
  • I tended to feel that the research work required of me was effectively limitless: that no amount of effort could be sufficient to really complete it and that therefore no time could legitimately be spent on anything else.
  • Each hour of project work, in other words, stood on the back of a fairly substantial apparatus that was necessary to make that hour possible. Without the e-mail, the payroll, the servers, and so forth, project work wouldn’t be possible. However, for many collaborators and funding agencies, this model appeared not only counterintuitive but deeply troubling because it made our work look much more expensive than anyone else’s
  • Running in parallel to this entire narrative is another with an entirely different developmental trajectory. Since 2000, my partner and I have had a small consulting business through which we have worked on an eclectic range of projects, ranging from simple database development to digital publication to grant writing
  • Almost all our projects have some connection with digital tools, formats, or activities,4 but it is not our purely digital expertise that is most important in these projects but rather our digital humanities expertise: in the sense that our literacy in a range of humanities disciplines and our skills in writing, strategic planning, and information design are essential in making our digital expertise useful to our clients
  • one client said that what she found valuable about our intervention was that it mediated usefully between purely technical information on the one hand (which did not address her conceptual questions) and purely philosophical information on the other (which failed to address the practicalities of typesetting and work flow)
  • The value of this kind of consulting work—for both the consultant and the client—is the self-consciousness it provides concerning the nature of the work being done and the terms on which it is conducted
  • For the client, self-consciousness results from having to bring all of this to articulation, and the result is often a better (because more explicit, transparent, and widely shared) set of intellectual configurations within the client’s project or environment
  • For instance, work processes might be explicitly documented; latent disagreements might be brought to the surface and resolved; methodological inconsistencies or lacunae might be examined and rationalized.
  • it is interesting to observe that digital humanities, as an institutional phenomenon, has evolved very substantially out of groups that were originally positioned as “service” units and staffed by people with advanced degrees in the humanities: in other words, people with substantial subject expertise who had gravitated toward a consulting role and found it congenial and intellectually inspiring. The research arising out of this domain, at its most rigorous and most characteristic, is on questions of method.
  • our technical expertise (in this case, familiarity with markup languages and XML publishing) had an obvious relevance and importance, but arguably more important was the ability to understand and explain the editorial significance of technical decisions and to serve as a bridge between the two strands of the project: the project’s editorial work (conducted by senior humanities faculty) and the project’s technical implementation (overseen by professional staff at the MLA who manage the production of the editions in print and digital form but for whom the XML is largely unfamiliar terrain).
  • The discourse around the use of XML was substantially instrumental: it concerned the practicalities of supporting a digital interface and generating PDF output and similar issues. Treating this work as information modeling, however, has produced a subtle shift in these relationships.
  • Where in the print production process the editorial manuscript was taken as the most informationally rich artifact in the ecology (whose contents would be translated into an effective print carrier for those ideas), in the digital process the editorial manuscript is a precursor to that state: the XML encoding brings information structures that are latent or implicit in the manuscript into formal visibility.
  • what has proven most useful (and what students most remark on in their evaluations of the class) is the kind of embedded knowledge I represent: the understanding of methods, approaches, and strategies that arise out of real-world experience at a functioning digital publication project
  • The course I teach covers a number of highly technical subjects (schema writing, XML, metadata), but its emphasis is strongly on how we can understand the significance and contextual utility of these technologies within a set of larger strategic concerns. Although on paper I only became a plausible hire with the completion of my PhD, the credential that really grounds the teaching I do is actually the fifteen years I spent not completing that degree and working instead in the variety of roles detailed earlier.
  • for the typical humanities faculty member, most of these paradigms of work are equally alien; only the first will look truly familiar (the adjunct faculty position is familiar but not to be identified with).
  • what characterizes mainstream academic work is two qualities. The first is the unlimitedness of the responsibility: work interpenetrates life, and we do what is necessary. For instance, we attend conferences without there being a question of whether it’s our “own” time or our employer’s time;
  • The second, related characteristic is the way time is conceptualized as a function of work practice. Time for academics is not regulated in detail, only in blocks. (For nine months you are paid; for three months you are free to do other things; at all times you should be working on your next book.) Most digital humanities work, however—as performed by library staff, IT staff, and other para-academic staff who are not faculty—is conceptualized according to one of the other models: hourly, by FTE, or as an agenda of projects that granularizes and regulates the work in quantifiable ways. Increasingly, the use of project management tools to facilitate oversight and coordination of work within IT organizations has also opened up the opportunity to track time, and this has fostered an organizational culture in which detailed managerial knowledge of time spent on specific tasks and on overhead is considered virtuous and even essential.
  • The importance of qualitative rather than quantitative measures of work is similarly a kind of class marker: the cases in which specific metrics are typically applied (e.g., number of students and courses taught, quantity of committee work) are those that are least felt to be characteristically scholarly work. Quantifying scholarly output can only be done at the crudest level (e.g., number of books or articles published), and the relative and comparative nature of these assessments quickly becomes apparent: a monumental, groundbreaking book is worth much more (but how much more?) than a slighter intervention, and it takes a complex apparatus of review to establish, even approximately, the relative value of different scholarly productions.
  • In particular, I wonder whether the digital humanities may cease to operate as a locus of metaknowledge if (or, less optimistically, when) digital modes of scholarship are naturalized within the traditional disciplines.
  • the tension between quantitative and qualitative measures of productivity was a constant source of methodological self-consciousness.
  • This last formulation—accomplishing the same task with available resources—reverses the narrative of academic work that is on view at liberal arts colleges and research universities, in which a thoughtful person pursues his or her original ideas and is rewarded for completing and communicating them. In this narrative, the defining and motivating force is the individual mind, with its unique profile of subject knowledge and animating research vision.
  • The managerial consciousness turns this narrative on its head by suggesting that in fact the task and available resources are the forces that most significantly define our work and that the choice of person is almost a casual matter that could go one way or another without much effect on the outcome.
  • the effect of this model of work is to treat people as resources—as a kind of pool from which one can draw off a quantum of work when needed. The result of this fractionalization may be felt as a positive or negative effect: either of fragmented attention or of fascinating variety. But in either case it constitutes a displacement of autonomy concerning what to work on when and how long to take
  • What is the effect of this fungibility, this depersonalization of labor on the para-academic staff? What is my life like as a worker (and a self-conscious manager) in these conditions?
  • Our expectations of what work should be like are strongly colored by the cultural value and professional allure of research, and we expect to be valued for our individual contributions and expertise, not for our ability to contribute a seamless module to a work product. Our paradigm for professional output is authorship, even if actual authoring is something we rarely have enough time to accomplish.
  • But in 2025, what will the now-commonplace jobs (web programmer, digital project coordinator, programmer/analyst, and so forth) look like as professional identities, especially to people who may never have imagined themselves as scholars in the first place?
  • What are the larger effects of accounting for time and regulating it in these ways? One important effect is that time and work appear fungible and interconvertible. The calculus of time and effort by which we know the cost and value of an hour of an employee’s time is also the basis for assessing how those resources could be used otherwise. On the spreadsheet that tracks the project, that unit of funding (time, product) could be spent to purchase an equivalent quantum of time or product from some other source: from a vendor, from an undergraduate, from a consultant, from an automated process running on an expensive piece of equipment.
  • Will a new set of credentials arise through which these jobs can be trained for and aimed at, avoiding the sense of professional anomaly that (in my experience at least) produces such a useful form of outsiderism?
  • For most PhD candidates, the idea of accepting a job other than a tenure-track faculty position is tantamount to an admission of failure. The reason why Mr. Silva assumed that I was Professor Flanders—the reason that no alternative is visible to him—is that no alternative can be articulated by the profession itself.
  • And yet the vast preponderance of actual work involved in creating humanities scholarship and scholarly resources is not done by faculty.
  • As we already noted, for every hour of scholarly research in an office or library, countless other hours are spent building and maintaining the vast research apparatus of books, databases, libraries, servers, networks, cataloguing and metadata standards, thesauri, and systems of access.
  • If the academic mission, in its broadest sense, is worth doing, all parts of it are worth doing.
  • I think one of the most interesting effects of the digital humanities upon academic job roles is the pressure it puts on what we think of as our own proper work domains.
  • In the archetypal digital humanities collaboration, traditional faculty explore forms of work that would ordinarily look “technical” or even menial (such as text encoding, metadata creation, or transcription); programmers contribute to editorial decisions; and students coauthor papers with senior scholars in a kind of Bakhtinian carnival of overturned professional usages.
  • For technical staff, these collaborative relationships produce a much richer intellectual context for their work and also convey a sense of the complexity of humanities data and research problems, which in turn makes for better, more thoughtful technical work. For students, the opportunity to work on real-world projects with professional collaborators gives unparalleled exposure to real intellectual problems, job demands, and professional skills across a wide range of roles, which in turn may yield a more fully realized sense of the landscape of academic work.
  • With these benefits in mind, there are a few things that we can do to encourage these interactions and to develop a professional academic ecology that is less typecast, that obscures less thoroughly the diversity of working roles that contribute to the production of scholarship (digital or not):
  • Make it practically possible and professionally rewarding (or, at the very least, not damaging) for graduate students to hold jobs while pursuing advanced degrees. This would involve rethinking our sense of the timing of graduate study and its completion: instead of rushing students through coursework, exams, and dissertations only to launch them into a holding pattern (potentially for several years) as postdocs, finished but still enrolled students, or visiting assistant lecturers, graduate programs would need to allow a bit more time for the completion of the degree and ensure that students graduate with some diversity of skills and work experience.
  • Devote resources to creating meaningful job and internship opportunities at digital humanities research projects, scholarly publications, conferences, and other professional activities with the goal of integrating students as collaborators into these kinds of work at the outset.
  • Encourage and reward coauthoring of research by faculty, students, and para-academic staff. This involves actions on the part of departments (to create a welcoming intellectual climate for such work) and on the part of journals, conferences, and their peer review structures to encourage and solicit such work and to evaluate it appropriately.
    Julia Flanders explores what "work" means within academia and what counts as payable labour, compared with the work that must be done first, unpaid and on our own time. She discusses ways of redefining academic labour, what (and whom) it involves, and strategies for changing the relationships between students, faculty, and para-academic staff.
Chris Milando

» Napster, Udacity, and the Academy, by Clay Shirky - 3 views

  • How did the recording industry win the battle but lose the war? How did they achieve such a decisive victory over Napster, then fail to regain control of even legal distribution channels?
  • Hey kids, Alanis Morisette just recorded three kickin’ songs! You can have them, so long as you pay for the ten mediocrities she recorded at the same time.
  • Napster told us a different story. Napster said “You want just the three songs? Fine.
  • ...36 more annotations...
  • They just couldn’t imagine—and I mean this in the most ordinarily descriptive way possible—could not imagine that the old way of doing things might fail.
  • Once you see this pattern—a new story rearranging people’s sense of the possible, with the incumbents the last to know—you see it everywhere. First, the people running the old system don’t notice the change. When they do, they assume it’s minor. Then that it’s a niche. Then a fad. And by the time they understand that the world has actually changed, they’ve squandered most of the time they had to adapt.
  • Higher education is now being disrupted; our MP3 is the massive open online course (or MOOC), and our Napster is Udacity, the education startup.
  • We have several advantages over the recording industry, of course. We are decentralized and mostly non-profit. We employ lots of smart people. We have previous examples to learn from, and our core competence is learning from the past. And armed with these advantages, we’re probably going to screw this up as badly as the music people did.
  • A massive open online class is usually a series of video lectures with associated written materials and self-scoring tests, open to anyone. That’s what makes them OOCs. The M part, though, comes from the world. As we learned from Wikipedia, demand for knowledge is so enormous that good, free online materials can attract extraordinary numbers of people from all over the world.
  • Last year, Introduction to Artificial Intelligence, an online course from Stanford taught by Peter Norvig and Sebastian Thrun, attracted 160,000 potential students, of whom 23,000 completed it, a scale that dwarfs anything possible on a physical campus.
  • The size of Thrun and Norvig’s course, and the attention attracted by Udacity (and similar organizations like Coursera, P2PU, and University of the People), have many academics worrying about the effect on higher education. The loudest such worrying so far has been The Trouble With Online Education,
  • As most critics do, Edmundson focussed on the issue of quality, asking and answering his own question: “[C]an online education ever be education of the very best sort?”
  • Higher education has a bad case of cost disease
  • But you know what? Those classes weren’t like jazz compositions. They didn’t create genuine intellectual community. They didn’t even create ersatz intellectual community. They were just great lectures: we showed up, we listened, we took notes, and we left, ready to discuss what we’d heard in smaller sections.
  • The large lecture isn’t a tool for producing intellectual joy; it’s a tool for reducing the expense of introductory classes.
  • “Why would anyone take an online class when they can buy a better education at UVA?” But who faces that choice? Are we to imagine an 18 year old who can set aside $250K and 4 years, but who would have a hard time choosing between a residential college and a series of MOOCs? Elite high school students will not be abandoning elite colleges any time soon; the issue isn’t what education of “the very best sort” looks like, but what the whole system looks like.
  • An organization with cost disease can use lower paid workers, increase the number of consumers per worker, subsidize production, or increase price. For live music, this means hiring less-talented musicians, selling more tickets per performance, writing grant applications, or, of course, raising ticket prices. For colleges, this means more graduate and adjunct instructors, increased enrollments and class size, fundraising, or, of course, raising tuition.
  • Cheap graduate students let a college lower the cost of teaching the sections while continuing to produce lectures as an artisanal product, from scratch, on site, real time.
  • The minute you try to explain exactly why we do it this way, though, the setup starts to seem a little bizarre. What would it be like to teach at a university where you could only assign books you yourself had written? Where you could only ask your students to read journal articles written by your fellow faculty members? Ridiculous. Unimaginable.
  • We ask students to read the best works we can find, whoever produced them and where, but we only ask them to listen to the best lecture a local employee can produce that morning. Sometimes you’re at a place where the best lecture your professor can give is the best in the world. But mostly not.
  • As Ian Bogost says, MOOCs are marketing for elite schools.
  • Any sentence that begins “Let’s take Harvard as an example…” should immediately be followed up with “No, let’s not do that.”
  • Any institution that tries to create a cost-effective education will move down the list.
  • Outside the elite institutions, though, the other 75% of students—over 13 million of them—are enrolled in the four thousand institutions you haven’t heard of
  • And the only thing that kept this system from seeming strange was that we’ve never had a good way of publishing lectures.
  • Clayton State educates as many undergraduates as Harvard. Saint Leo educates twice as many. City College of San Francisco enrolls as many as the entire Ivy League combined. These are where most students are, and their experience is what college education is mostly like.
  • The fight over MOOCs isn’t about the value of college; a good chunk of the four thousand institutions you haven’t heard of provide an expensive but mediocre education.
  • The fight over MOOCs isn’t even about the value of online education. Hundreds of institutions already offer online classes for credit, and half a million students are already enrolled in them. If critics of online education were consistent, they would believe that the University of Virginia’s Bachelor of Interdisciplinary Studies or Rutgers’ MLIS degree are abominations, or else they would have to believe that there is a credit-worthy way to do online education, one MOOCs could emulate. Neither argument is much in evidence.
  • The fight over MOOCs is really about the story we tell ourselves about higher education: what it is, who it’s for, how it’s delivered, who delivers it.
  • How will we teach complex thinking and skills? How will we turn adolescents into well-rounded members of the middle class? Who will certify that education is taking place? How will we instill reverence for Virgil? Who will subsidize the professor’s work?
  • The possibility MOOCs hold out isn’t replacement; anything that could replace the traditional college experience would have to work like one, and the institutions best at working like a college are already colleges. The possibility MOOCs hold out is that the educational parts of education can be unbundled. MOOCs expand the audience for education to people ill-served or completely shut out from the current system, in the same way phonographs expanded the audience for symphonies to people who couldn’t get to a concert hall, and PCs expanded the users of computing power to people who didn’t work in big companies.
  • Those earlier inventions started out markedly inferior to the high-cost alternative: records were scratchy, PCs were crashy. But first they got better, then they got better than that, and finally, they got so good, for so cheap, that they changed people’s sense of what was possible.
  • For people used to dealing with institutions that go out of their way to hide their flaws, this makes these systems look terrible at first. But anyone who has watched a piece of open source software improve, or remembers the Britannica people throwing tantrums about Wikipedia, has seen how blistering public criticism makes open systems better. And once you imagine educating a thousand people in a single class, it becomes clear that open courses, even in their nascent state, will be able to raise quality and improve certification faster than traditional institutions can lower cost or increase enrollment.
  • If this happens, Harvard will be fine. Yale will be fine, and Stanford, and Swarthmore, and Duke. But Bridgerland Applied Technology College? Maybe not fine. University of Arkansas at Little Rock? Maybe not fine. And Kaplan College, a more reliable producer of debt than education? Definitely not fine.
  • Udacity may or may not survive, but as with Napster, there’s no containing the story it tells: “It’s possible to educate a thousand people at a time, in a single class, all around the world, for free.”
  • In the US, an undergraduate education used to be an option, one way to get into the middle class. Now it’s a hostage situation, required to avoid falling out of it. And if some of the hostages having trouble coming up with the ransom conclude that our current system is a completely terrible idea, then learning will come unbundled from the pursuit of a degree just as songs came unbundled from CDs.
  • Open systems are open.
    • Chris Milando
       
      I really like this point. I want to eventually host my own online course, and I think it would be great to have criticism! I come from a writing background, so I know how powerful and wonderful criticism can be. I was in a writers' circle a few years ago, and we used to completely tear down each other's work. But when we rewrote our stories (with that criticism in mind), they were /always/ much stronger than they had ever been. For something like a massive online course to work, it has to work /well/, so if criticism can help bring it to the level of quality it needs (to provide the justification for its existence), then that is what we need to employ! I, for one, welcome criticism in everything that I do (so long as it is constructive). The only way to improve is through criticism, and since we are literary scholars - whose degrees are based on criticizing the work of others - I find it very odd (and wrong) that we cannot take criticism of our own work.
  • The cost of attending college is rising above inflation every year, while the premium for doing so shrinks. This obviously can’t last, but no one on the inside has any clear idea about how to change the way our institutions work while leaving our benefits and privileges intact.
  • In the academy, we lecture other people every day about learning from history. Now it’s our turn, and the risk is that we’ll be the last to know that the world has changed, because we can’t imagine—really cannot imagine—that the story we tell ourselves about ourselves could start to fail.