
CTLT and Friends: Group items tagged grading


Gary Brown

Ethics? Let's Outsource Them! - Brainstorm - The Chronicle of Higher Education - 4 views

  • Many students are already buying their papers from term-paper factories located in India and other third world countries. Now we are sending those papers back there to be graded. I wonder how many people are both writing and grading student work, and whether, serendipitously, any of those people ever get the chance to grade their own writing.
  • The great learning loop of outcomes assessment is neatly “closed,” with education now a perfect, completed circle of meaningless words.
  • With outsourced grading, it’s clearer than ever that the world of rubrics behaves like that wicked southern plant called kudzu, smothering everything it touches. Certainly teaching and learning are being covered over by rubrics, which are evolving into a sort of quasi-religious educational theory controlled by priests whose heads are so stuck in playing with statistics that they forget to try to look openly at what makes students turn into real, viable, educated adults and what makes great, or even good, teachers.
  • Writing an essay is an art, not a science. As such, people, not instruments, must take its measure, and judge it. Students have the right to know who is doing the measuring. Instead of going for outsourced grading, Ms. Whisenant should cause a ruckus over the size of her course with the administration at Houston. After all, if she can’t take an ethical stand, how can she dare to teach ethics?
  • "People need to get past thinking that grading must be done by the people who are teaching.” Sorry, Mr. Rajam, but what you should be saying is this: Teachers, including those who teach large classes and require teaching assistants and readers, need to get past thinking that they can get around grading.
  •  
    the outsourcing loop becomes a diatribe against rubrics...
  •  
    It's hard to see how either outsourced assessment or harvested assessment can be accomplished convincingly without rubrics. How else can the standards of the teacher be enacted by the grader? From there we are driven to consider how, in the absence of a rubric, the standards of the teacher can be enacted by the student. Is it "ethical" to use the Potter Stewart standard: "I'll know it when I see it"?
  •  
    Yes, who is the "priest" in the preceding rendering--one who shares principles of quality (rubrics), or one who divines a grade and proclaims who is a "real, viable, educated adult"?
Gary Brown

Outsourced Grading, With Supporters and Critics, Comes to College - Teaching - The Chro... - 3 views

shared by Gary Brown on 06 Apr 10
  • Lori Whisenant knows that one way to improve the writing skills of undergraduates is to make them write more. But as each student in her course in business law and ethics at the University of Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it became clear to her that what would really help them was consistent, detailed feedback.
  • She outsourced assignment grading to a company whose employees are mostly in Asia.
  • The graders working for EduMetry, based in a Virginia suburb of Washington, are concentrated in India, Singapore, and Malaysia, along with some in the United States and elsewhere. They do their work online and communicate with professors via e-mail.
  • The company argues that professors freed from grading papers can spend more time teaching and doing research.
  • "This is what they do for a living," says Ms. Whisenant. "We're working with professionals." 
  • Assessors are trained in the use of rubrics, or systematic guidelines for evaluating student work, and before they are hired are given sample student assignments to see "how they perform on those," says Ravindra Singh Bangari, EduMetry's vice president of assessment services.
  • Professors give final grades to assignments, but the assessors score the papers based on the elements in the rubric and "help students understand where their strengths and weaknesses are," says Tara Sherman, vice president of client services at EduMetry. "Then the professors can give the students the help they need based on the feedback."
  • The assessors use technology that allows them to embed comments in each document; professors can review the results (and edit them if they choose) before passing assignments back to students.
  • But West Hills' investment, which it wouldn't disclose, has paid off in an unexpected way. The feedback from Virtual-TA seems to make the difference between a student's remaining in an online course and dropping out.
  • Because Virtual-TA provides detailed comments about grammar, organization, and other writing errors in the papers, students have a framework for improvement that some instructors may not be able to provide, she says.
  • "People need to get past thinking that grading must be done by the people who are teaching," says Mr. Rajam, who is director of assurance of learning at George Washington University's School of Business. "Sometimes people get so caught up in the mousetrap that they forget about the mouse."
Gary Brown

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 1 views

  • So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities.
  • This method has revolutionized the way I grade. No longer do I have to keep track of how many points are deducted from which type of misstep on what problem for how many students. In the past, I often would get through several tests before I realized that I wasn’t being consistent with the deduction of points, and then I’d have to go through and re-grade all the previous tests. Additionally, the rubric method encourages students to refer to a solution, which I post after the test is administered, and they are motivated to meet with me in person to discuss why they got a 2 versus a 3 on a given problem, for example.
  • This opens up the opportunity to talk with them personally about their problem-solving skills and how they can better them. The emphasis is moved away from point-by-point deductions and is redirected to a more holistic view of problem solving.
  •  
    In the heart of the home of the concept inventory--Physics
Joshua Yeidel

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 0 views

  •  
    "So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities. "
  •  
    A focus on improving the grading experience, rather than the learning experience, but still a big step forward for (some) hard scientists.
Gary Brown

Thoughts on the "Problem" of Grade Inflation | Sener Learning Services - 0 views

  • grades have little correlation with adult life achievement, or accomplishment, with postgraduate earnings (at least for the first three years), with actual learning, even with future employment in many fields. In the latter case, they are used mostly as a screening device rather than as an indicator of merit. The screen has expanded for the same reasons that professional sports have expanded their pools of playoff teams (think major league baseball "wild cards", or soccer teams that finish 3rd and 4th place in their national leagues qualifying for (and recently winning) the UEFA Champions League). Grades are also in their present state because their original purposes are no longer valid (assuming that they ever were, which is in itself dubious). The need is no longer to extract the cream and exclude the rest; it is to figure out how to effectively educate as many learners as possible.
  •  
    An insightful blog post with citations of great use
Gary Brown

Remaking the Grade, From A to D - Commentary - The Chronicle of Higher Education - 0 views

shared by Gary Brown on 14 Sep 09
  • I have found that faculty members sometimes conflate quiet compliance with proficiency. That sends the message to students—female students in particular—that the path to success is acquiescence rather than achievement.
  •  
    Reeves laments the problematic arithmetic of grades and underscores a phenomenon we have referenced as a focus on "Academic Manners." "I have found that faculty members sometimes conflate quiet compliance with proficiency. That sends the message to students-female students in particular-that the path to success is acquiescence rather than achievement."
Joshua Yeidel

Court fails Toronto professor's grading on a budget - 0 views

  •  
    "A University of Toronto professor who got students to grade their peers' work has seen the practice blocked by the Ontario Superior Court of Justice. The union that represents teaching assistants and sessional instructors at the university filed a grievance against the university when it discovered that psychology professor Steve Joordens was using specially designed software to have students grade and comment on one another's written work."
Gary Brown

Web Site Lets Students Bet on What Grades They'll Earn - Wired Campus - The Chronicle o... - 0 views

shared by Gary Brown on 11 Aug 10
  • Students can make a small bet on how well they'll do in a course, with a starting limit of $25 on how much they can earn. The students contribute a chunk of the money, and Ultrinsic puts up the rest. If they make the grade, they win it all.
  • In 2009, they piloted the idea with a different model that put students in the same course in direct competition with each other. Last year, about 600 students from the University of Pennsylvania and New York University, the first two campuses where the company's most recent iteration became available, made wagers on Ultrinsic.
  •  
    Betting on the perception that school is a game...
Gary Brown

Details | LinkedIn - 0 views

  • I'm interested to hear from you how you arrive at a grade A, B, B+ etc. I assume from reading the various postings here that you use numerical marking (ratio scale) for different criteria to reach a final grade? At our institute we have been using ordinal scales for some time but now find that these are too broad to do justice to the quality of the work that students submit. On the other hand, using a direct ratio scale seems a daunting task for a lot of people and according to the literature is difficult to deal with. I would appreciate hearing about your opinions and experiences.
  •  
    A transparency in grading discussion that hits on an area of interest. Hmmm
Theron DesRosier

pagi: eLearning - 0 views

  • ePortfolio; ePortfolios, the Harvesting Gradebook, Accountability, and Community (!!!); Harvesting gradebook; Learning from the transformative grade book; Implementing the transformed grade book; Transformed gradebook worked example (!!); Best example: Calaboz ePortfolio (!!); Guide to Rating Integrative & Critical Thinking (!!!); Grant Wiggins, Authentic Education; Hub and spoke model of course design (!!!); ePortfolio as the core learning application; Case Studies of Electronic Portfolios for Learning
  •  
    Nils found this. It is a Spanish concept map on eLearning that includes CTLT and the Harvesting Gradebook.
Gary Brown

The Wired Campus - Duke Professor Uses 'Crowdsourcing' to Grade - The Chronicle of High... - 0 views

  • Learning is more than earning an A, says Cathy N. Davidson, the professor, who recently returned to teach English and interdisciplinary studies after eight years in administration. But students don't always see it that way. Vying for an A by trying to figure out what a professor wants or through the least amount of work has made the traditional grading scale superficial, she says. "You've got this real mismatch between the kind of participatory learning that’s happening online and outside of the classroom, and the top-down, hierarchical learning and rigid assessment schemes that we’re using in the classroom from grades K through 12 and all the way up to graduate school," Ms. Davidson says. "In school systems today, we’re putting more and more emphasis on quantitative assessment in an era when, out of the classroom, students are learning through an entirely different way of collaboration, customizing, and interacting."
  •  
    We need to contact Cathy Davidson and work together on this.
Nils Peterson

Change Magazine - The New Guys in Assessment Town - 0 views

  • if one of the institution’s general education goals is critical thinking, the system makes it possible to call up all the courses and programs that assess student performance on that outcome.
  • bringing together student learning outcomes data at the level of the institution, program, course, and throughout student support services so that “the data flows between and among these levels”
  • Like its competitors, eLumen maps outcomes vertically across courses and programs, but its distinctiveness lies in its capacity to capture what goes on in the classroom. Student names are entered into the system, and faculty use a rubric-like template to record assessment results for every student on every goal. The result is a running record for each student available only to the course instructor (and in some cases to the students themselves, who can go to the system to get feedback on recent assessments).
    • Nils Peterson
       
      Sounds like the harvesting gradebook: assess student work and roll up.
    • Joshua Yeidel
       
      This system has some potential for formative use at the per-student level.
  • “I’m a little wary.  It seems as if, in addition to the assessment feedback we are already giving to students, we might soon be asked to add a data-entry step of filling in boxes in a centralized database for all the student learning outcomes. This is worrisome to those of us already struggling under the weight of all that commenting and essay grading.”
    • Nils Peterson
       
      It's either double work, or it's not understood that the grading and the assessment can be the same activity. I suspect the former -- grading is being done with different metrics.
    • Joshua Yeidel
       
      I am in the unusual position of seeing many papers _after_ they have been graded by a wide variety of teachers. Many of these contain little "assessment feedback" -- many teachers focus on "correcting" the papers and finding some letter or number to assign as a value.
  • “This is where we see many institutions struggling,” Galvin says. “Faculty simply don’t have the time for a deeper involvement in the mechanics of assessment.” Many have never seen a rubric or worked with one, “so generating accurate, objective data for analysis is a challenge.”  
    • Nils Peterson
       
      Rather than faculty using the community to help with assessment, they are outsourcing to a paid assessor -- the result of undertaking this thinking while remaining at the institution-centric end of the spectrum we developed.
  • I asked about faculty pushback. “Not so much,” Galvin says, “not after faculty understand that the process is not intended to evaluate their work.”
    • Nils Peterson
       
      red flag
  • the annual reports required by this process were producing “heaps of paper” while failing to track trends and developments over time. “It’s like our departments were starting anew every year,” Chaplot says. “We wanted to find a way to house the data that gave us access to what was done in the past,” which meant moving from discrete paper reports to an electronic database.
    • Joshua Yeidel
       
      It's not clear whether the "database" is housing measurements, narratives and reflections, or all of the above.
  • Can eLumen represent student learning in language? No, but it can quantify the number of boxes checked against number of boxes not checked.
  • developing a national repository of resources, rubrics, outcomes statements, and the like that can be reviewed and downloaded by users
    • Nils Peterson
       
      In building our repository we could well open-source these tools; no need to lock them up.
  • “These solutions cement the idea that assessment is an administrative rather than an educational enterprise, focused largely on accountability. They increasingly remove assessment decision making from the everyday rhythm of teaching and learning and the realm of the faculty.
    • Nils Peterson
       
      Over-the-wall assessment; see the Transformative Assessment rubric for more detail.
Joshua Yeidel

A Professor at Louisiana State Is Flunked Because of Her Grades - Teaching - The Chroni... - 1 views

  •  
    A fascinating look into the tangled web of misconceptions and cross-purposes in testing and grading, especially in an introductory non-major science course.
Gary Brown

No Tests, No Grades = More Graduates? - 0 views

  • At an alternative high school in Newark, students will make presentations instead of taking tests and receive written progress reports instead of grades. They will use few textbooks and divide their school weeks between the classroom and an internship,
  •  
    inch by inch new models make the news and subsequently make progress
Nils Peterson

Crowdsourcing Authority in the Classroom | DMLcentral - 0 views

  • I’m fascinated that the blogosphere was so annoyed with me for wanting to teach responsible judgment practices as part of my pedagogy. I think it is because grading, in a curious way, exemplifies our deepest convictions about excellence and authority, and specifically about the right of those with authority to define what constitutes excellence.  If we “crowdsource grading,” we are suggesting that those without authority can also determine excellence.  That is what happens in the non-refereed world of the internet, that’s what digital thinking is, and it is quite revolutionary. 
    • Nils Peterson
       
      This is Cathy Davidson in a new blog post about crowdsourcing authority, responding to the critics of her earlier crowdsourcing grading.
Gary Brown

What's Wrong With the American University System - Culture - The Atlantic - 3 views

  • But when the young superstar sat down with the department chair, he seemed to have only one goal: to land a tenure-track position that involved as many sabbaticals and as little teaching as possible.
  • Hacker and his coauthor, New York Times writer Claudia Dreifus, use this cautionary tale to launch their new book, a fierce critique of modern academia called Higher Education? "The question mark in our title," they write, "is the key to this book." To their minds, little of what takes place on college campuses today can be considered either "higher" or "education."
  • They blame a system that favors research over teaching and vocational training over liberal arts.
  • Tenure, they argue, does anything but protect intellectual freedom
  • Schools get status by bringing on professors who are star researchers, star scholars. That's all we really know about Caltech or MIT or Stanford. We don't really know about the quality of undergraduate teaching at any of these places. And it's the students who suffer.
  • Claudia and I were up at Harvard talking to students, and they said they get nothing from their classes, but that doesn't matter. They're smart already—they can breeze through college. The point is that they're going to be Harvard people when they come out.
  • So tenure is, in fact, the enemy of spontaneity, the enemy of intellectual freedom.
  • Good teaching can't be quantified at the college level.
  • For instance, Evergreen College, a sweet little state school in Olympia, Washington. We spent three days there and it was fantastic. They don't give grades, and they don't have academic departments. There are no faculty rankings. Almost all the classes we saw were taught by two professors—say, one from philosophy and one from psychology, teaching jointly on Henry and William James. Even though they don't give grades, the professors write out long evaluations for students. And the students have no problem getting into graduate schools.
  • I like Missouri Western State. It's a third-tier university, but the faculty realize they're going to stay there, they're not going to get hired away by other colleges, so they pitch in and take teaching seriously. At a school like that, you have a decent chance of finding a mentor who will write you a strong recommendation, better than you would at Harvard.
  • We believe the current criteria for admissions—particularly the SAT—are just so out of whack. It's like No Child Left Behind. It really is. It's one of the biggest crimes that's ever been perpetrated.
  • Professor X. He argued that some students just aren't ready for college. What's your view on that? Our view is that the primary obligation belongs to the teacher. Good teaching is not just imparting knowledge, like pouring milk into a jug. It's the job of the teacher to get students interested and turned on no matter what the subject is. Every student can be turned on if teachers really engage in this way. We saw it at Evergreen and other places that have this emphasis.
  • This is the hand I was dealt this semester. This is my job." Some people say to me, "Your students at Queens, are they any good?" I say, "I make them good." Every student is capable of college. I know some people have had difficult high school educations. But if you have good teachers who really care, it's remarkable how you can make up the difference.
  •  
    In case you haven't already seen this. While I don't deny that higher education needs attention, I personally wish there'd be far more attention paid to lower education and regressive education (my own term for redressing and improving the education of all U.S. citizens). We are in the process of destroying our country and our world. Education is at the very heart of any solution.
  •  
    More of the discussion in the news--the Atlantic
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      Assumes a test as the form of assessment, rather than something more open-ended.
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our Where are you on the spectrum, Assessment is locked vs open
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less define community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • improved models of instruction.
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    So here it is: by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students' learning. And given the imprecision of the value-added scores, just by chance some teachers will be categorized as ineffective or minimally effective two years in a row. The system is rigged to label teachers as ineffective or minimally effective as a precursor to firing them.
  •  
    How assessment of value-added actually works in one setting: the Washington, D.C. public schools. This article actually works the numbers to show that the system is set up to put teachers in the firing zone. Note the tyranny of numerical ratings (some of them subjective) converted into meanings like "minimally effective".
Gary Brown

Duncan: Rewarding Teachers for Master's Degrees Is Waste of Money - The Ticker - The Ch... - 1 views

  • Arne Duncan, said state and local governments should rethink their policies of giving pay raises to teachers who have master’s degrees because evidence suggests that the degree alone does not improve student achievement.
  •  
    Distinguishes between outcome and impact and/or illustrates the problems of grades/degrees as a credible outcome.