
CTLT and Friends: Group items tagged rubrics


Corinna Lo

IJ-SoTL - A Method for Collaboratively Developing and Validating a Rubric - 1 views

  •  
    "Assessing student learning outcomes relative to a valid and reliable standard that is academically-sound and employer-relevant presents a challenge to the scholarship of teaching and learning. In this paper, readers are guided through a method for collaboratively developing and validating a rubric that integrates baseline data collected from academics and professionals. The method addresses two additional goals: (1) to formulate and test a rubric as a teaching and learning protocol for a multi-section course taught by various instructors; and (2) to assure that students' learning outcomes are consistently assessed against the rubric regardless of teacher or section. Steps in the process include formulating the rubric, collecting data, and sequentially analyzing the techniques used to validate the rubric and to insure precision in grading papers in multiple sections of a course."
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their e-portfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: Of what use is the VSA?
Joshua Yeidel

THINK Global School Blog - 3 views

  •  
    "A recent experiment we did asked the question: What happens if you combine lessons from web 2.0 and social media to the process of developing a rubric? The result? We've built what we call "Social Rubrics". Essentially this tool facilitates the process of building a rubric for teachers (and students) in a much more open and collaborative way." A plug-in for Elgg.
Gary Brown

Ethics? Let's Outsource Them! - Brainstorm - The Chronicle of Higher Education - 4 views

  • Many students are already buying their papers from term-paper factories located in India and other third world countries. Now we are sending those papers back there to be graded. I wonder how many people are both writing and grading student work, and whether, serendipitously, any of those people ever get the chance to grade their own writing.”
  • The great learning loop of outcomes assessment is neatly “closed,” with education now a perfect, completed circle of meaningless words.
  • With outsourced grading, it’s clearer than ever that the world of rubrics behaves like that wicked southern plant called kudzu, smothering everything it touches. Certainly teaching and learning are being covered over by rubrics, which are evolving into a sort of quasi-religious educational theory controlled by priests whose heads are so stuck in playing with statistics that they forget to try to look openly at what makes students turn into real, viable, educated adults and what makes great, or even good, teachers.
  • Writing an essay is an art, not a science. As such, people, not instruments, must take its measure, and judge it. Students have the right to know who is doing the measuring. Instead of going for outsourced grading, Ms. Whisenant should cause a ruckus over the size of her course with the administration at Houston. After all, if she can’t take an ethical stand, how can she dare to teach ethics?
  • "People need to get past thinking that grading must be done by the people who are teaching.” Sorry, Mr. Rajam, but what you should be saying is this: Teachers, including those who teach large classes and require teaching assistants and readers, need to get past thinking that they can get around grading.
  •  
    the outsourcing loop becomes a diatribe against rubrics...
  •  
    It's hard to see how either outsourced assessment or harvested assessment can be accomplished convincingly without rubrics. How else can the standards of the teacher be enacted by the grader? From there we are driven to consider how, in the absence of a rubric, the standards of the teacher can be enacted by the student. Is it "ethical" to use the Potter Stewart standard: "I'll know it when I see it"?
  •  
    Yes, who is the "priest" in the preceding rendering--one who shares principles of quality (rubrics), or one who divines a grade a proclaims who is a "real, viable, educated adult"?
Lorena O'English

Connecting Assessment | Remote Access - 3 views

  •  
    This is directed to K-12, but of interest - he discusses his rubric for social media interaction (note the link to a rubric for blogging in the first paragraph as well).
Joshua Yeidel

E. Jane Davidson on Evaluative Rubrics | AEA365 - 1 views

  •  
    Some context for rubrics like ours
Theron DesRosier

Virtual-TA - 2 views

  • We also developed a technology platform that allows our TAs to electronically insert detailed, actionable feedback directly into student assignments
  • Your instructors give us the schedule of assignments, when student assignments are due, when we might expect to receive them electronically, when the scored assignments will be returned, the learning outcomes on which to score the assignments, the rubrics to be used and the weights to be applied to different learning outcomes. We can use your rubrics to score assignments or design rubrics for sign-off by your faculty members.
  • review and embed feedback using color-coded pushpins (each color corresponds to a specific learning outcome) directly onto the electronic assignments. Color-coded pushpins provide a powerful visual diagnostic.
  • We do not have any contact with your students. Instructors retain full control of the process, from designing the assignments in the first place, to specifying learning outcomes and attaching weights to each outcome. Instructors also review the work of our TAs through a step called the Interim Check, which happens after 10% of the assignments have been completed. Faculty provide feedback, offer any further instructions and eventually sign-off on the work done, before our TAs continue with the remainder of the assignments
  • Finally, upon the request of the instructor, the weights he/she specified for the learning outcomes are applied to the rubric-based scores to generate a composite score for each student assignment
  • As an added bonus, our Virtual-TAs provide a detailed, summative report for the instructor on the overall class performance on the given assignment, which includes a look at how the class fared on each outcome, where the students did well, where they stumbled and what concepts, if any, need reinforcing in class the following week.
  • We can also, upon request, generate reports by Student Learning Outcomes (SLOs). This report can be used by the instructor to immediately address gaps in learning at the individual or classroom level.
  • Think of this as a micro-closing-of-the-loop that happens each week.  Contrast this with the broader, closing-the-loop that accompanies program-level assessment of learning, which might happen at the end of a whole academic year or later!
  •  
    I went to Virtual-TA and highlighted their language describing how it works. The weighted composite scoring they describe is sketched below.
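
The composite scoring in the excerpts above is, in essence, a weighted average of per-outcome rubric scores plus a per-outcome roll-up for the class report. A minimal sketch under that reading; the outcome names, weights, and scores are hypothetical, not taken from Virtual-TA's materials:

```python
# Minimal sketch of the weighted composite scoring described above: each
# learning outcome gets a rubric score, the instructor supplies weights,
# and the weighted scores roll up into a composite per assignment plus a
# class-level summary by outcome. All names and numbers are hypothetical.

def composite_score(rubric_scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted average of per-outcome rubric scores."""
    total_weight = sum(weights.values())
    return sum(rubric_scores[o] * w for o, w in weights.items()) / total_weight

weights = {"critical thinking": 0.4, "organization": 0.3, "grammar": 0.3}

one_assignment = {"critical thinking": 3, "organization": 4, "grammar": 2}
print(f"Composite: {composite_score(one_assignment, weights):.2f}")  # 3.00 on a 4-point scale

# Class-level report by outcome (the per-SLO summary the excerpts mention).
class_scores = [
    {"critical thinking": 3, "organization": 4, "grammar": 2},
    {"critical thinking": 2, "organization": 3, "grammar": 3},
    {"critical thinking": 4, "organization": 4, "grammar": 3},
]
for outcome in weights:
    mean = sum(s[outcome] for s in class_scores) / len(class_scores)
    print(f"{outcome}: class mean {mean:.2f}")
```

Whether Virtual-TA normalizes weights exactly this way is not stated; the sketch simply illustrates the arithmetic the description implies.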
Nils Peterson

Change Magazine - The New Guys in Assessment Town - 0 views

  • if one of the institution’s general education goals is critical thinking, the system makes it possible to call up all the courses and programs that assess student performance on that outcome.
  • bringing together student learning outcomes data at the level of the institution, program, course, and throughout student support services so that “the data flows between and among these levels”
  • Like its competitors, eLumen maps outcomes vertically across courses and programs, but its distinctiveness lies in its capacity to capture what goes on in the classroom. Student names are entered into the system, and faculty use a rubric-like template to record assessment results for every student on every goal. The result is a running record for each student available only to the course instructor (and in some cases to the students themselves, who can go to the system to get feedback on recent assessments).
    • Nils Peterson
       
      Sounds like a harvesting gradebook: assess student work and roll it up.
    • Joshua Yeidel
       
      This system has some potential for formative use at the per-student level.
  • “I’m a little wary.  It seems as if, in addition to the assessment feedback we are already giving to students, we might soon be asked to add a data-entry step of filling in boxes in a centralized database for all the student learning outcomes. This is worrisome to those of us already struggling under the weight of all that commenting and essay grading.”
    • Nils Peterson
       
      It's either double work, or a failure to understand that the grading and the assessment can be the same activity. I suspect the former -- the grading is being done with different metrics.
    • Joshua Yeidel
       
      I am in the unusual position of seeing many papers _after_ they have been graded by a wide variety of teachers. Many of these contain little "assessment feedback" -- many teachers focus on "correcting" the papers and finding some letter or number to assign as a value.
  • “This is where we see many institutions struggling,” Galvin says. “Faculty simply don’t have the time for a deeper involvement in the mechanics of assessment.” Many have never seen a rubric or worked with one, “so generating accurate, objective data for analysis is a challenge.”  
    • Nils Peterson
       
      Rather than faculty using the community to help with assessment, they are outsourcing to a paid assessor -- this is the result of undertaking this thinking while also remaining in the institution-centric end of the spectrum we developed
  • I asked about faculty pushback. “Not so much,” Galvin says, “not after faculty understand that the process is not intended to evaluate their work.”
    • Nils Peterson
       
      red flag
  • the annual reports required by this process were producing “heaps of paper” while failing to track trends and developments over time. “It’s like our departments were starting anew every year,” Chaplot says. “We wanted to find a way to house the data that gave us access to what was done in the past,” which meant moving from discrete paper reports to an electronic database.
    • Joshua Yeidel
       
      It's not clear whether the "database" is housing measurements, narratives and reflections, or all of the above.
  • Can eLumen represent student learning in language? No, but it can quantify the number of boxes checked against number of boxes not checked.”
  • developing a national repository of resources, rubrics, outcomes statements, and the like that can be reviewed and downloaded by users
    • Nils Peterson
       
      in building our repository we could well open-source these tools, no need to lock them up
  • “These solutions cement the idea that assessment is an administrative rather than an educational enterprise, focused largely on accountability. They increasingly remove assessment decision making from the everyday rhythm of teaching and learning and the realm of the faculty.
    • Nils Peterson
       
      Over-the-wall assessment; see the Transformative Assessment rubric for more detail.
Gary Brown

IJ-SoTL: Current Issue: Volume 3, Number 2 - July 2009 - 0 views

  • A Method for Collaboratively Developing and Validating a Rubric Sandra Allen (Columbia College Chicago) & John Knight (University of Tennessee at Martin)
  •  
    at last a decent article on rubric development--a good place to jump off.
Gary Brown

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 1 views

  • So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities.
  • This method has revolutionized the way I grade. No longer do I have to keep track of how many points are deducted from which type of misstep on what problem for how many students. In the past, I often would get through several tests before I realized that I wasn’t being consistent with the deduction of points, and then I’d have to go through and re-grade all the previous tests. Additionally, the rubric method encourages students to refer to a solution, which I post after the test is administered, and they are motivated to meet with me in person to discuss why they got a 2 versus a 3 on a given problem, for example.
  • This opens up the opportunity to talk with them personally about their problem-solving skills and how they can better them. The emphasis is moved away from point-by-point deductions and is redirected to a more holistic view of problem solving.
  •  
    In the heart of the home of the concept inventory--Physics
Corinna Lo

Scoring rubric development: validity and reliability. Moskal, Barbara M. & Jon A. Leydens - 1 views

  •  
    "One purpose of this article is to provide clear definitions of the terms "validity" and "reliability" and illustrate these definitions through examples. A second purpose is to clarify how these issues may be addressed in the development of scoring rubrics."
Nils Peterson

Through the Open Door: Open Courses as Research, Learning, and Engagement (EDUCAUSE Rev... - 0 views

  • openness in practice requires little additional investment, since it essentially concerns transparency of already planned course activities on the part of the educator.
    • Nils Peterson
       
      Search YouTube for "master class" Theron and I are looking at violin examples. The class is happening with student, master, and observers. What is added is video recording and posting to YouTube. YouTube provides additional community via comments and linked videos.
  • This second group of learners — those who wanted to participate but weren't interested in course credit — numbered over 2,300. The addition of these learners significantly enhanced the course experience, since additional conversations and readings extended the contributions of the instructors.
    • Nils Peterson
       
      These additional resources might also include peer reviews using a course rubric, or diverse feedback on the rubric itself.
  • Enough structure is provided by the course that if a learner is interested in the topic, he or she can build sufficient language and expertise to participate peripherally or directly.
  • Although courses are under pressure in the "unbundling" or fragmentation of information in general, the learning process requires coherence in content and conversations. Learners need some sense of what they are choosing to do, a sense of eventedness. Even in traditional courses, learners must engage in a process of forming coherent views of a topic.
    • Nils Peterson
       
      An assumption here is that the learner needs kick-starting. It's an assumption that the learner is not a Margo Tamez making an Urgent Call for Help, where the learner owns the problem. Is it a way of inviting a community to a party?
  • The community-as-curriculum model inverts the position of curriculum: rather than being a prerequisite for a course, curriculum becomes an output of a course.
  • They are now able, sometimes through the open access noted above and sometimes through access to other materials and guidance, to engage in their own learning outside of a classroom structure.
    • Nils Peterson
       
      A key point is the creation of open learners. Impediments to open learners need to be understood and overcome. Identity management is likely to be an important skill here.
  • Educators continue to play an important role in facilitating interaction, sharing information and resources, challenging assertions, and contributing to learners' growth of knowledge.
Gary Brown

Details | LinkedIn - 0 views

  • Although different members of the academic hierarchy take on different roles regarding student learning, student learning is everyone’s concern in an academic setting. As I specified in my article comments, universities would do well to use their academic support units, which often have evaluation teams (or a designated evaluator) to assist in providing boards the information they need for decision making. Perhaps boards are not aware of those serving in evaluation roles at the university or how those staff members can assist boards in their endeavors.
  • Gary Brown • We have been using the Internet to post program assessment plans and reports (the programs that support this initiative at least), our criteria (rubric) for reviewing them, and then inviting external stakeholders to join in the review process.
Gary Brown

Can We Promote Experimentation and Innovation in Learning as well as Accountability? In... - 0 views

  •  
    The VALUE project comes into the middle of this tension, as it proposes to create frameworks (or metarubrics) that provide flexible criteria for making valid judgments about student work that might result from a wide range of assessments and learning opportunities, over time. In this interview, Terrel Rhodes, Director of the VALUE project and Vice President of the Association of American Colleges and Universities (AAC&U), describes the assumptions and goals behind the Project. He especially addresses how electronic portfolios serve those goals as the locus of evaluation by educators, providing frameworks for judgments tailored to local contexts but calibrated to "Essential Learning Outcomes," with broad significance for student achievement. The aims and ambitions of the VALUE Project have the potential to move us further down the road toward a more systematic engagement with the expansion of learning. -Randy Bass
  •  
    This paragraph is the one with the most interesting set of assumptions. There are implications about "validity" Bass notes earlier and the role of numbers as "less robust" rather than, say, an interesting and important ingredient in that conversation. Mostly though I see the designation that the rubrics are "too broad to be useful" as a flag that these are not really rubrics, but, well, flags...
Joshua Yeidel

Blogging Rubric - 0 views

  •  
    it's not clear where this comes from, but it's interesting
Gary Brown

Outsourced Grading, With Supporters and Critics, Comes to College - Teaching - The Chro... - 3 views

shared by Gary Brown on 06 Apr 10
  • Lori Whisenant knows that one way to improve the writing skills of undergraduates is to make them write more. But as each student in her course in business law and ethics at the University of Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it became clear to her that what would really help them was consistent, detailed feedback.
  • She outsourced assignment grading to a company whose employees are mostly in Asia.
  • The graders working for EduMetry, based in a Virginia suburb of Washington, are concentrated in India, Singapore, and Malaysia, along with some in the United States and elsewhere. They do their work online and communicate with professors via e-mail.
  • The company argues that professors freed from grading papers can spend more time teaching and doing research.
  • "This is what they do for a living," says Ms. Whisenant. "We're working with professionals." 
  • Assessors are trained in the use of rubrics, or systematic guidelines for evaluating student work, and before they are hired are given sample student assignments to see "how they perform on those," says Ravindra Singh Bangari, EduMetry's vice president of assessment services.
  • Professors give final grades to assignments, but the assessors score the papers based on the elements in the rubric and "help students understand where their strengths and weaknesses are," says Tara Sherman, vice president of client services at EduMetry. "Then the professors can give the students the help they need based on the feedback."
  • The assessors use technology that allows them to embed comments in each document; professors can review the results (and edit them if they choose) before passing assignments back to students.
  • But West Hills' investment, which it wouldn't disclose, has paid off in an unexpected way. The feedback from Virtual-TA seems to make the difference between a student's remaining in an online course and dropping out.
  • Because Virtual-TA provides detailed comments about grammar, organization, and other writing errors in the papers, students have a framework for improvement that some instructors may not be able to provide, she says.
  • "People need to get past thinking that grading must be done by the people who are teaching," says Mr. Rajam, who is director of assurance of learning at George Washington University's School of Business. "Sometimes people get so caught up in the mousetrap that they forget about the mouse."
Joshua Yeidel

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 0 views

  •  
    "So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities. "
  •  
    A focus on improving the grading experience, rather than the learning experience, but still a big step forward for (some) hard scientists.
Gary Brown

Views: The White Noise of Accountability - Inside Higher Ed - 2 views

  • We don’t really know what we are saying
  • “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, held a firm tongue-in-cheek with the core phrase.
  • Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
  • when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students.
  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal? Are there rewards and/or sanctions inherent in the relationship? How continuous is the relationship?
  • In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves.
  • Obligation becomes self-reflexive.
  • There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
  • But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
  • in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information.
  • Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
  • Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education.
  • If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
  • There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
  • Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.”
  • U.S. News & World Report’s rankings
  • The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
  • But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
  • Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.” Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.
  • Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
  •  
    Perhaps the most important piece I've read recently. Yes must be our answer to Adelman's last challenge: It is time for us to disseminate what and why we do what we do.
Nils Peterson

CITE Journal -- Volume 2, Issue 4 - 0 views

  • The ability to aggregate data for assessment is counted as a plus for CS and a minus for GT
    • Nils Peterson
       
      This analysis precedes the Harvesting concept.
  • The map includes the portfolio's ability to aid learners in planning, setting goals, and navigating the artifacts learners create and collect.
    • Nils Peterson
       
      Recently, when I have been thinking about program assessment, I've been thinking about how students might assess courses (before adding the course to their transcript, aka portfolio) in terms of the student's learning needs for developing proficiency in the 6 WSU goals. Students might also do a course evaluation relative to the 6 goals to give instructors and fellow students guideposts. So, the notion here, portfolio as map, would be that the portfolio has a way for the learner to track/map progress toward a goal. Perhaps a series of radar charts associated with a series of artifacts. Learner reflection would lead to a conclusion about what aspect of the rubric needed more practice in the creation of the next artifacts going into the portfolio.
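
The radar-chart idea in the note above is easy to picture with a small plot: one axis per program goal, one polygon per artifact, so the learner can see which dimensions are not growing. A minimal matplotlib sketch; the goal labels and scores are invented placeholders, not WSU's actual goal language or data:

```python
# Minimal sketch: radar chart of rubric scores for two portfolio artifacts,
# one axis per program learning goal. Goal names and scores are hypothetical.
import math
import matplotlib.pyplot as plt

goals = ["critical thinking", "communication", "integration",
         "social responsibility", "quantitative reasoning", "information literacy"]
artifacts = {
    "Artifact 1 (early)": [2, 3, 1, 2, 2, 3],
    "Artifact 2 (later)": [3, 3, 2, 3, 2, 4],
}

# One angle per goal axis; repeat the first angle to close each polygon.
angles = [2 * math.pi * i / len(goals) for i in range(len(goals))]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, scores in artifacts.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(goals, fontsize=8)
ax.set_ylim(0, 4)  # 4-point rubric scale
ax.set_title("Progress toward program goals across artifacts")
ax.legend(loc="lower right", fontsize=8)
plt.savefig("portfolio_radar.png", bbox_inches="tight")
```

Overlaying an early and a later artifact turns the chart into the kind of reflection prompt the note describes: the axes that stay flat suggest what the next artifact should practice.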