Home/ CTLT and Friends/ Group items tagged assessments

Joshua Yeidel

Key Steps in Outcome Management - 0 views

  •  
    First in a series from the Urban Institute on outcome management for non-profits, for an audience of non-evaluation-savvy leadership and staff. Lots to steal here if we ever create an Assessment Handbook for WSU.
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
       
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
       
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • ...5 more annotations...
  • improved models of instruction.
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
       
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
       
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
       
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
  •  
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
Joshua Yeidel

Jim Dudley on Letting Go of Rigid Adherence to What Evaluation Should Look Like | AEA365 - 1 views

  •  
    "Recently, in working with a board of directors of a grassroots organization, I was reminded of how important it is to "let go" of rigid adherence to typologies and other traditional notions of what an evaluation should look like. For example, I completed an evaluation that incorporated elements of all of the stages of program development - a needs assessment (e.g., how much do board members know about their programs and budget), a process evaluation (e.g., how well do the board members communicate with each other when they meet), and an outcome evaluation (e.g., how effective is their marketing plan for recruiting children and families for its programs)."
  •  
    Needs evaluation, process evaluation, outcomes evaluation -- all useful for improvement.
Gary Brown

Home | AALHE - 2 views

shared by Gary Brown on 22 Oct 10
  • The Association for Assessment of Learning in Higher Education, Inc. (AALHE) is an organization of practitioners interested in using effective assessment practice to document and improve student learning.
  • it is designed to be a resource by all who are interested in the improvement of learning,
  •  
    Our membership begins November 1
Gary Brown

71 Presidents Pledge to Improve Their Colleges' Teaching and Learning - Faculty - The C... - 0 views

  • In a venture known as the Presidents' Alliance for Excellence in Student Learning and Accountability, they have promised to take specific steps to gather more evidence about student learning, to use that evidence to improve instruction, and to give the public more information about the quality of learning on their campuses.
  • The 71 pledges, officially announced on Friday, are essentially a dare to accreditors, parents, and the news media: Come visit in two years, and if we haven't done these things, you can zing us.
  • deepen an ethic of professional stewardship and self-regulation among college leaders
  • ...4 more annotations...
  • Beginning in 2011, all first-year students at Westminster will be required to create electronic portfolios that reflect their progress in terms of five campuswide learning goals. And the college will expand the number of seniors who take the Collegiate Learning Assessment, so that the test can be used to help measure the strength of each academic major.
  • "The crucial thing is that all of our learning assessments have been designed and driven by the faculty," says Pamela G. Menke, Miami Dade's associate provost for academic affairs. "The way transformation of learning truly occurs is when faculty members ask the questions, and when they're willing to use what they've found out to make change.
  • Other assessment models might point some things out, but they won't be useful if faculty members don't believe in them."
  • "In the long term, as more people join, I hope that the Web site will provide a resource for the kinds of innovations that seem to be successful," he says. "That process might be difficult. Teaching is an art, not a science. But there is still probably a lot that we can learn from each other."
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

  •  
    So here it is: by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students' learning. And given the imprecision of the value-added scores, just by chance some teachers will be categorized as ineffective or minimally effective two years in a row. The system is rigged to label teachers as ineffective or minimally effective as a precursor to firing them.
  •  
    How assessment of value-added actually works in one setting: the Washington, D.C. public schools. This article actually works the numbers to show that the system is set up to put teachers in the firing zone. Note the tyranny of numerical ratings (some of them subjective) converted into meanings like "minimally effective".
Joshua Yeidel

Evaluating Teachers: The Important Role of Value-Added [pdf] - 1 views

  •  
    "We conclude that value-added data has an important role to play in teacher evaluation systems, but that there is much to be learned about how best to use value-added information in human resource decisions." No mention of the role of assessment in improvement.
S Spaeth

Google Gadget Ventures - 1 views

  •  
    Google Gadget Ventures is a new Google pilot program dedicated to helping developers create richer, more useful gadgets. Inspired by the success of iGoogle, which has been driven by the creation by 3rd-party developers of a broad range of gadgets, Gadget Ventures provides two types of funding:

    • Grants of $5,000 to those who've built gadgets we'd like to see developed further. You're eligible to apply for a grant if you've developed a gadget that's in our gadgets directory and gets at least 250,000 weekly page views. To apply, you must submit a one-page proposal detailing how you'd use the grant to improve your gadget.

    • Seed investments of $100,000 to developers who'd like to build a business around the gadgets platform. Only Google Gadget Venture grant recipients are eligible for this type of funding. Submitting a business plan detailing how you plan to build a viable business around the gadgets platform is a required part of the seed investment application process.

    It's our hope that Google Gadget Ventures will give developers the opportunity to create a new generation of gadgets to benefit users.

    Consider this form of authentic assessment and the metrics they apply.
Gary Brown

Law Schools Resist Proposal to Assess Them Based on What Students Learn - Curriculum - ... - 1 views

  • Law schools would be required to identify key skills and competencies and develop ways to test how well their graduates are learning them under controversial revisions to accreditation standards being proposed by the American Bar Association.
  • Several law deans said they have enough to worry about with budget cuts, a tough job market for their graduates, and the soaring cost of legal education without adding a potentially expensive assessment overhaul.
  • The proposed standards, which are still being developed, call on law schools to define learning outcomes that are consistent with their missions and to offer curricula that will achieve those outcomes. Different versions being considered offer varying degrees of specificity about what those skills should include.
  • ...2 more annotations...
  • "It is worth pausing to ask how the proponents of outcome measures can be so very confident that the actual performance of tasks deemed essential for the practice of law can be identified, measured, and evaluated," said Robert C. Post, dean of Yale Law School.
  • Phillip A. Bradley, senior vice president and general counsel for Duane Reade, a large drugstore chain, likened law schools to car companies that are "manufacturing something that nobody wants." Mr. Bradley said many law firms are developing core competencies they expect of their lawyers, but many law schools aren't delivering graduates who come close to meeting them.
  •  
    The homeopathic fallacy again, and as goes law school, so goes law....
Kimberly Green

Assessment Gap (From INSIDE HIGHER ED) - 0 views

  •  
    At the Middle States meeting, the Temple officials offered strategies for winning faculty involvement and follow-through: recognize differences among departments; publicize success stories; start with the basics; reward -- don't punish -- flaws that are revealed. I think it would be worth talking about these approaches and the extent to which OAI is / isn't or should / shouldn't build them in.
Gary Brown

Accrediting Agencies Confront New Challenges - Letters to the Editor - The Chronicle of... - 0 views

  • The Chronicle, December 17). In an era of global expansion in higher education, accreditation agencies are increasingly confronted with myriad challenges surrounding various forms of distance education (whether virtual, so-called branch campuses, or study abroad) and cross-institutional certification.
  • the American Academy for Liberal Education is particularly well placed to view this changing pedagogical and institutional landscape, both domestically and worldwide.
  • AALE goes several steps further in evaluating whether institutions meet an extensive set of pedagogical standards specifically related to liberal education—standards of effective reasoning, for instance, and broad and deep learning. This level of assessment requires extensive classroom visitations, conversations with students and faculty members, and the time to assess the climate of learning at every institution we visit.
  • ...1 more annotation...
  • Innovation and quality in higher education can only join hands when institutions aspire—and are held to—independent, third-party standards of assessment.
  •  
    a small but clear emphasis on independent review
Jayme Jacobson

Evaluating the effect of peer feedback on the quality of online discourse - 0 views

  • Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.
  •  
    Abstract: This study explores the effect on discussion quality of adding a feedback mechanism that presents users with an aggregate peer rating of the usefulness of the participant's contributions in online, asynchronous discussion. Participants in the study groups were able to specify the degree to which they thought any posted comment was useful to the discussion. Individuals were regularly presented with feedback (aggregated and anonymous) summarizing peers' assessment of the usefulness of their contributions, along with a summary of how the individuals rated their peers. Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.

    This is kind of a show-stopper. It's just one study, but when you look at the results there appears to be no effect whatsoever from peers giving feedback about the usefulness of discussion posts, nor any perceived improvement in the quality of the discussions as evaluated by faculty. It looks like we'll need to begin looking carefully at just what kinds of feedback will really make a difference. This follows up on Corinna's earlier post http://blogs.hbr.org/cs/2010/03/twitters_potential_as_microfee.html about how short, immediate feedback is more effective than lengthier feedback, which can actually hinder performance. The trick will be to figure out just what kinds of feedback will actually work in embedded situations. It's interesting that an assessment of utility wasn't useful...?
Gary Brown

Ethics? Let's Outsource Them! - Brainstorm - The Chronicle of Higher Education - 4 views

  • Many students are already buying their papers from term-paper factories located in India and other third world countries. Now we are sending those papers back there to be graded. I wonder how many people are both writing and grading student work, and whether, serendipitously, any of those people ever get the chance to grade their own writing.”
  • The great learning loop of outcomes assessment is neatly “closed,” with education now a perfect, completed circle of meaningless words.
  • With outsourced grading, it’s clearer than ever that the world of rubrics behaves like that wicked southern plant called kudzu, smothering everything it touches. Certainly teaching and learning are being covered over by rubrics, which are evolving into a sort of quasi-religious educational theory controlled by priests whose heads are so stuck in playing with statistics that they forget to try to look openly at what makes students turn into real, viable, educated adults and what makes great, or even good, teachers.
  • ...2 more annotations...
  • Writing an essay is an art, not a science. As such, people, not instruments, must take its measure, and judge it. Students have the right to know who is doing the measuring. Instead of going for outsourced grading, Ms. Whisenant should cause a ruckus over the size of her course with the administration at Houston. After all, if she can’t take an ethical stand, how can she dare to teach ethics?
  • "People need to get past thinking that grading must be done by the people who are teaching.” Sorry, Mr. Rajam, but what you should be saying is this: Teachers, including those who teach large classes and require teaching assistants and readers, need to get past thinking that they can get around grading.
  •  
    the outsourcing loop becomes a diatribe against rubrics...
  •  
    It's hard to see how either outsourced assessment or harvested assessment can be accomplished convincingly without rubrics. How else can the standards of the teacher be enacted by the grader? From there we are driven to consider how, in the absence of a rubric, the standards of the teacher can be enacted by the student. Is it "ethical" to use the Potter Stewart standard: "I'll know it when I see it"?
  •  
    Yes, who is the "priest" in the preceding rendering--one who shares principles of quality (rubrics), or one who divines a grade and proclaims who is a "real, viable, educated adult"?
Gary Brown

More thinking about the alignment project « The Weblog of (a) David Jones - 0 views

  • The dominant teaching experience for academics is teaching an existing course, generally one the academic has taught previously. In such a setting, academics spend most of their time fine-tuning a course or making minor modifications to material or content (Stark, 2000)
  • many academic staff continue to employ inappropriate, teacher-centered, content focused strategies”. If the systems and processes of university teaching and learning practice do not encourage and enable everyday consideration of alignment, is it surprising that many academics don’t consider alignment?
  • student learning outcomes are significantly higher when there are strong links between those learning outcomes, assessment tasks, and instructional activities and materials.
  • ...11 more annotations...
  • Levander and Mikkola (2009) describe the full complexity of managing alignment at the degree level which makes it difficult for the individual teacher and the program coordinator to keep connections between courses in mind.
  • Make explicit the quality model.
  • Build in support for quality enhancement.
  • Institute a process for quality feasibility.
  • Cohen (1987) argues that limitations in learning are not mainly caused by ineffective teaching, but are instead mostly the result of a misalignment between what teachers teach, what they intend to teach, and what they assess as having been taught.
  • Raban (2007) observes that the quality management systems of most universities employ procedures that are retrospective and weakly integrated with long term strategic planning. He continues to argue that the conventional quality management systems used by higher education are self-defeating as they undermine the commitment and motivation of academic staff through an apparent lack of trust, and divert resources away from the core activities of teaching and research (Raban, 2007, p. 78).
  • Ensure participation of formal institutional leadership and integration with institutional priorities.
  • Action research perspective, flexible responsive.
  • Having a scholarly, not bureaucratic focus.
  • Modifying an institutional information system.
  • A fundamental enabler of this project is the presence of an information system that is embedded into the everyday practice of teaching and learning (for both students and staff) that encourages and enables consideration of alignment.
  •  
    a long blog, but underlying principles align with the Guide to Effective Assessment on many levels.
Joshua Yeidel

The Answer Sheet - A principal on standardized vs. teacher-written tests - 0 views

  •  
    High school principal George Wood eloquently contrasts standardized NCLB-style testing with his school's performance assessments.
Joshua Yeidel

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 0 views

  •  
    "So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities. "
  •  
    A focus on improving the grading experience, rather than the learning experience, but still a big step forward for (some) hard scientists.
Nils Peterson

Higher Ed/: TLT's Harvesting Feedback Project - 0 views

  • It's a fascinating project, and to me the most interesting design element is one not actually highlighted here, viz. that the plan is to be able to rate any kind of work anywhere on the Internet. The era of "enclosed garden" portfolio systems may be drawing (thankfully) to an end.
    • Nils Peterson
       
      Interesting that David picked up this implication from the work; it's something we didn't say but I think we want to believe.
  • crowd-sourcing for assessment (you assess some of my students, I assess some of yours, for example) I wonder if the group has considered using Amazon's Mechanical Turk service as a cost-effective way of getting ratings from "the public."
    • Nils Peterson
       
      This is an interesting idea; I've started to follow up at Mechanical Turk and hope to develop a blog post.
Peggy Collins

Blackboard vs. Moodle: North Carolina Community Colleges Assessment - 0 views

  •  
    Feldstein summarizes the findings from North Carolina.
Joshua Yeidel

Editor's Note: Bending the Curve by Paul Glastris | Washington Monthly - 0 views

  • without measures of learning—education’s primary bottom line—there can be no real market discipline.
  • Instead, colleges can raise prices with relative impunity— and spend the extra money on everything but their students’ education. They can compete for fame and glory and stick students with the bill.
  •  
    Washington Monthly's College Guide includes an introduction by the editor which frames the issue of assessment in terms of the unsustainability of the current paradigm and its rising costs. He asserts that "the higher education market is missing a measure of value, quality divided by price". "Value measures would allow colleges to do what they can't do now: lower prices without being punished by the market." The introduction points to several other useful articles in the College Guide.