CTLT and Friends: Group items tagged "assessment"

Kimberly Green

Learning Assessments: Let the Faculty Lead the Way - Measuring Stick - The Chronicle of...

  • Pat Hutchings: Nice summary of the barriers to faculty involvement in useful assessment, and a suggestion that we look at the specific cases where it has been successful despite the barriers. This short piece might be a good prompt for discussion with chairs, deans, and assessment committees. Do we have any really strong models?
Joshua Yeidel

Digication e-Portfolios: Highered - Assessment

  • "Our web-based assessment solution for tracking, comparing, and reporting on student progress and performance gives faculty and administrators the tools they need to assess a class, department, or institution based on your standards, goals, or objectives. The Digication AMS integrates tightly with our award winning e-Portfolio system, enabling students to record and showcase learning outcomes within customizable, media friendly templates."
  • Could this start out with program portfolios, and grow to include student work?
Gary Brown

Validity and Reliability in Higher Education Assessment

  • However, validity and reliability are not inherent features of assessments or assessment systems and must be monitored continuously throughout the design and implementation of an assessment system. Research studies of a theoretical or empirical nature addressing methodology for ensuring and testing validity and reliability in the higher education assessment process, results of validity and reliability studies, and novel approaches to the concepts of validity and reliability in higher education assessment are all welcome. To be most helpful in this academic exchange, empirical studies should be clear and explicit about their methodology so that others can replicate or advance their research.
  • We should take this opportunity seriously and write up our work. Let me know if you want to join me.
Joshua Yeidel

Refining the Recipe for a Degree, Ingredient by Ingredient - Government - The Chronicle...

  • Supporters of the Lumina project say it holds the promise of turning educational assessment from a process that some academics might view as a threat into one that holds a solution, while also creating more-rigorous expectations for student learning. Mr. Jones, the Utah State history-department chairman, recounted in an essay published in the American Historical Association's Perspectives on History how he once blithely told an accreditation team that "historians do not measure their effectiveness in outcomes." But he has changed his mind. The Lumina project, and others, help define what learning is achieved in the process of earning a degree, he said, moving beyond Americans' heavy reliance on the standardized student credit hour as the measure of an education. "The demand for outcomes assessment should be seized as an opportunity for us to actually talk about the habits of mind our discipline needs to instill in our students," Mr. Jones wrote. "It will do us a world of good, and it will save us from the spreadsheets of bureaucrats."
  • The Lumina Foundation pushes a European-style process to define education goals state- and nation-wide, with mixed success. "Chemistry, history, math, and physics have been among the most successful," while others have had a hard time beginning.
Theron DesRosier

Assessment 2.0

  • This is a critique of assessment 1.0 with few suggestions for remedy. Subtitle: "Modernising assessment in the age of Web 2.0."
Theron DesRosier

OECD Feasibility Study for the International Assessment of Higher Education Learning Ou...

  • "What is AHELO? The OECD Assessment of Higher Education Learning Outcomes (AHELO) is a ground-breaking initiative to assess learning outcomes on an international scale by creating measures that would be valid for all cultures and languages. Between ten and thirty-thousand higher education students in over ten different countries will take part in a feasibility study to determine the bounds of this ambitious project, with an eye to the possible creation of a full-scale AHELO upon its completion."
Theron DesRosier

Tertiary21: 21st Century Assessment: The University of Farmville

  • Carnegie Mellon University Professor Jesse Schell's talk on the future of gaming is thought-provoking. It gives some interesting insights into what educational assessment might look like by the mid-21st century.
  • An interesting perspective on the future of assessment using the analogy of game design.
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa...

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown: Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline-based critical thinking. Still, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest, the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown: Missing the notion that the variance is in the activity more than the criteria. We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious-reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated this minimum level of basics, usually by unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.), which allows robust metaphor-based mapping.
Joshua Yeidel

Improving Your Assessment Processes: Q&A with Linda Suskie

  • "Q: What elements help make institutional effectiveness assessment successful? Suskie: One factor really stands out: If institutional leaders really value assessment results and use them to inform important decisions on important goals, your institutional effectiveness efforts will be a resounding success."
Joshua Yeidel

News: 'You Can't Measure What We Teach' - Inside Higher Ed

  • "Despite those diverging starting points, the discussion revealed quite a bit more common ground than any of the panelists probably would have predicted. Let's be clear: Where they ended up was hardly a breakthrough on the scale of solving the Middle East puzzle. But there was general agreement among them that:
    * Any effort to try to measure learning in the humanities through what McCulloch-Lovell deemed "[Margaret] Spellings-type assessment" -- defined as tests or other types of measures that could be easily compared across colleges and neatly sum up many of the learning outcomes one would seek in humanities students -- was doomed to fail, and should.
    * It might be possible, and could be valuable, for humanists to reach broad agreement on the skills, abilities, and knowledge they might seek to instill in their students, and that agreement on those goals might be a starting point for identifying effective ways to measure how well students have mastered those outcomes.
    * It is incumbent on humanities professors and academics generally to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do."
  • Assessment meeting at the accreditors -- driven by expectations of a demand for accountability, with not one mention of improvement.
Gary Brown

OECD Project Seeks International Measures for Assessing Educational Quality - Internati...

  • The first phase of an ambitious international study that intends to assess and compare learning outcomes in higher-education systems around the world was announced here on Wednesday at the conference of the Council for Higher Education Accreditation.
  • Richard Yelland, of the OECD's Education Directorate, is leading the project, which he said expects to eventually offer faculty members, students, and governments "a more balanced assessment of higher-education quality" across the organization's 31 member countries.
  • learning outcomes are becoming a central focus worldwide
  • the feasibility study is adapting the Collegiate Learning Assessment, an instrument developed by the Council for Aid to Education in the United States, to an international context.
  • At least six nations are participating in the feasibility study.
  • 14 countries are expected to participate in the full project, with an average of 10 institutions per country and about 200 students per institution,
  • The project's target population will be students nearing the end of three-year or four-year degrees, and will eventually measure student knowledge in economics and engineering.
  • While the goal of the project is not to produce another global ranking of universities, the growing preoccupation with such lists has crystallized what Mr. Yelland described as the urgency of pinning down what exactly it is that most of the world's universities are teaching and how well they are doing
  • Judith S. Eaton, president of the Council for Higher Education Accreditation, said she was also skeptical about whether the project would eventually yield common international assessment mechanisms.
  • Ms. Eaton noted that the same sets of issues recur across borders and systems: how best to enhance student learning and strengthen economic development and international competitiveness.
  • Another day, another press piece, again thinking in comparisons.
Gary Brown

Schmidt

  • There are a number of assessment methods by which learning can be evaluated (exam, practicum, etc.) for the purpose of recognition and accreditation, and there are a number of different purposes for the accreditation itself (i.e., job, social recognition, membership in a group, etc). As our world moves from an industrial to a knowledge society, new skills are needed. Social web technologies offer opportunities for learning, which build these skills and allow new ways to assess them.
  • This paper makes the case for a peer-based method of assessment and recognition as a feasible option for accreditation purposes. The peer-based method would leverage online communities and tools, for example digital portfolios, digital trails, and aggregations of individual opinions and ratings into a reliable assessment of quality. Recognition by peers can have a similar function as formal accreditation, and pathways to turn peer recognition into formal credits are outlined. The authors conclude by presenting an open education assessment and accreditation scenario, which draws upon the attributes of open source software communities: trust, relevance, scalability, and transparency.
  • Kinship here, and familiar friends.
Theron DesRosier

Envisioning the Post-LMS Era: The Open Learning Network (EDUCAUSE Quarterly) | EDUCAUSE

  • A featured article in Educause Quarterly contains this quote: "The importance of authentic, web-enabled learner assessment is clearly behind Caulfield's notion of "loosely coupled assessment" (first coined in a blog post by Mike Caulfield July 31, 2007) and WSU's harvesting gradebook project, with which we claim shared intellectual roots."
Kimberly Green

Movie Clips and Copyright

  • Video clips -- sometimes the copyright question comes up, so this green light is good news. Video clips may lend themselves to scenario-based assessments -- instead of reading a long article, students could look at a digitally presented case to analyze and critique -- which might open up a lot of possibilities for assessment activities. The latest round of rule changes, issued Monday by the U.S. Copyright Office, deals with what is legal and what is not as far as decrypting and repurposing copyrighted content. One change in particular is making waves in academe: an exemption that allows professors in all fields and "film and media studies students" to hack encrypted DVD content and clip "short portions" into documentary films and "non-commercial videos." (The agency does not define "short portions.") This means that any professor can legally extract movie clips and incorporate them into lectures, as long as they are willing to decrypt them -- a task made relatively easy by widely available programs known as "DVD rippers." The exemption also permits professors to use ripped content in non-classroom settings that are similarly protected under "fair use" -- such as presentations at academic conferences.
Gary Brown

Measuring Student Learning: Many Tools - Measuring Stick - The Chronicle of Higher Educ...

  • The issue that needs to be addressed and spectacularly has been avoided is whether controlled studies (one group does the articulation of and then measurement of outcomes, and a control group does what we have been doing before this mania took hold) can demonstrate or falsify the claim that outcomes assessment results in better-educated students. So far as I can tell, we instead gather data on whether we have in fact been doing outcomes assessment. Not the issue, people. jwp
  • The challenge -- not the control study this person calls for, but the perception that outcomes assessment produces outcomes....
Joshua Yeidel

Cheaters Never Win, at Least in Physics, a Professor Finds - Wired Campus - The Chronic...

  • "The professor said he did find a way to greatly reduce cheating on homework in his class. He switched to a "studio" model of teaching, in which students sit in small groups working through tutorials on computers while professors and teaching assistants roam the room answering questions, rather than a traditional lecture. With lectures, he detected cheating on about 11 percent of homework problems, but now he detects copying on only about 3 percent of them. It might help that he shares findings from his study with his students, showing them that cheaters are much more likely to get C's and D's on exams than those who work out homework problems on their own."
  • A physics professor figures out how to detect cheating on homework (an assessment), triangulates with another assessment (OK, it's an exam) to determine the impact on student learning, and modifies his teaching practice, reducing cheating from 11% to 3%.
  • This is a happy ending and an interesting, useful outcome from assessment. There is also a kind of "duh" quality to it, but that is not a problem. What I'm pondering, though, is the time and energy expended for an 8-percentage-point gain. What I hope follows is further assessment of the gains for the other 97% relative to learning and attitude toward learning, school, and the subject matter.....
Corinna Lo

IJ-SoTL - A Method for Collaboratively Developing and Validating a Rubric

  • "Assessing student learning outcomes relative to a valid and reliable standard that is academically-sound and employer-relevant presents a challenge to the scholarship of teaching and learning. In this paper, readers are guided through a method for collaboratively developing and validating a rubric that integrates baseline data collected from academics and professionals. The method addresses two additional goals: (1) to formulate and test a rubric as a teaching and learning protocol for a multi-section course taught by various instructors; and (2) to assure that students' learning outcomes are consistently assessed against the rubric regardless of teacher or section. Steps in the process include formulating the rubric, collecting data, and sequentially analyzing the techniques used to validate the rubric and to insure precision in grading papers in multiple sections of a course."
Gary Brown

Want Students to Take an Optional Test? Wave 25 Bucks at Them - Students - The Chronicl...

  • Cash appears to be the single best approach for colleges trying to recruit students to volunteer for institutional assessments and other low-stakes tests with no bearing on their grades.
  • American Educational Research Association
  • A college's choice of which incentive to offer does not appear to have a significant effect on how students end up performing, but it can have a big impact on colleges' ability to round up enough students for the assessments, the study found.
  • "I cannot provide you with the magic bullet that will help you recruit your students and make sure they are performing to the maximum of their ability," Mr. Steedle acknowledged to his audience at the Denver Convention Center. But, he said, his study results make clear that some recruitment strategies are more effective than others, and also offer some notes of caution for those examining students' scores.
  • The study focused on the council's Collegiate Learning Assessment, or CLA, an open-ended test of critical thinking and writing skills which is annually administered by several hundred colleges. Most of the colleges that use the test try to recruit 100 freshmen and 100 seniors to take it, but doing so can be daunting, especially for colleges that administer it in the spring, right when the seniors are focused on wrapping up their work and graduating.
  • The incentives that spurred students the least were the opportunity to help their college as an institution assess student learning, the opportunity to compare themselves to other students, a promise they would be recognized in some college publication, and the opportunity to put participation in the test on their resume.
  • The incentives which students preferred appeared to have no significant bearing on their performance. Those who appeared most inspired by a chance to earn 25 dollars did not perform better on the CLA than those whose responses suggested they would leap at the chance to help out a professor.
  • What accounted for differences in test scores? Students' academic ability going into the test, as measured by characteristics such as their SAT scores, accounted for 34 percent of the variation in CLA scores among individual students. But motivation, independent of ability, accounted for 5 percent of the variation in test scores—a finding that, the paper says, suggests it is "sensible" for colleges to be concerned that students with low motivation are not posting scores that can allow valid comparisons with other students or valid assessments of their individual strengths and weaknesses.
  • A major limitation of the study was that Mr. Steedle had no way of knowing how the students who took the test were recruited. "If many of them were recruited using cash and prizes, it would not be surprising if these students reported cash and prizes as the most preferable incentives," his paper concedes.
  • Since it is not clear if the incentive to participate in this study influenced the decision to participate, it remains similarly unclear if incentives to participate correlate with performance.
Gary Brown

Scholars Assess Their Progress on Improving Student Learning - Research - The Chronicle...

  • International Society for the Scholarship of Teaching and Learning, which drew 650 people. The scholars who gathered here were cautiously hopeful about colleges' commitment to the study of student learning, even as the Carnegie Foundation winds down its own project. (Mr. Shulman stepped down as president last year, and the foundation's scholarship-of-teaching-and-learning program formally came to an end last week.) "It's still a fragile thing," said Pat Hutchings, the Carnegie Foundation's vice president, in an interview here. "But I think there's a huge amount of momentum." She cited recent growth in faculty teaching centers,
  • Mary Taylor Huber, director of the foundation's Integrative Learning Project, said that pressure from accrediting organizations, policy makers, and the public has encouraged colleges to pour new resources into this work.
  • The scholars here believe that it is much more useful to try to measure and improve student learning at the level of individual courses. Institutionwide tests like the Collegiate Learning Assessment have limited utility at best, they said.
  • Mr. Bass and Toru Iiyoshi, a senior strategist at the Massachusetts Institute of Technology's office of educational innovation and technology, pointed to an emerging crop of online multimedia projects where college instructors can share findings about their teaching. Those sites include Merlot and the Digital Storytelling Multimedia Archive.
  • "If you use a more generic instrument, you can give the accreditors all the data in the world, but that's not really helpful to faculty at the department level," said the society's president, Jennifer Meta Robinson, in an interview. (Ms. Robinson is also a senior lecturer in communication and culture at Indiana University at Bloomington.)
  • "We need to create 'middle spaces' for the scholarship of teaching and learning," said Randall Bass, assistant provost for teaching and learning initiatives at Georgetown University, during a conference session on Friday.
  • It is vital, Ms. Peseta said, for scholars' articles about teaching and learning to be engaging and human. But at the same time, she urged scholars not to dumb down their statistical analyses or the theoretical foundations of their studies. She even put in a rare good word for jargon.
  • No one had a ready answer. Ms. Huber, of the Carnegie Foundation, noted that a vast number of intervening variables make it difficult to assess the effectiveness of any educational project.
  • "Well, I guess we have a couple of thousand years' worth of evidence that people don't listen to each other, and that we don't build knowledge," Mr. Bass quipped. "So we're building on that momentum."
  • Note our friends Randy Bass (AAEEBL) and Mary Huber are prominent.
Theron DesRosier

An Expert Surveys the Assessment Landscape - The Chronicle of Higher Education

  • What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership.
  • "What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership." -- George Kuh, director of the National Institute for Learning Outcomes Assessment.
  • Kuh also says, "So we're going to spend some time looking at the impact of the Voluntary System of Accountability. It's one thing for schools to sign up, it's another to post the information and to show that they're actually doing something with it. It's not about posting a score on a Web site -- it's about doing something with the data." He doesn't take the next step and ask whether it is even possible for schools to actually do anything with the data collected from the CLA, or ask who has access to the criteria: Students? Faculty? Anyone?