
CTLT and Friends: Group items tagged "embedded assessment"


Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Educa... - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown
       
      Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • her colleagues have created discipline-specific critical-reasoning tests for classics and political science
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline-based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale at which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest, the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown
       
      Missing the notion that the variance is in the activity more than the criteria. We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated, usually by unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.), this minimum level of basics that allows robust metaphor-based mapping.
Jayme Jacobson

Evaluating the effect of peer feedback on the quality of online discourse - 0 views

  • Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.
  •  
    Abstract: This study explores the effect on discussion quality of adding a feedback mechanism that presents users with an aggregate peer rating of the usefulness of the participant's contributions in online, asynchronous discussion. Participants in the study groups were able to specify the degree to which they thought any posted comment was useful to the discussion. Individuals were regularly presented with feedback (aggregated and anonymous) summarizing peers' assessment of the usefulness of their contribution, along with a summary of how the individuals rated their peers. Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.

    This is kind of a show-stopper. It's just one study, but when you look at the results there appears to be no effect whatsoever from peers giving feedback about the usefulness of discussion posts, nor any perceived improvement in the quality of the discussions as evaluated by faculty. It looks like we'll need to begin looking carefully at just what kinds of feedback will really make a difference. This follows up on Corinna's earlier post http://blogs.hbr.org/cs/2010/03/twitters_potential_as_microfee.html about short, immediate feedback being more effective than lengthier feedback, which can actually hinder performance. The trick will be to figure out just what kinds of feedback will actually work in embedded situations. It's interesting that an assessment of utility wasn't useful...?
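    To make the mechanism concrete, here is a minimal sketch of what "continuous, anonymous, aggregated feedback" on post usefulness could look like in code. This is not the study's actual instrument; the data shapes, the 1-5 usefulness scale, and the function name are illustrative assumptions.

```typescript
// Minimal sketch (not the study's instrument) of aggregating per-post
// usefulness ratings into the kind of anonymous summary described above.
// Interface names, the 1-5 scale, and summarizeFeedback() are assumptions.

interface UsefulnessRating {
  raterId: string;   // who rated (never shown to the post's author)
  authorId: string;  // whose post was rated
  postId: string;
  score: number;     // assumed 1 (not useful) .. 5 (very useful)
}

interface FeedbackSummary {
  ratingsReceived: number;
  meanUsefulnessReceived: number; // aggregated, so no single rater is identifiable
  meanUsefulnessGiven: number;    // how this participant rated peers, on average
}

function summarizeFeedback(
  ratings: UsefulnessRating[],
  participantId: string
): FeedbackSummary {
  const mean = (xs: number[]) =>
    xs.length === 0 ? 0 : xs.reduce((a, b) => a + b, 0) / xs.length;
  const received = ratings.filter((r) => r.authorId === participantId);
  const given = ratings.filter((r) => r.raterId === participantId);
  return {
    ratingsReceived: received.length,
    meanUsefulnessReceived: mean(received.map((r) => r.score)),
    meanUsefulnessGiven: mean(given.map((r) => r.score)),
  };
}
```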
Gary Brown

More thinking about the alignment project « The Weblog of (a) David Jones - 0 views

  • The dominant teaching experience for academics is teaching an existing course, generally one the academic has taught previously. In such a setting, academics spend most of their time fine-tuning a course or making minor modifications to material or content (Stark, 2000)
  • many academic staff continue to employ inappropriate, teacher-centered, content-focused strategies. If the systems and processes of university teaching and learning practice do not encourage and enable everyday consideration of alignment, is it surprising that many academics don’t consider alignment?
  • student learning outcomes are significantly higher when there are strong links between those learning outcomes, assessment tasks, and instructional activities and materials.
  • Levander and Mikkola (2009) describe the full complexity of managing alignment at the degree level, which makes it difficult for the individual teacher and the program coordinator to keep connections between courses in mind.
  • Make explicit the quality model.
  • Build in support for quality enhancement.
  • Institute a process for quality feasibility.
  • Cohen (1987) argues that limitations in learning are not mainly caused by ineffective teaching, but are instead mostly the result of a misalignment between what teachers teach, what they intend to teach, and what they assess as having been taught.
  • Raban (2007) observes that the quality management systems of most universities employ procedures that are retrospective and weakly integrated with long term strategic planning. He continues to argue that the conventional quality management systems used by higher education are self-defeating as they undermine the commitment and motivation of academic staff through an apparent lack of trust, and divert resources away from the core activities of teaching and research (Raban, 2007, p. 78).
  • Ensure participation of formal institutional leadership and integration with institutional priorities.
  • Action research perspective, flexible, responsive.
  • Having a scholarly, not bureaucratic focus.
  • Modifying an institutional information system.
  • A fundamental enabler of this project is the presence of an information system that is embedded into the everyday practice of teaching and learning (for both students and staff) that encourages and enables consideration of alignment.
  •  
    a long blog post, but the underlying principles align with the Guide to Effective Assessment on many levels.
Joshua Yeidel

Silverlight & WPF Chart - 0 views

  •  
    Visifire is a set of open source data visualization components - powered by Microsoft® Silverlight™ & WPF... Visifire can also be embedded in any webpage as a standalone Silverlight App. Visifire is independent of server side technology. It can be used with ASP, ASP.Net, PHP, JSP, ColdFusion, Ruby on Rails or just simple HTML. No radar charts, but for some things, this might be useful.
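    As a rough illustration of the "no server-side dependency" point, the sketch below shows how a chart might be embedded in a plain page via Visifire's JavaScript helper, written here as TypeScript. This is a minimal sketch, not authoritative usage: it assumes Visifire.js and SL.Visifire.Charts.xap are served next to the page, and the constructor/method names (Visifire, setDataXml, render) and the chart XML shape are assumptions drawn from memory of Visifire's documentation that should be checked against the version in use.

```typescript
// Minimal sketch: rendering a Visifire chart into an empty <div id="chartDiv">.
// Assumes <script src="Visifire.js"></script> is already on the page; the
// ambient declaration below describes the helper that script is assumed to
// provide, and may differ between Visifire versions.

declare class Visifire {
  constructor(xapSource: string, width?: number, height?: number);
  setDataXml(xml: string): void;        // chart is described as XML, no server round-trip
  render(targetElementId: string): void;
}

// The chart definition is plain XML, which is why any backend (ASP, PHP,
// Rails, ...) or a static HTML page can emit it. Element names below follow
// Visifire's XAML-style chart markup and are part of the sketch's assumptions.
const chartXml = `
  <vc:Chart xmlns:vc="clr-namespace:Visifire.Charts;assembly=SLVisifire.Charts"
            Width="500" Height="300">
    <vc:Chart.Series>
      <vc:DataSeries RenderAs="Column">
        <vc:DataSeries.DataPoints>
          <vc:DataPoint AxisXLabel="Q1" YValue="42" />
          <vc:DataPoint AxisXLabel="Q2" YValue="58" />
        </vc:DataSeries.DataPoints>
      </vc:DataSeries>
    </vc:Chart.Series>
  </vc:Chart>`;

const chart = new Visifire("SL.Visifire.Charts.xap", 500, 300);
chart.setDataXml(chartXml);
chart.render("chartDiv");
```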