CTLT and Friends: Group items tagged outcomes_assessment

Judy Rumph

about | outcomes_assessment | planning | NYIT

shared by Judy Rumph on 17 Aug 10
  • The Assessment Committee of NYIT's Academic Senate is the institutional unit that brings together all program assessment activities at the university - for programs with and without professional accreditation, for programs at all locations, and for programs offered through all delivery mechanisms. Committee members come from all academic schools and numerous support departments. Its meetings are open, and minutes are posted on the Academic Senate's web site.
  • This page made me think about the public face of our own assessment process and how that can influence perceptions about our process.
Nils Peterson

Half an Hour: Open Source Assessment

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson: I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum, I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning), so I wasn't that surprised. But when I pressed the discussion, the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson: Assumes a test as the form of assessment, rather than something more open-ended.
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson: See our 'Where are you on the spectrum' (Assessment is locked vs. open).
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • There is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender-system processes).
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards-based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less defined community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  • Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
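The last annotation above describes reputation as a composite score built from community inputs, varying from community to community. As a purely illustrative aside, here is a minimal Python sketch of one such composite, assuming a simple weighted average; the names, weights, and 0-5 rating scale are hypothetical, since Downes does not specify any particular mechanism.

from dataclasses import dataclass


@dataclass
class Rating:
    rater: str     # hypothetical community member giving the rating
    score: float   # rating on an assumed 0.0-5.0 scale
    weight: float  # how much this community trusts the rater


def composite_reputation(ratings: list[Rating]) -> float:
    """Combine community inputs into one composite 'reputation' score.

    Each community supplies its own weights, so the same person can
    hold a different reputation in different communities.
    """
    if not ratings:
        return 0.0
    total_weight = sum(r.weight for r in ratings)
    return sum(r.score * r.weight for r in ratings) / total_weight


# The same two inputs, weighted differently by two communities:
expert_weighted = [Rating("peer_reviewer", 4.5, 3.0), Rating("student", 5.0, 1.0)]
flat_weighted = [Rating("peer_reviewer", 4.5, 1.0), Rating("student", 5.0, 1.0)]

print(composite_reputation(expert_weighted))  # 4.625
print(composite_reputation(flat_weighted))    # 4.75

The two weightings illustrate the excerpt's closing point: identical inputs yield a different 'reputation' in each community.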
Corinna Lo

Blackboard Outcomes Assessment Webcast - Moving Beyond Accreditation: Using Institution...

  • The first 12 minutes of the webcast are worth watching. He opened with the story of the investigation of a cholera outbreak in Victorian-era London and related it to student success. He then summarized the key methods of measurement and some lessons learned: an interdisciplinary approach led to unconventional yet innovative methods of investigation; the researchers relied on multiple forms of measurement to reach their conclusion; and the visualization of their data was important to proving their case to others.
Corinna Lo

News: The Challenge of Comparability - Inside Higher Ed

  • But when it came to defining sets of common learning outcomes for specific degree programs -- Transparency by Design's most distinguishing characteristic -- commonality was hard to come by. Questions to apply to any institution could be: 1) For any given program, what specific student learning outcomes are graduates expected to demonstrate? 2) By what standards and measurements are students being evaluated? 3) How well have graduating students done relative to these expectations? Comparability of results (the third question) depends on transparency of goals and expectations (the first) and transparency of measures (the second).