Assessment literacy

A scholarly approach to solving the feedback dilemma in practice - Assessment & Evaluat... - 1 view

  • Paywall hack: paste ".sci-hub.io" after the .com part of the URL

Assessment rubrics: towards clearer and more replicable design, research and practice - 0 views

  • Paywall hack: paste ".sci-hub.io" after the .com part of the URL

Language Testing, July 2013, 30(3) (Special Issue on Language Assessment Literacy) - 1 view

  • Paywall hack: paste ".sci-hub.io" after the .com part of the URL

Taylor & Francis Online :: The quality of written peer feedback on undergraduates' draf... - 0 views

  • Paywall hack: paste ".sci-hub.io" after the .com part of the URL

All of the Above - 20 ways to cheat Multiple Choice questions - 0 views

Performance assessments may not be 'reliable' or 'valid.' So what? | Dangerou... - 0 views

  • Most of us recognize that more of our students need to be doing deeper, more complex thinking work more often. But if we want students to be critical thinkers and problem solvers and effective communicators and collaborators, that cognitively-complex work is usually more divergent rather than convergent. It is more amorphous and fuzzy and personal. It is often multi-stage and multimodal. It is not easily reduced to a number or rating or score. However, this does NOT mean that kind of work is incapable of being assessed. When a student creates something – digital or physical (or both) – we have ways of determining the quality and contribution of that product or project. When a student gives a presentation that compels others to laugh, cry, and/or take action, we have ways of identifying what made that an excellent talk. When a student makes and exhibits a work of art – or sings, plays, or composes a musical selection – or displays athletic skill – or writes a computer program – we have ways of telling whether it was done well. When a student engages in a service learning project that benefits the community, we have ways of knowing whether that work is meaningful and worthwhile. When a student presents a portfolio of work over time, we have ways of judging that. And so on…
  • If we continue to insist on judging performance assessments with the ‘validity’ and ‘reliability’ criteria traditionally used by statisticians and psychometricians, we never – NEVER – will move much beyond factual recall and procedural regurgitation to achieve the kinds of higher-level student work that we need more of.
  • “What score should we give the Mona Lisa? And what would the ‘objective’ rating criteria be?”
  • What I'm not sure people realise is that, at some point, reliability and validity can become mutually exclusive. The author describes a situation where reliability has triumphed over validity, which is very wrong, as any psychometrician can tell you.
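  • To put the "reliability" half of that trade-off in concrete terms: inter-rater reliability is usually quantified as chance-corrected agreement between raters, for example Cohen's kappa. Here is a minimal Python sketch; the raters and rubric scores are hypothetical, made up for illustration:

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters (Cohen's kappa)."""
        n = len(rater_a)
        # Observed agreement: fraction of items both raters scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Agreement expected by chance, from each rater's marginal score frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((freq_a[s] / n) * (freq_b[s] / n) for s in freq_a.keys() & freq_b.keys())
        return (p_o - p_e) / (1 - p_e)

    # Two hypothetical raters scoring ten student portfolios on a 1-4 rubric.
    rater_a = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
    rater_b = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]
    print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.72: substantial agreement

    The Mona Lisa quip above survives this arithmetic: two raters applying a poor rubric consistently could push kappa towards 1.0, and the statistic would never notice that the rubric measures the wrong thing. Reliability is about agreement; validity is about whether the thing agreed on is worth measuring.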