CTLT and Friends: Group items tagged evaluation

Theron DesRosier

CDC Evaluation Working Group: Framework - 2 views

  • Framework for Program Evaluation
  • Purposes: The framework was developed to summarize and organize the essential elements of program evaluation; provide a common frame of reference for conducting evaluations; clarify the steps in program evaluation; review standards for effective program evaluation; and address misconceptions about the purposes and methods of program evaluation.
  • Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions: What will be evaluated (i.e., what is "the program," and in what context does it exist)? What aspects of the program will be considered when judging program performance? What standards (i.e., type or level of performance) must be reached for the program to be considered successful? What evidence will be used to indicate how the program has performed? What conclusions regarding program performance are justified by comparing the available evidence to the selected standards? How will the lessons learned from the inquiry be used to improve public health effectiveness?
  • ...3 more annotations...
  • These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.
  • Steps in Evaluation Practice: Engage stakeholders (those involved, those affected, primary intended users); Describe the program (need, expected effects, activities, resources, stage, context, logic model); Focus the evaluation design (purpose, users, uses, questions, methods, agreements); Gather credible evidence (indicators, sources, quality, quantity, logistics); Justify conclusions (standards, analysis/synthesis, interpretation, judgment, recommendations); Ensure use and share lessons learned (design, preparation, feedback, follow-up, dissemination). Standards for "Effective" Evaluation: Utility (serve the information needs of intended users); Feasibility (be realistic, prudent, diplomatic, and frugal); Propriety (behave legally, ethically, and with due regard for the welfare of those involved and those affected); Accuracy (reveal and convey technically accurate information).
  • The challenge is to devise an optimal — as opposed to an ideal — strategy.
  •  
    Framework for Program Evaluation by the CDC. This is a good resource for program evaluation. Click through "Steps and Standards" for information on collecting credible evidence and engaging stakeholders.
Gary Brown

Sincerity in evaluation - highlights and lowlights « Genuine Evaluation - 3 views

  • Principles of Genuine Evaluation: When we set out to explore the notion of ‘Genuine Evaluation’, we identified 5 important aspects of it: VALUE-BASED – transparent and defensible values (criteria of merit and worth and standards of performance); EMPIRICAL – credible evidence about what has happened and what has caused this; USABLE – reported in such a way that it can be understood and used by those who can and should use it (which doesn’t necessarily mean it’s used or used well, of course); SINCERE – a commitment by those commissioning evaluation to respond to information about both success and failure (those doing evaluation can influence this but not control it); HUMBLE – acknowledges its limitations. From now until the end of the year, we’re looking at each of these principles and collecting some of the highlights and lowlights from 2010 (and previously).
  • Sincerity of evaluation is something that is often not talked about in evaluation reports, scholarly papers, or formal presentations, only discussed in the corridors and bars afterwards.  And yet it poses perhaps the greatest threat to the success of individual evaluations and to the whole enterprise of evaluation.
Joshua Yeidel

Jim Dudley on Letting Go of Rigid Adherence to What Evaluation Should Look Like | AEA365 - 1 views

  •  
    "Recently, in working with a board of directors of a grassroots organization, I was reminded of how important it is to "let go" of rigid adherence to typologies and other traditional notions of what an evaluation should look like. For example, I completed an evaluation that incorporated elements of all of the stages of program development - a needs assessment (e.g., how much do board members know about their programs and budget), a process evaluation (e.g., how well do the board members communicate with each other when they meet), and an outcome evaluation (e.g., how effective is their marketing plan for recruiting children and families for its programs)."
  •  
    Needs evaluation, process evaluation, outcomes evaluation -- all useful for improvement.
Gary Brown

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 1 views

  • For students, the act of filling out those forms is sometimes a fleeting, half-conscious moment. But for instructors whose careers can live and die by student evaluations, getting back the forms is an hour of high anxiety
  • "They have destroyed higher education." Mr. Crumbley believes the forms lead inexorably to grade inflation and the dumbing down of the curriculum.
  • Texas enacted a law that will require every public college to post each faculty member's student-evaluation scores on a public Web site.
  • ...10 more annotations...
  • The IDEA Center, an education research group based at Kansas State University, has been spreading its particular course-evaluation gospel since 1975. The central innovation of the IDEA system is that departments can tailor their evaluation forms to emphasize whichever learning objectives are most important in their discipline.
  • (Roughly 350 colleges use the IDEA Center's system, though in some cases only a single department or academic unit participates.)
  • The new North Texas instrument that came from these efforts tries to correct for biases that are beyond an instructor's control. The questionnaire asks students, for example, whether the classroom had an appropriate size and layout for the course. If students were unhappy with the classroom, and if it appears that their unhappiness inappropriately colored their evaluations of the instructor, the system can adjust the instructor's scores accordingly.
  • The survey instrument, known as SALG, for Student Assessment of their Learning Gains, is now used by instructors across the country. The project's Web site contains more than 900 templates, mostly for courses in the sciences.
  • "So the ability to do some quantitative analysis of these comments really allows you to take a more nuanced and effective look at what these students are really saying."
  • Mr. Frick and his colleagues found that his new course-evaluation form was strongly correlated with both students' and instructors' own measures of how well the students had mastered each course's learning goals.
  • Elaine Seymour, who was then director of ethnography and evaluation research at the University of Colorado at Boulder, was assisting with a National Science Foundation project to improve the quality of science instruction at the college level. She found that many instructors were reluctant to try new teaching techniques because they feared their course-evaluation ratings might decline.
  • "Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."
  • William H. Pallett, president of the IDEA Center, says that when course rating surveys are well-designed and instructors make clear that they care about them, students will answer honestly and thoughtfully.
  • In Mr. Bain's view, student evaluations should be just one of several tools colleges use to assess teaching. Peers should regularly visit one another's classrooms, he argues. And professors should develop "teaching portfolios" that demonstrate their ability to do the kinds of instruction that are most important in their particular disciplines. "It's kind of ironic that we grab onto something that seems fixed and fast and absolute, rather than something that seems a little bit messy," he says. "Making decisions about the ability of someone to cultivate someone else's learning is inherently a messy process. It can't be reduced to a formula."
  •  
    Old friends at the IDEA Center, and an old but persistent issue.
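One annotation above describes the North Texas instrument adjusting an instructor's ratings when classroom conditions beyond the instructor's control appear to have colored the scores. The article does not say how that adjustment is computed; the following is only a minimal, hypothetical sketch of one common approach (a linear-regression covariate adjustment), with all variable names and numbers invented for illustration.

```python
import numpy as np

# Hypothetical section-level data: one row per course section.
# overall_rating: mean student rating of the instructor (1-5 scale)
# room_suitability: mean student rating of the room's size/layout (1-5 scale)
rng = np.random.default_rng(0)
room_suitability = rng.uniform(1, 5, size=200)
overall_rating = 3.5 + 0.3 * (room_suitability - 3) + rng.normal(0, 0.4, size=200)

# Estimate how much the room rating tends to move the overall rating.
slope, intercept = np.polyfit(room_suitability, overall_rating, 1)

def adjusted_score(raw_rating: float, room_score: float) -> float:
    """Remove the estimated room effect, re-centered on an 'average' room (3.0),
    so instructors stuck in poor rooms are not penalized for it."""
    return raw_rating - slope * (room_score - 3.0)

# Same raw rating, different rooms: the adjustment nudges the scores apart.
print(round(adjusted_score(3.8, room_score=1.5), 2))  # slightly boosted
print(round(adjusted_score(3.8, room_score=4.5), 2))  # slightly lowered
```

A real instrument would control for several such factors at once and validate the model before using it in personnel decisions; this sketch only illustrates the direction of the idea.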
Gary Brown

Empowerment Evaluation - 1 views

  • Empowerment Evaluation in Stanford University's School of Medicine
  • Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes.
  • It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform.
  • ...8 more annotations...
  • The steps of empowerment evaluation
  • designating a “critical friend” to communicate areas of potential improvement
  • collecting evaluation data
  • encouraging a cycle of reflection and action
  • establishing a culture of evidence
  • developing reflective educational practitioners
  • cultivating a community of learners
  • yearly cycles of improvement at the Stanford University School of Medicine
  •  
    The findings were presented in Academic Medicine, a medical education journal, earlier this year.
Joshua Yeidel

Internal Evaluation Week: Debbie Cohen on Working with External Evaluators | AEA365 - 3 views

  •  
    "Here are tips related to internal and external evaluators working together."
  •  
    Reading "point" for "internal evaluator" and "OAI contact" for "external evaluator"? Four "Hot Tips" that may seem obvious, but shouldn't be glossed over.
Nils Peterson

Jeff Sheldon on the Readiness for Organizational Learning and Evaluation instrument | A... - 4 views

shared by Nils Peterson on 01 Nov 10
  • The ROLE consists of 78 items grouped into six major constructs: 1) Culture, 2) Leadership, 3) Systems and Structures, 4) Communication, 5) Teams, and 6) Evaluation.
    • Nils Peterson
       
      You can look up the book on Amazon, use the "look inside" view, and search for Appendix A to read the survey items: http://www.amazon.com/Evaluation-Organizations-Systematic-Enhancing-Performance/dp/0738202681#reader_0738202681 This might be useful to OAI in assessing readiness (or in understanding what in the university culture challenges readiness), or it might inform our revision of our rubric (or justify staying out of it). An initial glance suggests that some cultural constructs in the university run counter to what the ROLE instrument's analysis treats as readiness.
  •  
    " Readiness for Organizational Learning and Evaluation (ROLE). The ROLE (Preskill & Torres, 2000) was designed to help us determine the level of readiness for implementing organizational learning, evaluation practices, and supporting processes"
  •  
    An interesting possibility for a Skylight survey (but more reading needed)
Joshua Yeidel

Digication :: NCCC Art Department Program Evaluation :: Purpose of Evaluation - 0 views

  •  
    An eportfolio for program evaluation by the Northwest Connecticut Community College Art Department. Slick, well-organized, and pretty, using Digication as platform and host. A fine portfolio, which could well be a model for our programs, except that there is not a single direct measure of student learning outcomes.
Joshua Yeidel

Outcomes and Distributions in Program Evaluation - 2 views

  •  
    "The key here is to understand that looking only at the total outcome of a program limits your ability to use evaluation data for program improvement."
  •  
    Eric Graig discusses the need to slice and dice the data; a brief illustration follows.
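As a quick, made-up illustration of the point (the numbers below are invented, not from Graig's post): two programs can report the same average outcome while serving participants very differently.

```python
import numpy as np

# Two hypothetical programs with the same mean outcome score
# but very different distributions across participants.
program_a = np.array([70, 71, 69, 70, 72, 68, 70, 70])  # modest gains for everyone
program_b = np.array([95, 95, 95, 95, 45, 45, 45, 45])  # strong for half, weak for half

for name, scores in [("A", program_a), ("B", program_b)]:
    print(f"Program {name}: mean = {scores.mean():.1f}, "
          f"min = {scores.min()}, max = {scores.max()}, "
          f"share below 60 = {(scores < 60).mean():.0%}")
# Both means are 70.0, but only the distribution reveals that Program B
# leaves half of its participants behind -- the pattern a total-only report hides.
```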
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      Assumes a test as the form of assessment, rather than something more open-ended.
  • ...8 more annotations...
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our "Where are you on the spectrum": Assessment is locked vs. open.
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less defined community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Theron DesRosier

Education Data Model (National Forum on Education Statistics). Strategies for building ... - 0 views

  •  
    "The National Education Data Model is a conceptual but detailed representation of the education information domain focused at the student, instructor and course/class levels. It delineates the relationships and interdependencies between the data elements necessary to document, operate, track, evaluate, and improve key aspects of an education system. The NEDM strives to be a shared understanding among all education stakeholders as to what information needs to be collected and managed at the local level in order to enable effective instruction of students and superior leadership of schools. It is a comprehensive, non-proprietary inventory and a map of education information that can be used by schools, LEAs, states, vendors, and researchers to identify the information required for teaching, learning, administrative systems, and evaluation of education programs and approaches. "
Gary Brown

As Colleges Switch to Online Course Evaluations, Students Stop Filling Them Out - The T... - 1 views

  • Colleges thought they were enhancing efficiency when they moved their course evaluations online, but an unintended consequence of the shift to evaluations not filled out in class is that students started skipping them altogether, The Boston Globe reported today.
  •  
    The letters are more interesting than the article--especially the Boston Globe link where Mazur weighs in.
  •  
    The whole issue of online evals is a big one, especially relevant as part of program evaluation. Perhaps a design circle topic? (...I couldn't find Mazur's comments)
Gary Brown

Online Evaluations Show Same Results, Lower Response Rate - Wired Campus - The Chronicl... - 1 views

  • Students give the same responses on paper as on online course evaluations but are less likely to respond to online surveys, according to a recent study.
  • The only meaningful difference between student ratings completed online and on paper was that students who took online surveys gave their professors higher ratings for using educational technology to promote learning.
  • Seventy-eight percent of students enrolled in classes with paper surveys responded to them, but only 53 percent of students enrolled in classes with online surveys responded.
  • ...2 more annotations...
  • "If you have lower response rates, you're less inclined to make summative decisions about a faculty member's performance,"
  • While the majority of instructors still administer paper surveys, the number using online surveys increased from 1.08 percent in 2002 to 23.23 percent in 2008.
  •  
    Replication of our own studies.
Gary Brown

Details | LinkedIn - 0 views

  • Although different members of the academic hierarchy take on different roles regarding student learning, student learning is everyone’s concern in an academic setting. As I specified in my article comments, universities would do well to use their academic support units, which often have evaluation teams (or a designated evaluator) to assist in providing boards the information they need for decision making. Perhaps boards are not aware of those serving in evaluation roles at the university or how those staff members can assist boards in their endeavors.
  • Gary Brown • We have been using the Internet to post program assessment plans and reports (the programs that support this initiative at least), our criteria (rubric) for reviewing them, and then inviting external stakeholders to join in the review process.
Gary Brown

GAO - Generally Accepted Government Auditing Standards - 1 views

  • Our evaluator colleagues who work at GAO, and many others working in agencies and organizations that are responsible for oversight of, and focus on accountability for, government programs, often refer to the Yellow Book Standards. These agencies or organizations emphasize the importance of their independence from program officials and enjoy significant protections for their independence through statutory provisions, organizational location apart from program offices, direct reporting channels to the highest level official in their agency and governing legislative bodies, heightened tenure protections, and traditions emphasizing their independence.
  •  
    Good to have on the radar as the DOE challenges the efficacy of accreditation; not incidentally, it underpins a principle of good evaluation.
Joshua Yeidel

Evaluating Teachers: The Important Role of Value-Added [pdf] - 1 views

  •  
    "We conclude that value-added data has an important role to play in teacher evaluation systems, but that there is much to be learned about how best to use value-added information in human resource decisions." No mention of the role of assessment in improvement.
Joshua Yeidel

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 0 views

  •  
    Four (somewhat) different types of course evals, closing with a tip of the hat to multiple measures.
Gary Brown

Don't Shrug Off Student Evaluations - The Chronicle Review - The Chronicle of Higher Ed... - 0 views

  • On their most basic level, student evaluations are important because they open the doors of our classrooms. It is one of the remarkable ironies of academe that while we teachers seek to open the minds of our students—to shine a light on hypocrisy, illusion, corruption, and distortion; to tell the truth of our disciplines as we see it—some of us want that classroom door to be closed to the outside world. It is as if we were living in some sort of academic version of the Da Vinci code: Only insiders can know the secret handshake.
  •  
    A Chronicle version that effectively surveys the issues. Maybe nothing new, but a few nuggets.
Gary Brown

Read methods online for free - Methodspace - home of the Research Methods community - 1 views

  • Read methods online
  • Book of the month What Counts as Credible Evidence in Applied Research and Evaluation Practice?
  •  
    This site may be valuable for professional development. We have reason to explore what the evaluation community holds as "credible" evidence, the topic of the chapter the group is reading this month.
Joshua Yeidel

Key Steps in Outcome Management - 0 views

  •  
    First in a series from the Urban Institute on outcome management for non-profits, for an audience of non-evaluation-savvy leadership and staff. Lots to steal here if we ever create an Assessment Handbook for WSU.