
CTLT and Friends: Group items tagged "methods"


Corinna Lo

IJ-SoTL - A Method for Collaboratively Developing and Validating a Rubric - 1 views

  •  
    "Assessing student learning outcomes relative to a valid and reliable standard that is academically-sound and employer-relevant presents a challenge to the scholarship of teaching and learning. In this paper, readers are guided through a method for collaboratively developing and validating a rubric that integrates baseline data collected from academics and professionals. The method addresses two additional goals: (1) to formulate and test a rubric as a teaching and learning protocol for a multi-section course taught by various instructors; and (2) to assure that students' learning outcomes are consistently assessed against the rubric regardless of teacher or section. Steps in the process include formulating the rubric, collecting data, and sequentially analyzing the techniques used to validate the rubric and to insure precision in grading papers in multiple sections of a course."
Gary Brown

Read methods online for free - Methodspace - home of the Research Methods community - 1 views

  • Read methods online
  • Book of the month: What Counts as Credible Evidence in Applied Research and Evaluation Practice?
  •  
    This site may be valuable for professional development. We have reason to explore what the evaluation community holds as "credible" evidence, which is the chapter the group is reading this month.
Theron DesRosier

Half an Hour: The New Nature of Knowledge - 0 views

  •  
    The very forms of reason and enquiry employed in the classroom must change. Instead of seeking facts and underlying principles, students need to be able to recognize patterns and use things in novel ways. Instead of systematic methodical enquiry, such as might be characterized by Hempel's Deductive-Nomological method, students need to learn active and participative forms of enquiry. Instead of deference to authority, students need to embrace diversity and recognize (and live with) multiple perspectives and points of view. I think that there is a new type of knowledge, that we recognize it - and are forced to recognize it - only because new technologies have enabled many perspectives, many points of view, to be expressed, to interact, to forge new realities, and that this form of knowledge emerges from our cooperative interactions with each other, and is not found in the doctrines or dictates of any one of us.
Corinna Lo

Blackboard Outcomes Assessment Webcast - Moving Beyond Accreditation: Using Institution... - 0 views

  •  
    The first 12 minutes of the webcast are worth watching. He opened with the story of the investigation of the cholera outbreak in Victorian London and connected it to student success. He then summarized the key methods of measurement and some lessons learned: an "interdisciplinary" approach led to unconventional yet innovative methods of investigation; the researchers relied on multiple forms of measurement to reach their conclusion; and the visualization of their data was important to proving their case to others.
Peggy Collins

Official Google Docs Blog: Electronic Portfolios with Google Apps - 0 views

  •  
    Looks like Google has officially adopted Helen Barrett's method of e-portfolios with Google Apps. Posted on the "Google Docs Blog."
Gary Brown

Matthew Lombard - 0 views

  • Which measure(s) of intercoder reliability should researchers use? There are literally dozens of different measures, or indices, of intercoder reliability. Popping (1988) identified 39 different "agreement indices" for coding nominal categories, which excludes several techniques for interval and ratio level data. But only a handful of techniques are widely used. In communication the most widely used indices are percent agreement, Holsti's method, Scott's pi (p), Cohen's kappa (k), and Krippendorff's alpha (a). Just some of the indices proposed, and in some cases widely used, in other fields are Perreault and Leigh's (1989) Ir measure; Tinsley and Weiss's (1975) T index; Bennett, Alpert, and Goldstein's (1954) S index; Lin's (1989) concordance coefficient; Hughes and Garrett's (1990) approach based on Generalizability Theory; and Rust and Cooil's (1994) approach based on "Proportional Reduction in Loss" (PRL). It would be nice if there were one universally accepted index of intercoder reliability. But despite all the effort that scholars, methodologists and statisticians have devoted to developing and testing indices, there is no consensus on a single, "best" one. While there are several recommendations for Cohen's kappa (e.g., Dewey (1983) argued that despite its drawbacks, kappa should still be "the measure of choice") and this index appears to be commonly used in research that involves the coding of behavior (Bakeman, 2000), others (notably Krippendorff, 1978, 1987) have argued that its characteristics make it inappropriate as a measure of intercoder agreement.
  •  
    for our formalizing of assessment work
  •  
    inter-rater reliability (a short computational sketch of two of these indices follows below)
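
As a companion to the excerpt above, here is a small sketch in Python (standard library only) of two of the simpler indices it names: percent agreement and Cohen's kappa for two coders assigning nominal categories. The codes are invented for illustration.

    from collections import Counter

    # Invented nominal codes from two coders rating the same ten units.
    coder1 = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
    coder2 = ["A", "B", "B", "B", "C", "A", "A", "C", "A", "B"]

    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n  # percent agreement

    # Chance agreement estimated from each coder's marginal distribution.
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

    kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
    print("Percent agreement: %.2f" % observed)  # 0.80
    print("Cohen's kappa:     %.2f" % kappa)     # 0.69

The other indices mentioned (notably Krippendorff's alpha) handle multiple coders, missing data, and ordinal or interval data, which is where dedicated statistical packages are usually preferable to a hand-rolled calculation like this one.
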
Gary Brown

Schmidt - 3 views

  • There are a number of assessment methods by which learning can be evaluated (exam, practicum, etc.) for the purpose of recognition and accreditation, and there are a number of different purposes for the accreditation itself (i.e., job, social recognition, membership in a group, etc). As our world moves from an industrial to a knowledge society, new skills are needed. Social web technologies offer opportunities for learning, which build these skills and allow new ways to assess them.
  • This paper makes the case for a peer-based method of assessment and recognition as a feasible option for accreditation purposes. The peer-based method would leverage online communities and tools, for example digital portfolios, digital trails, and aggregations of individual opinions and ratings into a reliable assessment of quality. Recognition by peers can have a similar function as formal accreditation, and pathways to turn peer recognition into formal credits are outlined. The authors conclude by presenting an open education assessment and accreditation scenario, which draws upon the attributes of open source software communities: trust, relevance, scalability, and transparency.
  •  
    Kinship here, and familiar friends.
Joshua Yeidel

5 Non-Western Teaching Strategies - Commentary - The Chronicle of Higher Education - 1 views

  •  
    Methods and attitudes from other cultures can enliven the classroom.
Matthew Tedder

Accountable Talk: (Un)intended Consequences - 2 views

  •  
    Nutty method of teacher evaluation
Gary Brown

Cross-Disciplinary Grading Techniques - ProfHacker - The Chronicle of Higher Education - 1 views

  • So far, the most useful tool to me, in physics, has been the rubric, which is used widely in grading open-ended assessments in the humanities.
  • This method has revolutionized the way I grade. No longer do I have to keep track of how many points are deducted from which type of misstep on what problem for how many students. In the past, I often would get through several tests before I realized that I wasn’t being consistent with the deduction of points, and then I’d have to go through and re-grade all the previous tests. Additionally, the rubric method encourages students to refer to a solution, which I post after the test is administered, and they are motivated to meet with me in person to discuss why they got a 2 versus a 3 on a given problem, for example.
  • This opens up the opportunity to talk with them personally about their problem-solving skills and how they can better them. The emphasis is moved away from point-by-point deductions and is redirected to a more holistic view of problem solving.
  •  
    In the heart of the home of the concept inventory--Physics
Corinna Lo

How People Learn: Brain, Mind, Experience, and School: Expanded Edition - 0 views

  •  
    This book offers exciting new research about the mind and the brain that provides answers to a number of compelling questions. When do infants begin to learn? How do experts learn and how is this different from non-experts? What can teachers and schools do--with curricula, classroom settings, and teaching methods--to help children learn most effectively? New evidence from many branches of science has significantly added to our understanding of what it means to know, from the neural processes that occur during learning to the influence of culture on what people see and absorb. You can read the entire book online for free.
Theron DesRosier

The Problem with the Data-Information-Knowledge-Wisdom Hierarchy - The Conversation - H... - 3 views

  •  
    "But knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge - especially "actionable" knowledge - by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles."
  •  
    An interesting take on assumptions about knowledge.
  •  
    Really interesting quote, Theron. I wonder if it's a chunk that could be used as a prompt for a faculty discussion, to open up the dialogue about what is learning. And then how does a program design a curriculum and syllabi / assignments to teach and assess, towards a much broader understanding of knowledge (and skills)?
Corinna Lo

A comparison of consensus, consistency, and measurement approaches to estimating interr... - 2 views

  •  
    "The three general categories for computing interrater reliability introduced and described in this paper are: 1) consensus estimates, 2) consistency estimates, and 3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results."
Theron DesRosier

Revolution in the Classroom - The Atlantic (August 12, 2009) - 0 views

  •  
    An article in the Atlantic today by Clayton Christensen discusses "Revolution in the Classroom." In a paragraph on data collection he says the following: "Creating effective methods for measuring student progress is crucial to ensuring that material is actually being learned. And implementing such assessments using an online system could be incredibly potent: rather than simply testing students all at once at the end of an instructional module, this would allow continuous verification of subject mastery as instruction was still underway. Teachers would be able to receive constant feedback about progress or the lack thereof and then make informed decisions about the best learning path for each student. Thus, individual students could spend more or less time, as needed, on certain modules. And as long as the end result - mastery - was the same for all, the process and time allotted for achieving it need not be uniform." The "module" focus is a little disturbing, but the rest is helpful.
Peggy Collins

How to Outlive the Profession of English: Research and Methods (Syllabus) | HASTAC - 0 views

  •  
    From an academic in TX, a link to an interesting syllabus for an English course.
Nils Peterson

Google for Government? Broad Representations of Large N DataSets | Computational Legal ... - 0 views

  • We are just two graduate students working on a shoestring budget.
  •  
    We agree with both President Obama and Senator Coburn that universal accessibility of such information is a worthwhile goal. However, we believe this is only a first step. In a deep sense, our prior post is designed to serve as a demonstration project. We are just two graduate students working on a shoestring budget. With the resources of the federal government, however, it would certainly be possible to create a series of simple interfaces designed to broadly represent large amounts of information. While these interfaces should rely upon the best available analytical methods, such methods could probably be built in behind the scenes. At a minimum, government agencies should follow the suggestion of David G. Robinson and his co-authors who argue the federal government "should require that federal websites themselves use the same open systems for accessing the underlying data as they make available to the public at large."
  •  
    An interesting example of work with large data sets, but also a research group that is working "off-shore" from their campus, and in a blog, in ways that seem to parallel WSUCTLT.
Theron DesRosier

CDC Evaluation Working Group: Framework - 2 views

  • Framework for Program Evaluation
  • Purposes: The framework was developed to summarize and organize the essential elements of program evaluation, provide a common frame of reference for conducting evaluations, clarify the steps in program evaluation, review standards for effective program evaluation, and address misconceptions about the purposes and methods of program evaluation.
  • Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions: What will be evaluated? (i.e. what is "the program" and in what context does it exist) What aspects of the program will be considered when judging program performance? What standards (i.e. type or level of performance) must be reached for the program to be considered successful? What evidence will be used to indicate how the program has performed? What conclusions regarding program performance are justified by comparing the available evidence to the selected standards? How will the lessons learned from the inquiry be used to improve public health effectiveness?
  • These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.
  • Steps in Evaluation Practice: engage stakeholders (those involved, those affected, primary intended users); describe the program (need, expected effects, activities, resources, stage, context, logic model); focus the evaluation design (purpose, users, uses, questions, methods, agreements); gather credible evidence (indicators, sources, quality, quantity, logistics); justify conclusions (standards, analysis/synthesis, interpretation, judgment, recommendations); and ensure use and share lessons learned (design, preparation, feedback, follow-up, dissemination). Standards for "Effective" Evaluation: utility (serve the information needs of intended users); feasibility (be realistic, prudent, diplomatic, and frugal); propriety (behave legally, ethically, and with due regard for the welfare of those involved and those affected); and accuracy (reveal and convey technically accurate information).
  • The challenge is to devise an optimal — as opposed to an ideal — strategy.
  •  
    Framework for Program Evaluation by the CDC. This is a good resource for program evaluation. Click through "Steps and Standards" for information on collecting credible evidence and engaging stakeholders.
Gary Brown

The Chimera of College Brands - Commentary - The Chronicle of Higher Education - 1 views

  • What you get from a college, by contrast, varies wildly from department to department, professor to professor, and course to course. The idea implicit in college brands—that every course reflects certain institutional values and standards—is mostly a fraud. In reality, there are both great and terrible courses at the most esteemed and at the most denigrated institutions.
  • With a grant from the nonprofit Lumina Foundation for Education, physics and history professors from a range of Utah two- and four-year institutions are applying the "tuning" methods developed as part of the sweeping Bologna Process reforms in Europe.
  • The group also created "employability maps" by surveying employers of recent physics graduates—including General Electric, Simco Electronics, and the Air Force—to find out what knowledge and skills are needed for successful science careers.
  • If a student finishes and can't do what's advertised, they'll say, 'I've been shortchanged.'
  • Kathryn MacKay, an associate professor of history at Weber State University, drew on recent work from the American Historical Association to define learning goals in historical knowledge, thinking, and skills.
  • In the immediate future, as the higher-education market continues to globalize and the allure of prestige continues to grow, the value of university brands is likely to rise. But at some point, the countervailing forces of empiricism will begin to take hold. The openness inherent to tuning and other, similar processes will make plain that college courses do not vary in quality in anything like the way that archaic, prestige- and money-driven brands imply. Once you've defined the goals, you can prove what everyone knows but few want to admit: From an educational standpoint, institutional brands are largely an illusion for which students routinely overpay.
  •  
    The argument for external stakeholders is underscored, among other implications.
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent. Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our Where are you on the spectrum, Assessment is locked vs open
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives. Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them. Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say. Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less define community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Gary Brown

Making College Degrees Easier to Interpret - Measuring Stick - The Chronicle of Higher ... - 1 views

  • Over the past few decades, the central purpose of undergraduate education in the United States has steadily evolved away from elite studies in the liberal arts and toward course work that prepares students for successful careers in their chosen fields. 
  • how do employers determine the values of the college degrees held by young job applicants? 
  • There is essentially no method to determine which of the three graduates have the knowledge and skills that match the advertised position. Grades and academic standards often vary so much by institution, department, and instructor that transcripts are written off as arbitrary and meaningless by those making hiring decisions. Outside fields with licensure exams like accounting and nursing, employers often hire workers based on connections, intuition, and the sometimes-misleading reputations of applicants’ alma maters. 
  • This system doesn’t allow labor markets to function efficiently.
  • To rectify this broken hiring system, academia and industry should form stronger partnerships to better determine which skills and knowledge students in various fields need to master
  • The traditional college transcript is simply too impenetrable for anyone outside—or inside—academia to comprehend.
  •  
    The purposes are problematic, but the solution points to one of our approaches.  Where is John Dewey when we need him?