Group items tagged Evidence

Gary Brown

Reviewers Unhappy with Portfolio 'Stuff' Demand Evidence -- Campus Technology - 1 view

  • An e-mail comment from one reviewer: “In reviewing about 100-some-odd accreditation reports in the last few months, it has been useful in our work here at Washington State University to distinguish ‘stuff’ from evidence. We have adopted an understanding that evidence is material or data that has been analyzed and that can be used, as dictionary definitions state, as ‘proof.’ A student gathers ‘stuff’ in the ePortfolio, selects, reflects, etc., and presents evidence that makes a case (or not)… The use of this distinction has been indispensable here. An embarrassing amount of academic assessment work culminates in the presentation of ‘stuff’ that has not been analyzed--student evaluations, grades, pass rates, retention, etc. After reading these ‘self studies,’ we ask the stumping question--fine, but what have you learned? Much of the ‘evidence’ we review has been presented without thought or with the general assumption that it is somehow self-evident… But too often that kind of evidence has not focused on an issue or problem or question. It is evidence that provides proof of nothing.”
  •  
    a bit of a context shift, but....
Theron DesRosier

CDC Evaluation Working Group: Framework - 2 views

  • Framework for Program Evaluation
  • Purposes: The framework was developed to summarize and organize the essential elements of program evaluation; provide a common frame of reference for conducting evaluations; clarify the steps in program evaluation; review standards for effective program evaluation; and address misconceptions about the purposes and methods of program evaluation.
  • Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions: What will be evaluated? (i.e. what is "the program" and in what context does it exist) What aspects of the program will be considered when judging program performance? What standards (i.e. type or level of performance) must be reached for the program to be considered successful? What evidence will be used to indicate how the program has performed? What conclusions regarding program performance are justified by comparing the available evidence to the selected standards? How will the lessons learned from the inquiry be used to improve public health effectiveness?
  • These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework provides a systematic approach for answering these questions.
  • Steps in Evaluation Practice: engage stakeholders (those involved, those affected, primary intended users); describe the program (need, expected effects, activities, resources, stage, context, logic model); focus the evaluation design (purpose, users, uses, questions, methods, agreements); gather credible evidence (indicators, sources, quality, quantity, logistics); justify conclusions (standards, analysis/synthesis, interpretation, judgment, recommendations); and ensure use and share lessons learned (design, preparation, feedback, follow-up, dissemination). Standards for "Effective" Evaluation: utility (serve the information needs of intended users); feasibility (be realistic, prudent, diplomatic, and frugal); propriety (behave legally, ethically, and with due regard for the welfare of those involved and those affected); and accuracy (reveal and convey technically accurate information).
  • The challenge is to devise an optimal — as opposed to an ideal — strategy.
  •  
    Framework for Program Evaluation by the CDC. This is a good resource for program evaluation. Click through "Steps and Standards" for information on collecting credible evidence and engaging stakeholders.
Gary Brown

A Final Word on the Presidents' Student-Learning Alliance - Measuring Stick - The Chron... - 1 view

  • I was very pleased to see the responses to the announcement of the Presidents’ Alliance as generally welcoming (“commendable,” “laudatory initiative,” “applaud”) the shared commitment of these 71 founding institutions to do more—and do it publicly and cooperatively—with regard to gathering, reporting, and using evidence of student learning.
  • establishing institutional indicators of educational progress that could be valuable in increasing transparency may not suggest what needs changing to improve results
  • As Adelman’s implied critique of the CLA indicates, we may end up with an indicator without connections to practice.
  • The Presidents’ Alliance’s focus on and encouragement of institutional efforts is important to making these connections and steps in a direct way supporting improvement.
  • Second, it is hard to disagree with the notion that ultimately evidence-based improvement will occur only if faculty members are appropriately trained and encouraged to improve their classroom work with undergraduates.
  • Certainly there has to be some connection between and among various levels of assessment—classroom, program, department, and institution—in order to have evidence that serves both to aid improvement and to provide transparency and accountability.
  • Presidents’ Alliance is setting forth a common framework of “critical dimensions” that institutions can use to evaluate and extend their own efforts, efforts that would include better reporting for transparency and accountability and greater involvement of faculty.
  • there is wide variation in where institutions are in their efforts, and we have a long way to go. But what is critical here is the public commitment of these institutions to work on their campuses and together to improve the gathering and reporting of evidence of student learning and, in turn, using evidence to improve outcomes.
  • The involvement of institutions of all types will make it possible to build a more coherent and cohesive professional community in which evidence-based improvement of student learning is tangible, visible, and ongoing.
Joshua Yeidel

Mind - Research Upends Traditional Thinking on Study Habits - NYTimes.com - 2 views

  • “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.
  • “We have yet to identify the common threads between teachers who create a constructive learning atmosphere,” said Daniel T. Willingham, a psychologist at the University of Virginia and author of the book “Why Don’t Students Like School?”
  • psychologists have discovered that some of the most hallowed advice on study habits is flat wrong
  •  
    "Evidence" that the "evidence" is not very effective to promote change.  Apparently the context is crucial to adoption.
Gary Brown

News: Assessing the Assessments - Inside Higher Ed - 2 views

  • The validity of a measure is based on evidence regarding the inferences and assumptions that are intended to be made and the uses to which the measure will be put. Showing that the three tests in question are comparable does not support Shulenburger's assertion regarding the value-added measure as a valid indicator of institutional effectiveness. The claim that public university groups have previously judged the value-added measure as appropriate does not tell us anything about the evidence upon which this judgment was based nor the conditions under which the judgment was reached. As someone familiar with the process, I would assert that there was no compelling evidence presented that these instruments and the value-added measure were validated for making this assertion (no such evidence was available at the time), which is the intended use in the VSA.
  • (however much the sellers of these tests tell you that those samples are "representative"), they provide an easy way out for academic administrators who want to avoid the time-and-effort consuming but incredibly valuable task of developing detailed major program learning outcome statements (even the specialized accrediting bodies don't get down to the level of discrete, operational statements that guide faculty toward appropriate assessment design)
  • If somebody really cared about "value added," they could look at each student's first essay in this course, and compare it with that same student's last essay in this course. This person could then evaluate each individual student's increased mastery of the subject-matter in the course (there's a lot) and also the increased writing skill, if any.
  • These skills cannot be separated out from student success in learning sophisticated subject-matter, because understanding anthropology, or history of science, or organic chemistry, or Japanese painting, is not a matter of absorbing individual facts, but learning facts and ways of thinking about them in a seamless, synthetic way. No assessment scheme that neglects these obvious facts about higher education is going to do anybody any good, and we'll be wasting valuable intellectual and financial resources if we try to design one.
  •  
    ongoing discussion of these tools. Note Longanecker's comment and ask me why.
Gary Brown

Views: The White Noise of Accountability - Inside Higher Ed - 2 views

  • We don’t really know what we are saying
  • “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, held a firm tongue-in-cheek with the core phrase.
  • Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
  • when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students.
  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal? Are there rewards and/or sanctions inherent in the relationship? How continuous is the relationship?
  • In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves.
  • Obligation becomes self-reflexive.
  • There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
  • But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
  • in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information.
  • Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
  • Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education.
  • If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
  • There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
  • Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.”
  • U.S. News & World Report’s rankings
  • The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
  • But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
  • Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.” Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.
  • Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
  •  
    Perhaps the most important piece I've read recently. Yes must be our answer to Adelman's last challenge: It is time for us to disseminate what and why we do what we do.
Ashley Ater Kranov

Teaching Experiment Decodes a Discipline - Teaching - The Chronicle of Higher Education - 0 views

  • Several years ago, a small group of faculty members at Indiana University at Bloomington decided to do something about the problem. The key, they concluded, was to construct every history course around two core skills of their discipline: assembling evidence and interpreting it.
  • The historians at Indiana have tried to help students through several specific bottlenecks by dividing large concepts into smaller, evidence-related steps. (See the box below.)
  • "Students come into our classrooms believing that history is about stories full of names and dates," says Arlene J. Díaz, an associate professor of history at Indiana who is one of four directors of the department's History Learning Project, as the redesign effort is known. But in courses, "they discover that history is actually about interpretation, evidence, and argument."
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 view

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
  • Third, we need to accept how good we already are, so we can recognize success when we see it.
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
  •  
    Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education. ... Today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Gary Brown

71 Presidents Pledge to Improve Their Colleges' Teaching and Learning - Faculty - The C... - 0 views

  • In a venture known as the Presidents' Alliance for Excellence in Student Learning and Accountability, they have promised to take specific steps to gather more evidence about student learning, to use that evidence to improve instruction, and to give the public more information about the quality of learning on their campuses.
  • The 71 pledges, officially announced on Friday, are essentially a dare to accreditors, parents, and the news media: Come visit in two years, and if we haven't done these things, you can zing us.
  • deepen an ethic of professional stewardship and self-regulation among college leaders
  • Beginning in 2011, all first-year students at Westminster will be required to create electronic portfolios that reflect their progress in terms of five campuswide learning goals. And the college will expand the number of seniors who take the Collegiate Learning Assessment, so that the test can be used to help measure the strength of each academic major.
  • "The crucial thing is that all of our learning assessments have been designed and driven by the faculty," says Pamela G. Menke, Miami Dade's associate provost for academic affairs. "The way transformation of learning truly occurs is when faculty members ask the questions, and when they're willing to use what they've found out to make change.
  • Other assessment models might point some things out, but they won't be useful if faculty members don't believe in them."
  • "In the long term, as more people join, I hope that the Web site will provide a resource for the kinds of innovations that seem to be successful," he says. "That process might be difficult. Teaching is an art, not a science. But there is still probably a lot that we can learn from each other."
Gary Brown

Learning Assessment: The Regional Accreditors' Role - Measuring Stick - The Chronicle o... - 0 views

  • The National Institute for Learning Outcomes Assessment has just released a white paper about the regional accreditors’ role in prodding colleges to assess their students’ learning
  • All four presidents suggested that their campuses’ learning-assessment projects are fueled by Fear of Accreditors. One said that a regional accreditor “came down on us hard over assessment.” Another said, “Accreditation visit coming up. This drives what we need to do for assessment.”
  • Western Association of Schools and Colleges, Ms. Provezis reports, “almost every action letter to institutions over the last five years has required additional attention to assessment, with reasons ranging from insufficient faculty involvement to too little evidence of a plan to sustain assessment.”
  • regional accreditors are more likely now than they were a decade ago to insist that colleges hand them evidence about student-learning outcomes.
  • The white paper gently criticizes the accreditors for failing to make sure that faculty members are involved in learning assessment.
  • “it would be good to know more about what would make assessment worthwhile to the faculty—for a better understanding of the source of their resistance.”
  • Many of the most visible and ambitious learning-assessment projects out there seem to strangely ignore the scholarly disciplines’ own internal efforts to improve teaching and learning.
  •  
    fyi
Corinna Lo

How People Learn: Brain, Mind, Experience, and School: Expanded Edition - 0 views

  •  
    This book offers exciting new research about the mind and the brain that provides answers to a number of compelling questions. When do infants begin to learn? How do experts learn and how is this different from non-experts? What can teachers and schools do--with curricula, classroom settings, and teaching methods--to help children learn most effectively? New evidence from many branches of science has significantly added to our understanding of what it means to know, from the neural processes that occur during learning to the influence of culture on what people see and absorb. You can read the entire book online for free.
Joshua Yeidel

Official Google Blog: A new approach to China - 1 view

  •  
    The Official Google Blog reports that Google detected a "highly sophisticated and targeted attack" originating in China against them and against at least 20 other large business. Google says they have "evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists." Google says "We have decided we are no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all. We recognize that this may well mean having to shut down Google.cn, and potentially our offices in China."
Joshua Yeidel

Taking the sting out of the honeybee controversy - environmentalresearchweb - 1 view

  •  
    Researchers use "harvesting feedback" and an uncertainty scale to illuminate how stakeholders use evidence to explain honeybee declines in France.
Nils Peterson

Views: Changing the Equation - Inside Higher Ed - 1 view

  • But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
  • those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
  • Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
  • No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
  • When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
  • some of the best thinkers in higher education have urged us to define the quality in terms of student outcomes.
  • Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
  • Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
  • We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
  • despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
  • After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
  • students must acquire and apply specific competencies
  • Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
  • At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
  • As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
  • a problem-based general education curriculum
  • After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
  • the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
    • Nils Peterson
       
      Gary diigoed this article. This comment shines another light -- the focus of the course eval shifted from the faculty member to the course and student learning when the focus shifted from teaching to learning.
  •  
    A must-read spotted by Jane Sherman--I've highlighted, as usual, much of it.
Nils Peterson

Accreditation and assessment in an Open Course - an opening proposal | Open Course in E... - 1 view

  • A good example of this may be a learning portfolio created by a student and reviewed by an instructor. The instructor might be looking for higher orders of learning... evidence of creative thinking, of the development of complex concepts or looking for things like improvement.
    • Nils Peterson
       
      He starts with a portfolio reviewed by the instructor, but it gets better
  • There is a simple sense in which assessing people for this course involves tracking their willingness to participate in the discussion. I have claimed in many contexts that in fields in which the canon is difficult to identify, where what is 'true' is not possible to identify, knowledge becomes a negotiation. This will certainly be true in this course, so I think the most important part of the assessment will be whether the learner in question has collaborated, has participated, has ENGAGED with the material and with other participants of the course.
  • What we need, then, is a peer review model for assessment. We need people to take it as their responsibility to review the work of others, to confirm their engagement, and form community/networks of assessment that monitor and help each other.
  • (say... 3-5 other participants are willing to sign off on your participation)
    • Nils Peterson
       
      Peer credentialling.
  • Evidence of contribution on course projects
    • Nils Peterson
       
      I would prefer he say "projects" where the learner has latitude to define the project, rather than a 'course project' where the agency seems to be outside the learner. See our diagram of last April; the learner should be working their problem in their community.
  • I think for those that are looking for PD credit we should be able to use the proposed assessment model (once you guys make it better) for accreditation. You would end up with an email that said "i was assessed based on this model and was not found wanting" signed by facilitators (or other participants, as surely given the quality of the participants i've seen, they would qualify as people who could guarantee such a thing).
    • Nils Peterson
       
      Peer accreditation. It depends on the credibility of those signing off see also http://www.nilspeterson.com/2010/03/21/reimagining-both-learning-learning-institutions/
  • I think the Otago model would work well here. I call it the Otago model as Leigh Blackall's course at Otago was the first time i actually heard of someone doing it. In this model you do all the work in a given course, and then are assessed for credit AFTER the course by, essentially, challenging for PLAR. It's a nice distributed model, as it allows different people to get different credit for the same course.
    • Nils Peterson
       
      Challenging for a particular credit in an established institutional system, or making the claim that you have a useful solution to a problem and the solution merits "credit" in a particular system's procedures.
Gary Brown

News: More Meaningful Accreditation - Inside Higher Ed - 0 views

  • Its most distinctive feature is that it would clearly separate "compliance" from "improvement." Colleges would be required to build "portfolios" of data and materials, documenting (through more frequent peer reviews) their compliance with the association's many standards, with much of the information being made public. On a parallel track, or "pathway," colleges would have the flexibility to propose their own projects or themes as the focus of the self-improvement piece of their accreditation review, and would be judged (once the projects were approved by a peer team) by how well they carried out the plan. (Colleges the commission deems to be troubled would have a "pathway" chosen for them, to address their shortcomings.)
  • reduce the paperwork burden on institutions (by making the portfolio electronic and limiting the written report for the portfolio to 50 pages), and make the process more valuable for colleges by letting them largely define for themselves where they want to improve and what they want to accomplish.
  • "We want to make accreditation so valuable to institutions that they would do it without Title IV," she said in an interview after the presentation. "The only way we can protect the improvement piece, and make it valuable to institutions to aim high, is if we separate it from the compliance piece."
  • Mainly what happens in the current structure, she said, is that the compliance role is so onerous and so dominates the process that, in too many cases, colleges fail to get anything meaningful out of the improvement portion. That, she said, is why separating the two is so essential.
  • As initially conceptualized, the commission's revised process would have institutions build electronic portfolios made up of (1) an annual institutional data update the accreditor already uses, (2) a collection of "evidence of quality and capacity" drawn from existing sources (other accrediting reports), federal surveys and audits, and (3) a "50-page, evidence-based report that demonstrates fulfillment of the criteria for accreditation," based largely on the information in (1) and (2), commission documents say. A panel of peer reviewers would "rigorously" review the data (without a site visit) at various intervals -- how much more frequently than the current 10-year accreditation review would probably depend on the perceived health of the college -- and make a recommendation on whether to approve the institution for re-accreditation.
  • "The portfolio portion really should be what's tied to continued accreditation," said one member of the audience. "As soon as you tie the pathway portion into that, you make it a very different exercise, as we're going to want to make a good case, to make ourselves look good."
Joshua Yeidel

Coopman - 0 views

shared by Joshua Yeidel on 01 Jul 09
  •  
    CRITIQUE OF E-LEARNING IN BLACKBOARD "Just as utopic visions of the Internet predicted an egalitarian online world where information flowed freely and power became irrelevant, so did many proponents of online education, who viewed online classrooms as a way to free students and instructors from traditional power relationships . . ." In "A Critical Examination of Blackboard's E-Learning Environment" (FIRST MONDAY, vol. 14, no. 6, June 1, 2009), Stephanie J. Coopman, professor at San Jose State University, identifies the ways that the Blackboard 8.0 and Blackboard CE6 platforms "both constrain and facilitate instructor-student and student-student interaction." She argues that while the systems have improved the instructor's ability to track and measure student activity, this "creates a dangerously decontextualized, essentialized image of a class in which levels of 'participation' stand in for evidence of learning having taken place. Students are treated not as learners, as partners in an educational enterprise, but as users."
Theron DesRosier

Pontydysgu - Bridge to Learning » Blog Archive » Learning in practice - a soc... - 0 views

  •  
    Complex inter-relationship between space, time, locality, practice, and boundary crossings between different practices. For example, a trainee doctor in the hospital works in one practice; the translation of this experience into 'evidence for assessment purposes' then needs to be 'validated' by auditors in another community of practice.
Joshua Yeidel

Putting Learning Under a Microscope - Curriculum - The Chronicle of Higher Education - 1 view

  • First, the faculty has designed a unified curriculum.
  • Second, the faculty members are organized under a single academic unit, no matter their disciplinary backgrounds.
  • Third, the program plans to collect an enormous amount of data about student performance and to analyze those data in new ways.
  • an elaborate database that will help them track the success (or lack thereof) of various instructional techniques.
  •  
    A startup campus in the health sciences attempts to put into practice many OAI-like approaches. Amid the enthusiasm there is some anecdotal evidence about practical difficulties like coordinating curriculum and finding time for research.