
CTLT and Friends: Group items matching "Outcomes" in title, tags, annotations or url



Key Steps in Outcome Management - 0 views

  • First in a series from the Urban Institute on outcome management for non-profits, for an audience of non-evaluation-savvy leadership and staff. Lots to steal here if we ever create an Assessment Handbook for WSU.

University World News - US: America can learn from Bologna process - 0 views

  • Lumina proposes that the US "adapt and apply the lessons learned from the Bologna Process" in the EU, which has developed methodologies that "uniquely focus on linking student learning and the outcomes of higher education" -- tautological though that sounds.
  • Apparently the "audacious" discussion in the WCET webinar yesterday (to be linked) featuring Ellen Wagner and Peter Smith is old hat in Europe. A national "degree framework" is almost inconceivable in the US, but 'tuning' -- a "faculty-led approach that involves seeking input from students, recent graduates and employers to establish criterion-referenced learning outcomes and competencies" -- sounds a lot like our own work in goal-setting.

Jim Dudley on Letting Go of Rigid Adherence to What Evaluation Should Look Like | AEA365 - 1 views

  • "Recently, in working with a board of directors of a grassroots organization, I was reminded of how important it is to "let go" of rigid adherence to typologies and other traditional notions of what an evaluation should look like. For example, I completed an evaluation that incorporated elements of all of the stages of program development - a needs assessment (e.g., how much do board members know about their programs and budget), a process evaluation (e.g., how well do the board members communicate with each other when they meet), and an outcome evaluation (e.g., how effective is their marketing plan for recruiting children and families for its programs)."
  • Needs evaluation, process evaluation, outcomes evaluation -- all useful for improvement.

Many College Boards Are at Sea in Assessing Student Learning, Survey Finds - Leadership... - 0 views

  • While oversight of educational quality is a critical responsibility of college boards of trustees, a majority of trustees and chief academic officers say boards do not spend enough time discussing student-learning outcomes, and more than a third say boards do not understand how student learning is assessed, says a report issued on Thursday by the Association of Governing Boards of Universities and Colleges.
  • While boards should not get involved in the details of teaching or ways to improve student-learning outcomes, they must hold the administration accountable for identifying needs in the academic programs and then meeting them, the report says. Boards should also make decisions on where to allocate resources based on what works or what should improve.
  • The most commonly received information by boards was college-ranking data
  • Boards should expect to receive useful high-level information on learning outcomes, the report says, and should make comparisons over time and to other institutions. Training in how to understand academic and learning assessments should also be part of orientation for new board members.
  • This piece, coupled with the usual commentary, reveals again the profound identity crisis shaking education in this country.

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  • "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement."
  • Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AAC&U. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  • "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement." This raises some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?

Graphic Display of Student Learning Objectives - ProfHacker - The Chronicle of Higher E... - 2 views

  • Creating SLOs or goals for a course is simple to us, usually.  We want students to learn certain skills, we create assignments that will help students reach those goals, and we’ll judge how well they have learned those skills. 
  • This graphic displays the three learning objectives for the course, and it connects the course assignments to the learning objectives.  Students can see—at a glance—that none of the course assignments are random or arbitrary (an occasional student complaint), but that each assignment links directly to a course learning objective.
  • The syllabus graphic is quite simple and it’s one that students easily understand.  Additionally, I use an expanded graphic (below) when thinking about small goals within the larger learning objectives.
  • In fact, The Graphic Syllabus and the Outcomes Map: Communicating Your Course (Linda Nilson) is an interesting way to organize graphically an entire course.
  • An example of a graphic syllabus can be found in Dr. W. Mark Smillie’s displays of his philosophy courses [.pdf file].
  • Some students won’t care.  Moreover, they rarely remember the connection between course content and assignments.  The course and the assignments can all seem random and arbitrary.  Nevertheless, some students will care, and some will appreciate the connections.
  • Perhaps a useful resource.

Blog U.: It Boils Down to... - Confessions of a Community College Dean - Inside Higher Ed - 4 views

  • I had a conversation a few days ago with a professor who helped me understand some of the otherwise-puzzling opposition faculty have shown to actually using the general education outcomes they themselves voted into place.
  • Yet getting those outcomes from ‘adopted’ to ‘used’ has proved a long, hard slog.
  • The delicate balance is in respecting the ambitions of the various disciplines, while still maintaining -- correctly, in my view -- that you can’t just assume that the whole of a degree is equal to the sum of its parts. Even if each course works on its own terms, if the mix of courses is wrong, the students will finish with meaningful gaps. Catching those gaps can help you determine what’s missing, which is where assessment is supposed to come in. But there’s some local history to overcome first.
  • This is an interesting take on what we are doing, and the comments are interesting as well.

National Institute for Learning Outcomes Assessment - 1 views

  • Of the various ways to assess student learning outcomes, many faculty members prefer what are called “authentic” approaches that document student performance during or at the end of a course or program of study.  Authentic assessments typically ask students to generate rather than choose a response to demonstrate what they know and can do.  In their best form, such assessments are flexible and closely aligned with teaching and learning processes, and represent some of students' more meaningful educational experiences.  In this paper, assessment experts Trudy Banta, Merilee Griffin, Theresa Flateby, and Susan Kahn describe the development of several promising authentic assessment approaches.
  • Educators and policy makers in postsecondary education are interested in assessment processes that improve student learning, and at the same time provide comparable data for the purpose of demonstrating accountability.
  • First, ePortfolios provide an in-depth, long-term view of student achievement on a range of skills and abilities instead of a quick snapshot based on a single sample of learning outcomes. Second, a system of rubrics used to evaluate student writing and depth of learning has been combined with faculty learning and team assessments, and is now being used at multiple institutions. Third, online assessment communities link local faculty members in collaborative work to develop shared norms and teaching capacity, and then link local communities with each other in a growing system of assessment.
    • Nils Peterson: hey, does this sound familiar? i'm guessing the portfolios are not anywhere on the Internet, but we're otherwise in good company
  • Three Promising Alternatives for Assessing College Students' Knowledge and Skills
    • Nils Peterson: I'm not sure they are 'alternatives' so much as 3 elements we would combine into a single strategy

Law Schools Resist Proposal to Assess Them Based on What Students Learn - Curriculum - ... - 1 views

  • Law schools would be required to identify key skills and competencies and develop ways to test how well their graduates are learning them under controversial revisions to accreditation standards being proposed by the American Bar Association.
  • Several law deans said they have enough to worry about with budget cuts, a tough job market for their graduates, and the soaring cost of legal education without adding a potentially expensive assessment overhaul.
  • "It is worth pausing to ask how the proponents of outcome measures can be so very confident that the actual performance of tasks deemed essential for the practice of law can be identified, measured, and evaluated," said Robert C. Post, dean of Yale Law School.
  • The proposed standards, which are still being developed, call on law schools to define learning outcomes that are consistent with their missions and to offer curricula that will achieve those outcomes. Different versions being considered offer varying degrees of specificity about what those skills should include.
  • Phillip A. Bradley, senior vice president and general counsel for Duane Reade, a large drugstore chain, likened law schools to car companies that are "manufacturing something that nobody wants." Mr. Bradley said many law firms are developing core competencies they expect of their lawyers, but many law schools aren't delivering graduates who come close to meeting them.
  • The homeopathic fallacy again, and as goes law school, so goes law....

Learning to Hate Learning Objectives - The Chronicle Review - The Chronicle of Higher E... - 1 views

shared by Gary Brown on 16 Dec 09
  • Perhaps learning objectives make sense for most courses outside the humanities, but for me—as, no doubt, for many others—they bear absolutely no connection to anything that happens in the classroom.
    • Gary Brown: The homeopathic fallacy, debunked by volumes of research...
  • The problem is, this kind of teaching does not correlate with the assumption of my local accreditation body, which sees teaching—as perhaps it is, in many disciplines—as passing on a body of knowledge and skills to a particular audience.
    • Gary Brown: A profoundly dangerous misperception of accreditation and its role.
  • We talked about the ways in which the study of literature can help to develop and nurture observation, analysis, empathy, and self-reflection, all of which are essential for the practice of psychotherapy,
    • Gary Brown: Reasonable outcomes, with a bit of educational imagination and an understanding of assessment obviously underdeveloped.
  • They will not achieve any "goals or outcomes." Indeed, they will not have "achieved" anything, except, perhaps, to doubt the value of terms like "achievement" when applied to reading literature.
    • Gary Brown: A good outcome.
  • To describe this as a learning objective is demeaning and reductive to all concerned.
    • Gary Brown: Only in the sense Ralph Tyler criticized, and he is the one who coined the term and developed the concept.
  • except to observe certain habits of mind, nuances of thinking, an appreciation for subtleties and ambiguities of argument, and an appreciation of the capacity for empathy, as well as the need, on certain occasions, to resist this capacity. There is no reason for anyone to take the course except a need to understand more about the consciousness of others, including nonhuman animals.

Views: Accreditation 2.0 - Inside Higher Ed - 0 views

  • The first major conversation is led by the academic and accreditation communities themselves. It focuses on how accreditation is addressing accountability, with particular emphasis on the relationship (some would say tension, or even conflict) between accountability and institutional improvement.
  • The second conversation is led by critics of accreditation who question its effectiveness in addressing accountability
  • The third conversation is led by federal officials who also focus on the gatekeeping role of accreditation.
  • The emerging Accreditation 2.0 is likely to be characterized by six key elements. Some are familiar features of accreditation; some are modifications of existing practice, some are new: Community-driven, shared general education outcomes. Common practices to address transparency. Robust peer review. Enhanced efficiency of quality improvement efforts. Diversification of the ownership of accreditation. Alternative financing models for accreditation.
  • All are based on a belief that accreditation needs to change, though in what way and at what pace is seen differently
  • The Essential Learning Outcomes of the Association of American Colleges and Universities, the Collegiate Learning Assessment and the Voluntary System of Accountability of the Association of Public and Land-grant Universities all provide for agreement across institutions about expected Outcomes. This work is vital as we continue to address the crucial question of “What is a college education?”
  • peer review can be further enhanced through, for example, encouraging greater diversity of teams, including more faculty and expanding public participation
  • Accreditation 2.0 can include means to assure more immediate institutional action to address the weaknesses and prevent their being sustained over long periods of time.
  • Judith Eaton is president of the Council for Higher Education Accreditation, which is a national advocate for self-regulation of academic quality through accreditation. CHEA has 3,000 degree-granting colleges and universities as members and recognizes 59 institutional and programmatic accrediting organizations.
  • The way the winds are blowing.

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU), and the Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson: CLA did not provide information for continuous program improvement -- we've heard this argument before
  • The lack of correlation might be rephrased: there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: of what use is the VSA?

YouTube - Assessment Quickies #1: What Are Student Learning Outcomes? - 3 views

shared by Gary Brown on 22 Apr 10
  • Assessment Quickies #1: What Are Student Learning Outcomes?
  • A useful resource for our partners here at WSU, from a new Cal State partner.

Faith in Prior Learning Was Well Placed - Letters to the Editor - The Chronicle of High... - 1 views

  • The recognition that a college that offered credit for experiential learning could stand with traditional institutions, while commonplace today, was a leap of faith then. Empire State had to demonstrate its validity through results—educational outcomes—and on that score, it stood tall. In fact, focusing on outcomes, as we did, led many of us to question how well traditional institutions would measure up!
  • An important question.

Brainstorm - The Occupation Will Be Televised - The Chronicle of Higher Education - 0 views

  • The poster in the accompanying picture says: "Education is not for sale". "In response to the massive re-orientation of education toward job training, privatization and the standardization of curricular outcomes mandated by the Bologna Process, students across Europe have been turning out by the thousands. This past June, as many as 250,000 students, parents, schoolteachers, college faculty and staff coordinated a week-long education strike in 90 cities across Germany."
  • Apropos of Ashley's comments about European views of accreditation and accountability: apparently standardization of curricular outcomes is facing some opposition.

Views: The White Noise of Accountability - Inside Higher Ed - 2 views

  • We don’t really know what we are saying
  • “In education, accountability usually means holding colleges accountable for the learning outcomes produced.” One hopes Burck Smith, whose paper containing this sentence was delivered at an American Enterprise Institute conference last November, held a firm tongue-in-cheek with the core phrase.
  • Our adventure through these questions is designed as a prodding to all who use the term to tell us what they are talking about before they otherwise simply echo the white noise.
  • when our students attend three or four schools, the subject of these sentences is considerably weakened in terms of what happens to those students.
  • Who or what is one accountable to?
  • For what?
  • Why that particular “what” -- and not another “what”?
  • To what extent is the relationship reciprocal? Are there rewards and/or sanctions inherent in the relationship? How continuous is the relationship?
  • In the Socratic moral universe, one is simultaneously witness and judge. The Greek syneidesis (“conscience” and “consciousness”) means to know something with, so to know oneself with oneself becomes an obligation of institutions and systems -- to themselves.
  • Obligation becomes self-reflexive.
  • There are no external authorities here. We offer, we accept, we provide evidence, we judge. There is nothing wrong with this: it is indispensable, reflective self-knowledge. And provided we judge without excuses, we hold to this Socratic moral framework. As Peter Ewell has noted, the information produced under this rubric, particularly in the matter of student learning, is “part of our accountability to ourselves.”
  • But is this “accountability” as the rhetoric of higher education uses the white noise -- or something else?
  • in response to shrill calls for “accountability,” U.S. higher education has placed all its eggs in the Socratic basket, but in a way that leaves the basket half-empty. It functions as the witness, providing enormous amounts of information, but does not judge that information.
  • Every single “best practice” cited by Aldeman and Carey is subject to measurement: labor market histories of graduates, ratios of resource commitment to various student outcomes, proportion of students in learning communities or taking capstone courses, publicly-posted NSSE results, undergraduate research participation, space utilization rates, licensing income, faculty patents, volume of non-institutional visitors to art exhibits, etc. etc. There’s nothing wrong with any of these, but they all wind up as measurements, each at a different concentric circle of putatively engaged acceptees of a unilateral contract to provide evidence. By the time one plows through Aldeman and Carey’s banquet, one is measuring everything that moves -- and even some things that don’t.
  • Sorry, but basic capacity facts mean that consumers cannot vote with their feet in higher education.
  • If we glossed the Socratic notion on provision-of-information, the purpose is self-improvement, not comparison. The market approach to accountability implicitly seeks to beat Socrates by holding that I cannot serve as both witness and judge of my own actions unless the behavior of others is also on the table. The self shrinks: others define the reference points. “Accountability” is about comparison and competition, and an institution’s obligations are only to collect and make public those metrics that allow comparison and competition. As for who judges the competition, we have a range of amorphous publics and imagined authorities.
  • There are no formal agreements here: this is not a contract, it is not a warranty, it is not a regulatory relationship. It isn’t even an issue of becoming a Socratic self-witness and judge. It is, instead, a case in which one set of parties, concentrated in places of power, asks another set of parties, diffuse and diverse, “to disclose more and more about academic results,” with the second set of parties responding in their own terms and formulations. The environment itself determines behavior.
  • Ewell is right about the rules of the information game in this environment: when the provider is the institution, it will shape information “to look as good as possible, regardless of the underlying performance.”
  • U.S. News & World Report’s rankings
  • The messengers become self-appointed arbiters of performance, establishing themselves as the second party to which institutions and aggregates of institutions become “accountable.” Can we honestly say that the implicit obligation of feeding these arbiters constitutes “accountability”?
  • But if the issue is student learning, there is nothing wrong with -- and a good deal to be said for -- posting public examples of comprehensive examinations, summative projects, capstone course papers, etc. within the information environment, and doing so irrespective of anyone requesting such evidence of the distribution of knowledge and skills. Yes, institutions will pick what makes them look good, but if the public products resemble AAC&U’s “Our Students’ Best Work” project, they set off peer pressure for self-improvement and very concrete disclosure. The other prominent media messengers simply don’t engage in constructive communication of this type.
  • Ironically, a “market” in the loudest voices, the flashiest media productions, and the weightiest panels of glitterati has emerged to declare judgment on institutional performance in an age when student behavior has diluted the very notion of an “institution” of higher education. The best we can say is that this environment casts nothing but fog over the specific relationships, responsibilities, and obligations that should be inherent in something we call “accountability.” Perhaps it is about time that we defined these components and their interactions with persuasive clarity. I hope that this essay will invite readers to do so.
  • Clifford Adelman is senior associate at the Institute for Higher Education Policy. The analysis and opinions expressed in this essay are those of the author, and do not necessarily represent the positions or opinions of the institute, nor should any such representation be inferred.
  • Perhaps the most important piece I've read recently. Yes must be our answer to Adelman's last challenge: it is time for us to disseminate what we do and why we do it.

Under Obama, Accreditation Is Still in the Hot Seat - Government - The Chronicle of Hig... - 1 views

  • George Miller, a California Democrat who is chairman of the House education committee, said defining a credit hour is critical to ensure that students and taxpayers, through federal student aid, are not footing the bill for courses that are not worth the amount of credit being awarded.
    • Gary Brown: "Worth" opens up some interesting implications. Intended, I suspect, to dampen courses like basket-weaving; but the production of outcomes cannot be far off, and the production of economic impact related to those outcomes a step or less behind.
  • Senators also questioned the independence of accreditors, which are supported by dues from member institutions and governed by representatives of the colleges they accredit.
  • Sen. Michael B. Enzi, the top Republican on the Senate Education Committee, has said he wants Congress to look beyond just problems in the for-profit sector. He said at a hearing last month that he would be "working to lay the groundwork for a broader, thorough, and more fair investigation into higher education" that would ask whether taxpayers are getting an appropriate value for the money they spend on all colleges.
  • State and federal governments are better equipped to enforce consumer protections for students, say accreditors, who have traditionally focused on preserving academic quality.
  • Judith S. Eaton, president of the Council for Higher Education Accreditation, which represents about 3,000 colleges, said that over the past several years accrediting organizations have responded to the growing calls for accountability and transparency from the public and lawmakers. The groups, she said, have worked to better identify and judge student achievement and share more information about what they do and how well the institutions are performing.
  • Peter T. Ewell, vice president of the National Center for Higher Education Management Systems, said the debate boils down to whether accreditors should serve primarily as consumer protectors or continue their traditional role of monitoring academic quality more broadly.
  • Richard K. Vedder, director of the Center for College Affordability & Productivity and a member of the Spellings Commission
  • "We should be moving to more of a Consumer Reports for colleges, to provide the public with information that the college rankings do imperfectly," he said.
  • accreditation will have to evolve to meet not only government's expectations but also the changing college market
  • Nearly two years into the Obama Administration, colleges have not gotten the relief they expected from the contentious battles over measuring quality that defined the Bush Education Department.
  • Bracing for the prospect of new rules and laws that could expand their responsibilities, accreditors and the institutions they monitor are defending the self-regulation colleges use to ensure academic quality. But they are also responding to the pressures from the White House and Capitol Hill by making some changes on their own, hoping to stanch the possibility of more far-reaching federal requirements.
  • Advocates of change say the six regional and seven national accreditors have varying standards that are sometimes too lax, allowing for limited oversight of how credits are awarded, how much learning is accomplished, and what happens to the mission of institutions that change owners.
8More

Learning Assessment: The Regional Accreditors' Role - Measuring Stick - The Chronicle o... - 0 views

  • The National Institute for Learning Outcomes Assessment has just released a white paper about the regional accreditors’ role in prodding colleges to assess their students’ learning
  • All four presidents suggested that their campuses’ learning-assessment projects are fueled by Fear of Accreditors. One said that a regional accreditor “came down on us hard over assessment.” Another said, “Accreditation visit coming up. This drives what we need to do for assessment.”
  • regional accreditors are more likely now than they were a decade ago to insist that colleges hand them evidence about student-learning outcomes.
  • ...4 more annotations...
  • Western Association of Schools and Colleges, Ms. Provezis reports, “almost every action letter to institutions over the last five years has required additional attention to assessment, with reasons ranging from insufficient faculty involvement to too little evidence of a plan to sustain assessment.”
  • The white paper gently criticizes the accreditors for failing to make sure that faculty members are involved in learning assessment.
  • “it would be good to know more about what would make assessment worthwhile to the faculty—for a better understanding of the source of their resistance.”
  • Many of the most visible and ambitious learning-assessment projects out there seem to strangely ignore the scholarly disciplines’ own internal efforts to improve teaching and learning.
  •  
    fyi
3More

ACM Ubiquity - An Interview with Michael Schrage - 0 views

  • I learn about the organization's innovation culture as follows: I say that, when someone comes up with an idea you think is a good one and people say, "We can't do that because..." then whatever follows the words "we can't do that because... " is your innovation culture.
  • UBIQUITY: What turns people into such dolts? SCHRAGE: Internal imperatives.
  •  
    An MIT Media Lab expert on innovation says it's about outcomes, not ideas.
11More

OECD Project Seeks International Measures for Assessing Educational Quality - Internati... - 0 views

  • The first phase of an ambitious international study that intends to assess and compare learning outcomes in higher-education systems around the world was announced here on Wednesday at the conference of the Council for Higher Education Accreditation.
  • Richard Yelland, of the OECD's Education Directorate, is leading the project, which he said expects to eventually offer faculty members, students, and governments "a more balanced assessment of higher-education quality" across the organization's 31 member countries.
  • learning outcomes are becoming a central focus worldwide
  • ...7 more annotations...
  • the feasibility study is adapting the Collegiate Learning Assessment, an instrument developed by the Council for Aid to Education in the United States, to an international context.
  • At least six nations are participating in the feasibility study.
  • 14 countries are expected to participate in the full project, with an average of 10 institutions per country and about 200 students per institution.
  • The project's target population will be students nearing the end of three-year or four-year degrees, and will eventually measure student knowledge in economics and engineering.
  • While the goal of the project is not to produce another global ranking of universities, the growing preoccupation with such lists has crystallized what Mr. Yelland described as the urgency of pinning down what exactly it is that most of the world's universities are teaching and how well they are doing
  • Judith S. Eaton, president of the Council for Higher Education Accreditation, said she was also skeptical about whether the project would eventually yield common international assessment mechanisms.
  • Ms. Eaton noted, the same sets of issues recur across borders and systems, about how best to enhance student learning and strengthen economic development and international competitiveness.
  •  
    Another day, another press, again thinking comparisons