
CTLT and Friends: Group items tagged program assessment


Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 views

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • ...7 more annotations...
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
  • Third, we need to accept how good we already are, so we can recognize success when we see it.
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
  •  
    Yes, this article resonated with me too, especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years. The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high. (A sketch of one such agreement check appears after this entry.)
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased--there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: of what use is the VSA?
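The "norming" mentioned in the UC entry above is a measurable step, not just a meeting: before live scoring, raters score the same sample artifacts and their agreement is quantified. Below is a minimal sketch of one common agreement statistic, Cohen's kappa; the two raters' scores are invented for illustration, and a real norming session would use actual student work.

```python
# Hedged sketch: Cohen's kappa for two raters who scored the same ten
# artifacts on a 4-level rubric during a norming session. All scores
# here are invented for illustration.
from collections import Counter

rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement, estimated from each rater's marginal distribution.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
chance = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2

kappa = (observed - chance) / (1 - chance)
print(f"observed agreement {observed:.2f}, kappa {kappa:.2f}")
```

A kappa near 1 means agreement well beyond chance; values much below about 0.6 would usually send the group back to discuss the rubric before scoring continues.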
Nils Peterson

Change Magazine - The New Guys in Assessment Town - 0 views

  • if one of the institution’s general education goals is critical thinking, the system makes it possible to call up all the courses and programs that assess student performance on that outcome.
  • bringing together student learning outcomes data at the level of the institution, program, course, and throughout student support services so that “the data flows between and among these levels”
  • Like its competitors, eLumen maps outcomes vertically across courses and programs, but its distinctiveness lies in its capacity to capture what goes on in the classroom. Student names are entered into the system, and faculty use a rubric-like template to record assessment results for every student on every goal. The result is a running record for each student available only to the course instructor (and in some cases to the students themselves, who can go to the system to get feedback on recent assessments).
    • Nils Peterson
       
      Sounds like a harvesting gradebook: assess student work and roll it up (see the data-model sketch after this entry).
    • Joshua Yeidel
       
      This system has some potential for formative use at the per-student level.
  • ...7 more annotations...
  • “I’m a little wary.  It seems as if, in addition to the assessment feedback we are already giving to students, we might soon be asked to add a data-entry step of filling in boxes in a centralized database for all the student learning outcomes. This is worrisome to those of us already struggling under the weight of all that commenting and essay grading.”
    • Nils Peterson
       
      It's either double work, or a failure to understand that the grading and the assessment can be the same activity. I suspect the former -- grading is being done with different metrics.
    • Joshua Yeidel
       
      I am in the unusual position of seeing many papers _after_ they have been graded by a wide variety of teachers. Many of these contain little "assessment feedback" -- many teachers focus on "correcting" the papers and finding some letter or number to assign as a value.
  • “This is where we see many institutions struggling,” Galvin says. “Faculty simply don’t have the time for a deeper involvement in the mechanics of assessment.” Many have never seen a rubric or worked with one, “so generating accurate, objective data for analysis is a challenge.”  
    • Nils Peterson
       
      Rather than faculty using the community to help with assessment, they are outsourcing to a paid assessor -- this is the result of undertaking this thinking while remaining at the institution-centric end of the spectrum we developed.
  • I asked about faculty pushback. “Not so much,” Galvin says, “not after faculty understand that the process is not intended to evaluate their work.”
    • Nils Peterson
       
      red flag
  • the annual reports required by this process were producing “heaps of paper” while failing to track trends and developments over time. “It’s like our departments were starting anew every year,” Chaplot says. “We wanted to find a way to house the data that gave us access to what was done in the past,” which meant moving from discrete paper reports to an electronic database.
    • Joshua Yeidel
       
      It's not clear whether the "database" is housing measurements, narratives and reflections, or all of the above.
  • Can eLumen represent student learning in language? No, but it can quantify the number of boxes checked against number of boxes not checked.”
  • developing a national repository of resources, rubrics, outcomes statements, and the like that can be reviewed and downloaded by users
    • Nils Peterson
       
      in building our repository we could well open-source these tools, no need to lock them up
  • “These solutions cement the idea that assessment is an administrative rather than an educational enterprise, focused largely on accountability. They increasingly remove assessment decision making from the everyday rhythm of teaching and learning and the realm of the faculty.
    • Nils Peterson
       
      Over-the-wall assessment; see the Transformative Assessment rubric for more detail.
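The "harvesting gradebook" roll-up discussed in the annotations above can be pictured as a small data model: one rubric score per student per learning outcome, recorded in the course where the work happened, then aggregated upward for program-level views. The sketch below is an assumption about the shape of such a system, not eLumen's actual schema; all names and scores are hypothetical.

```python
# Hedged sketch of a "harvesting gradebook" roll-up: rubric scores are
# recorded per student per learning outcome in individual courses, then
# aggregated to a program-level view. Hypothetical data model, not
# eLumen's actual schema.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class RubricScore:
    student: str
    outcome: str   # e.g., "critical thinking"
    course: str
    score: int     # rubric level, e.g., 1 (beginning) to 4 (capstone)

records = [
    RubricScore("s1", "critical thinking", "ENGL101", 3),
    RubricScore("s1", "critical thinking", "HIST240", 4),
    RubricScore("s2", "critical thinking", "ENGL101", 2),
    RubricScore("s2", "effective communication", "ENGL101", 3),
]

# Running record per student: the view an instructor (or student) sees.
by_student = defaultdict(list)
for r in records:
    by_student[(r.student, r.outcome)].append((r.course, r.score))

for (student, outcome), history in by_student.items():
    print(f"{student} / {outcome}: {history}")

# Program-level roll-up: mean score per outcome across all students.
by_outcome = defaultdict(list)
for r in records:
    by_outcome[r.outcome].append(r.score)

for outcome, scores in by_outcome.items():
    print(f"{outcome}: mean {mean(scores):.2f} across {len(scores)} scores")
```

The design choice the annotations debate is visible here: if the per-student records double as grading, faculty do one task, not two; if grading happens on different metrics elsewhere, this becomes the "data-entry step" the quoted faculty member worries about.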
Nils Peterson

2009 Annual Meeting | Conference Program - 0 views

  • This session explores the notion that assessment for transformational learning is best utilized as a learning tool. By providing timely, transparent, and appropriate feedback, both to students and to the institution itself, learning is enhanced – a far different motive for assessment than is external accountability.
    • Nils Peterson
       
      need to get to these guys with our harvesting gradebook ideas...
    • Nils Peterson
       
      decided to attend another session. Hersh was OK before lunch, but the talk by Pan looks more promising
  • Academic and corporate communities agree on the urgent need for contemporary, research-based pedagogies of engagement in STEM fields. Participants will learn how leaders from academic departments and institutions have collaborated with leaders from the corporate and business community in regional networks to ensure that graduates meet the expectations of prospective employers and the public.
    • Nils Peterson
       
      here is another session with links to CTLT work, both harvesting gradebook and the ABET work
  • Professor Pan will discuss the reflective teaching methods used to prepare students to recognize and mobilize community assets as they design, implement, and evaluate projects to improve public health.
    • Nils Peterson
       
      Students are tasked to learn about a community (ride the bus, make a Doc appt), then tasked to do a non-clinical health project in that community (they do plenty of clinical work elsewhere in the program). The project must build capacity in the community that survives after the student leaves. Example: work with Hispanic parents in Sacramento on a parenting issue, e.g., getting kids to sleep on time. The student had identified a problem in the community, but the first project idea was to show a video, which was not capacity building. Rather than showing the video, they used it as a template and made a new video, with families as actors. The result was a Spanish-language DVD that the community could own. Pan thinks this is increased capacity in the community.
  • ...17 more annotations...
  • Freshman Survey annually examines the academic habits of mind of entering first-year students.  Along with academic involvement, the survey examines diversity, civic engagement, college admissions and expectations of college. 
  • The project aims to promote faculty and student assessment of undergraduate research products in relation to outcomes associated with basic research skills and general undergraduate learning principles (communication and quantitative reasoning, critical thinking, and integration and application of knowledge).
  • They focus educators on the magnitude of the challenge to prepare an ever-increasingly diverse, globally-connected student body with the knowledge, ability, processes, and confidence to adapt to diverse environments and respond creatively to the enormous issues facing humankind.
  • One challenge of civic engagement in the co-curriculum is the merging of cost and outcome: creating meaningful experiences for students and the community with small staffs, on small budgets, while still having significant, purposeful impact. 
  • a)claims that faculty are the sole arbiters of what constitutes a liberal education and b) counter claims that student life professionals also possess the knowledge and expertise critical to defining students’ total learning experiences.  
    • Nils Peterson
       
      also, how many angels can dance on the head of a pin?
  • This session introduces a three-year national effort to document how colleges and universities are using assessment data to improve teaching and learning and to facilitate the dissemination and adoption of best practices in the assessment of college learning outcomes.
  • Exciting pedagogies of engagement abound, including undergraduate research, community-engaged learning, interdisciplinary exploration, and international study.  However, such experiences are typically optional and non-credit-bearing for students, and/or “on top of” the workload for faculty. This session explores strategies for integrating engaged learning into the institutional fabric (curriculum, student role, faculty role) and increasing access to these transformative experiences.
  • hands-on experiential learning, especially in collaboration with other students, is a superior pedagogy but how can this be provided in increasingly larger introductory classes? 
  • As educators seek innovative ways to manage knowledge and expand interdisciplinary attention to pressing global issues, as students and parents look for assurances that their tuition investment will pay professional dividends, and as alumni look for meaningful ways to give back to the institutions that nurtured and prepared them, colleges and universities can integrate these disparate goals through the Guilds, intergenerational membership networks that draw strength from the contributions of all of their members.
    • Nils Peterson
       
      see Theron's ideas for COMM.
  • Civic engagement learning derives its power from the engagement of students with real communities—local, national, and global. This panel explores the relationship between student learning and the contexts in which that learning unfolds by examining programs that place students in diverse contexts close to campus and far afield.
  • For institutional assessment to make a difference for student learning its results must result in changes in classroom practice. This session explores ways in which the institutional assessment of student learning, such as the Wabash National Study of Liberal Arts Education and the Collegiate Learning Assessment, can be connected to our classrooms.
  • Interdisciplinary Teaching and Object-Based Learning in Campus Museums
  • To address pressing needs of their communities, government and non-profit agencies are requesting higher education to provide education in an array of human and social services. To serve these needs effectively, higher education needs to broaden and deepen its consultation with practitioners in designing new curricula. Colleges and universities would do well to consider a curriculum development model that requires consultation not only with potential employers, but also with practitioners and supervisors of practitioners.
  • Should Academics be Active? Campuses and Cutting Edge Civic Engagement
  • If transformational liberal education requires engaging the whole student across the educational experience, how can colleges and universities renew strategy and allocate resources effectively to support it?  How can assessment be used to improve student learning and strengthen a transformational learning environment? 
    • Nils Peterson
       
      The purpose of a university is not to grant degrees; it has something to do with learning. Keeling's perspective is that the learning should be transformative, changing perspective: liberating and emancipatory. Learning is a complex interaction among the student and others, new knowledge and experience, events, and the learner's own aspirations; learners construct meaning from these elements. "We change our minds," altering the brain at the micro level. Brain imaging research demonstrates that analogical (abstract) learning demands more from more areas of the brain than semantic (concrete) learning. Mind is not an abstraction; it is based in the brain, a working physical organ. Learner and environment matter to the learning. See Seeds magazine, current issue, on brain imaging and learning. Segue from brain research to the need for the university to educate the whole student. He uses the term 'transformative learning' to mean transforming the learning (re-wiring the brain) but does not use transformative assessment (see Wikipedia).
  • But as public debates roil, higher education has been more reactive than proactive on the question of how best to ensure that today’s students are fully prepared for a fast-paced future.
    • Nils Peterson
       
      The Bologna process is being adopted (slowly) in the EU; the idea is to make academic degrees more interchangeable and understandable across the EU. Three elements:
      * Qualification Frameworks (transnational, national, disciplinary). Frameworks are graduated, with increasing expertise and autonomy required for the upper levels. They sound like broad skills that we might recognize in the WSU CITR. Not clear how they are assessed.
      * Tuning (benchmarking) process.
      * Diploma Supplements (licensure, thesis, other capstone activities), which extend the information in the transcript. A US equivalent might be the Kuali Student system for extending the transcript.
      Emerging dialog on American capability. This dialog is coming from two directions: on campus, and from employers. Connect to the Greater Expectations (2000-2005) initiative, which concluded that American HE has islands of innovation and led to the LEAP (Liberal Education and America's Promise) initiative (2005-2015). The dialog is converging because of several forces:
      * Changes in the balance of economic and political power: "the rise of the rest (of the world)."
      * A global economy in which innovation is key to growth and prosperity.
      LEAP attempts to frame the dialog (look for LEAP on the AACU website). Miami-Dade CC has announced a LEAP-derived covenant; the goals must span all aspects of their programs. Define liberal education: knowledge of human cultures and the physical and natural world; intellectual and practical skills; responsibility; integrative skills. The marker of success (here is where the Transformative Gradebook fits in) is evidence that students can apply the essential learning outcomes to complex, unscripted problems and real-world settings. Current failure: we have not tracked our progress, or have found that we are not doing well. See the AACU employer survey: 5-10% of current graduates are taking courses that would meet the global competencies (transcript analysis). See NSSE on personal and social responsibility gains, less tha
  • Dr. Pan will also talk about strategies for breaking down cultural barriers.
    • Nils Peterson
       
      Pan found a non-profit agency to be a conduit and coordinator to level the power between the university and grassroots orgs; this helped with cultural gaps.
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessmen... - 2 views

  •  
    "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind-leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. "
  •  
    Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios, using a modified version of the VALUE rubrics developed by the AACU. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project. (A sketch of this kind of correlation check appears after this entry.)
  •  
    "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement. " This begs some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Gary Brown

Designing Effective Assessments: Q&A with Trudy Banta - 0 views

  • One-hundred forty-six assessment examples were sent to us, and we used all of those in one way or another in the book. I think it’s a pretty fair sample of what’s going on in higher education assessment. Yet most of the programs that we looked at had only been underway for two, three, or four years. When we asked what the long-term impact of doing assessment and using the findings to improve programs had been, in only six percent of the cases were the authors able to say that student learning had been improved.
  •  
    Though an advertisement for a workshop, this piece from Trudy Banta confirms our own suspicions. The blurb here further confirms that we need not look far for models--our energy will be better spent making our work at WSU a model.
Nils Peterson

National Institute for Learning Outcomes Assessment - 1 views

  • Of the various ways to assess student learning outcomes, many faculty members prefer what are called “authentic” approaches that document student performance during or at the end of a course or program of study. Authentic assessments typically ask students to generate rather than choose a response to demonstrate what they know and can do. In their best form, such assessments are flexible and closely aligned with teaching and learning processes, and represent some of students' more meaningful educational experiences. In this paper, assessment experts Trudy Banta, Merilee Griffin, Teresa Flateby, and Susan Kahn describe the development of several promising authentic assessment approaches.
  • Educators and policy makers in postsecondary education are interested in assessment processes that improve student learning, and at the same time provide comparable data for the purpose of demonstrating accountability.
  • First, ePortfolios provide an in-depth, long-term view of student achievement on a range of skills and abilities instead of a quick snapshot based on a single sample of learning outcomes. Second, a system of rubrics used to evaluate student writing and depth of learning has been combined with faculty learning and team assessments, and is now being used at multiple institutions. Third, online assessment communities link local faculty members in collaborative work to develop shared norms and teaching capacity, and then link local communities with each other in a growing system of assessment.
    • Nils Peterson
       
      Hey, does this sound familiar? I'm guessing the portfolios are not anywhere on the Internet, but we're otherwise in good company.
  • ...1 more annotation...
  • Three Promising Alternatives for Assessing College Students' Knowledge and Skills
    • Nils Peterson
       
      I'm not sure they are 'alternatives' so much as 3 elements we would combine into a single strategy
Gary Brown

Assess this! - 5 views

  • Assess this! is a gathering place for information and resources about new and better ways to promote learning in higher education, with a special focus on high-impact educational practices, student engagement, general or liberal education, and assessment of learning.
  • If you'd like to help make Assess this! more useful, there are some things you can do. You can comment on a post by clicking on the comments link following the post.
  • Of the various ways to assess student learning outcomes, many faculty members prefer what are called “authentic” approaches that document student performance during or at the end of a course or program of study. In this paper, assessment experts Trudy Banta, Merilee Griffin, Teresa Flateby, and Susan Kahn describe the development of several promising authentic assessment approaches.
  • ...5 more annotations...
  • Going Public: Douglas C. Bennett, President of Earlham College, suggests that each institution have a public learning audit document, and gives the example of what this means for Earlham College as a form of public accountability.
  • More Transparency: Martha Kanter, of the US Education Department, calls for more transparency in the way higher education does accreditation.
  • Despite the uptick in activity, "I still feel like there's no there there" when it comes to colleges' efforts to measure student learning, Kevin Carey, policy director at Education Sector, said in a speech at the Council for Higher Education Accreditation meeting Tuesday.
  • Most of the assessment activity on campuses can be found in nooks and crannies of the institutions - by individual professors, or in one department - and it is often not tied to goals set broadly at the institutional level.
  • Nine Principles of Good Practice for Assessing Student Learning
  •  
    A very interesting useful site where we might help ourselves by getting involved.
Gary Brown

At Colleges, Assessment Satisfies Only Accreditors - Letters to the Editor - The Chroni... - 2 views

  • Some of that is due to the influence of the traditional academic freedom that faculty members have enjoyed. Some of it is ego. And some of it is lack of understanding of how it can work. There is also a huge disconnect between satisfying outside parties, like accreditors and the government, and using assessment as a quality-improvement system.
  • We are driven by regional accreditation and program-level accreditation, not by quality improvement. At our institution, we talk about assessment a lot, and do just enough to satisfy the requirements of our outside reviewers.
  • Standardized direct measures, like the Major Field Test for M.B.A. graduates?
  • ...5 more annotations...
  • The problem with the test is that it does not directly align with our program's learning outcomes and it does not yield useful information for closing the loop. So why do we use it? Because it is accepted by accreditors as a direct measure and it is less expensive and time-consuming than more useful tools.
  • Without exception, the most useful information for improving the program and student learning comes from the anecdotal and indirect information.
  • We don't have the time and the resources to do what we really want to do to continuously improve the quality of our programs and instruction. We don't have a culture of continuous improvement. We don't make changes on a regular basis, because we are trapped by the catalog publishing cycle, accreditation visits, and the entrenched misunderstanding of the purposes of assessment.
  • The institutions that use it are ones that have adequate resources to do so. The time necessary for training, whole-system involvement, and developing the programs for improvement is daunting. And it is only being used by one regional accrediting body, as far as I know.
  • Until higher education as a whole is willing to look at changing its approach to assessment, I don't think it will happen
  •  
    The challenge and another piece of evidence that the nuances of assessment as it related to teaching and learning remain elusive.
Gary Brown

News: Assessment vs. Action - Inside Higher Ed - 0 views

  • The assessment movement has firmly taken hold in American higher education, if you judge it by how many colleges are engaged in measuring what undergraduates learn. But if you judge by how many of them use that information to do something, the picture is different.
  • The most common approach used for institutional assessment is a nationally normed survey of students.
  • But the survey found more attention to learning outcomes at the program level, especially by community colleges.
  • ...3 more annotations...
  • Much smaller percentages of colleges report that assessment is based on external evaluations of student work (9 percent), student portfolios (8 percent) and employer interviews (8 percent).
  • “Some faculty and staff at prestigious, highly selective campuses wonder why documenting something already understood to be superior is warranted. They have little to gain and perhaps a lot to lose,” the report says. “On the other hand, many colleagues at lower-status campuses often feel pressed to demonstrate their worth; some worry that they may not fare well in comparison with their better-resourced, more selective counterparts. Here too, anxiety may morph into a perceived threat if the results disappoint.”
  • The provosts in the survey said what they most needed to more effectively use assessment was more faculty involvement, with 66 percent citing this need. The percentage was even greater (80 percent) at doctoral institutions.George Kuh, director of the institute, said that he viewed the results as "cause for cautious optimism," and that the reality of so much assessment activity makes it possible to work on making better use of it.
  •  
    From the National Institute for Learning Outcomes Assessment: "The provosts in the survey said what they most needed to more effectively use assessment was more faculty involvement, with 66 percent citing this need. The percentage was even greater (80 percent) at doctoral institutions."
  •  
    another report on survey with interesting implications
Joshua Yeidel

Digication e-Portfolios: Highered - Assessment - 0 views

  •  
    "Our web-based assessment solution for tracking, comparing, and reporting on student progress and performance gives faculty and administrators the tools they need to assess a class, department, or institution based on your standards, goals, or objectives. The Digication AMS integrates tightly with our award winning e-Portfolio system, enabling students to record and showcase learning outcomes within customizable, media friendly templates."
  •  
    Could this start out with program portfolios, and grow to include student work?
Joshua Yeidel

Scholar Raises Doubts About the Value of a Test of Student Learning - Research - The Ch... - 3 views

  • Beginning in 2011, the 331 universities that participate in the Voluntary System of Accountability will be expected to publicly report their students' performance on one of three national tests of college-level learning.
  • But at least one of those three tests—the Collegiate Learning Assessment, or CLA—isn't quite ready to be used as a tool of public accountability, a scholar suggested here on Tuesday during the annual meeting of the Association for Institutional Research.
  • Students' performance on the test was strongly correlated with how long they spent taking it.
  • ...6 more annotations...
  • Besides the CLA, which is sponsored by the Council for Aid to Education, other tests that participants in the voluntary system may use are the Collegiate Assessment of Academic Proficiency, from ACT Inc., and the Measure of Academic Proficiency and Progress, offered by the Educational Testing Service.
  • The test has sometimes been criticized for relying on a cross-sectional system rather than a longitudinal model, in which the same students would be tested in their first and fourth years of college.
  • there have long been concerns about just how motivated students are to perform well on the CLA.
  • Mr. Hosch suggested that small groups of similar colleges should create consortia for measuring student learning. For example, five liberal-arts colleges might create a common pool of faculty members that would evaluate senior theses from all five colleges. "That wouldn't be a national measure," Mr. Hosch said, "but it would be much more authentic."
  • Mr. Shavelson said. "The challenge confronting higher education is for institutions to address the recruitment and motivation issues if they are to get useful data. From my perspective, we need to integrate assessment into teaching and learning as part of students' programs of study, thereby raising the stakes a bit while enhancing motivation of both students and faculty
  • "I do agree with his central point that it would not be prudent to move to an accountability system based on cross-sectional assessments of freshmen and seniors at an institution," said Mr. Arum, who is an author, with Josipa Roksa, of Academically Adrift: Limited Learning on College Campuses, forthcoming from the University of Chicago Press
  •  
    CLA debunking, but the best item may be the forthcoming book on "limited learning on College Campuses."
  •  
    "Micheal Scriven and I spent more than a few years trying to apply his multiple-ranking item tool (a very robust and creative tool, I recommend it to others when the alternative is multiple-choice items) to the assessment of critical thinking in health care professionals. The result might be deemed partially successful, at best. I eventually abandoned the test after about 10,000 administrations because the scoring was so complex we could not place it in non-technical hands."
  •  
    In comments on an article about CLA, Scriven's name comes up...
Joshua Yeidel

Using Outcome Information: Making Data Pay Off - 1 views

  •  
    Sixth in a series on outcome management for nonprofits. Grist for the mill for any Assessment Handbook we might make. "Systematic use of outcome data pays off. In an independent survey of nearly 400 health and human service organizations, program directors agreed or strongly agreed that implementing program outcome measurement had helped their programs
    * focus staff on shared goals (88%);
    * communicate results to stakeholders (88%);
    * clarify program purpose (86%);
    * identify effective practices (84%);
    * compete for resources (83%);
    * enhance record keeping (80%); and
    * improve service delivery (76%)."
Joshua Yeidel

Jim Dudley on Letting Go of Rigid Adherence to What Evaluation Should Look Like | AEA365 - 1 views

  •  
    "Recently, in working with a board of directors of a grassroots organization, I was reminded of how important it is to "let go" of rigid adherence to typologies and other traditional notions of what an evaluation should look like. For example, I completed an evaluation that incorporated elements of all of the stages of program development - a needs assessment (e.g., how much do board members know about their programs and budget), a process evaluation (e.g., how well do the board members communicate with each other when they meet), and an outcome evaluation (e.g., how effective is their marketing plan for recruiting children and families for its programs)."
  •  
    Needs evaluation, process evaluation, outcomes evaluation -- all useful for improvement.
Joshua Yeidel

Program Assessment of Student Learning - 3 views

  •  
    "It is hoped that, in some small way, this blog can both engage and challenge faculty and administrators alike to become more intentional in their program assessment efforts, creating systematic and efficient processes that actually have the likelihood of improving student learning while honoring faculty time."
  •  
    As recommended by Ashley. Apparently Dr. Rogers' blog is just starting up, so you can "get in on the ground floor".
Kimberly Green

Movie Clips and Copyright - 0 views

  •  
    Video clips -- sometimes the copyright question comes up, so this green light is good news. Video clips may lend themselves to scenario-based assessments: instead of reading a long article, students could look at a digitally presented case to analyze and critique, which might open up a lot of possibilities for assessment activities. The latest round of rule changes, issued Monday by the U.S. Copyright Office, deals with what is legal and what is not as far as decrypting and repurposing copyrighted content. One change in particular is making waves in academe: an exemption that allows professors in all fields and "film and media studies students" to hack encrypted DVD content and clip "short portions" into documentary films and "non-commercial videos." (The agency does not define "short portions.") This means that any professors can legally extract movie clips and incorporate them into lectures, as long as they are willing to decrypt them - a task made relatively easy by widely available programs known as "DVD rippers." The exemption also permits professors to use ripped content in non-classroom settings that are similarly protected under "fair use" - such as presentations at academic conferences.
Gary Brown

Program Assessment of Student Learning: July 2010 - 3 views

  • There are lots of considerations when considering a technology solution to the outcomes assessment process. The first thing is to be very clear about what a system can and cannot do. It CANNOT do your program assessment and evaluation for you! The institution or program must first define the intended outcomes and performance indicators. Without a doubt, that is the most difficult part of the process. Once the indicators have been defined you need to be clear about the role of students and faculty in the use of the technology. Also, who is the technology "owner"--who will maintain it, keep the outcomes/indicators current, generate reports, etc.?
  •  
    This question returns to us, so here is a resource and key to be able to point to.
Gary Brown

Scholars Assess Their Progress on Improving Student Learning - Research - The Chronicle... - 0 views

  • International Society for the Scholarship of Teaching and Learning, which drew 650 people. The scholars who gathered here were cautiously hopeful about colleges' commitment to the study of student learning, even as the Carnegie Foundation winds down its own project. (Mr. Shulman stepped down as president last year, and the foundation's scholarship-of-teaching-and-learning program formally came to an end last week.) "It's still a fragile thing," said Pat Hutchings, the Carnegie Foundation's vice president, in an interview here. "But I think there's a huge amount of momentum." She cited recent growth in faculty teaching centers,
  • Mary Taylor Huber, director of the foundation's Integrative Learning Project, said that pressure from accrediting organizations, policy makers, and the public has encouraged colleges to pour new resources into this work.
  • The scholars here believe that it is much more useful to try to measure and improve student learning at the level of individual courses. Institutionwide tests like the Collegiate Learning Assessment have limited utility at best, they said.
  • ...6 more annotations...
  • Mr. Bass and Toru Iiyoshi, a senior strategist at the Massachusetts Institute of Technology's office of educational innovation and technology, pointed to an emerging crop of online multimedia projects where college instructors can share findings about their teaching. Those sites include Merlot and the Digital Storytelling Multimedia Archive.
  • "If you use a more generic instrument, you can give the accreditors all the data in the world, but that's not really helpful to faculty at the department level," said the society's president, Jennifer Meta Robinson, in an interview. (Ms. Robinson is also a senior lecturer in communication and culture at Indiana University at Bloomington.)
  • "We need to create 'middle spaces' for the scholarship of teaching and learning," said Randall Bass, assistant provost for teaching and learning initiatives at Georgetown University, during a conference session on Friday.
  • It is vital, Ms. Peseta said, for scholars' articles about teaching and learning to be engaging and human. But at the same time, she urged scholars not to dumb down their statistical analyses or the theoretical foundations of their studies. She even put in a rare good word for jargon.
  • No one had a ready answer. Ms. Huber, of the Carnegie Foundation, noted that a vast number of intervening variables make it difficult to assess the effectiveness of any educational project.
  • "Well, I guess we have a couple of thousand years' worth of evidence that people don't listen to each other, and that we don't build knowledge," Mr. Bass quipped. "So we're building on that momentum."
  •  
    Note our friends Randy Bass (AAEEBL) and Mary Huber are prominent.
Gary Brown

News: Assessing the Assessments - Inside Higher Ed - 2 views

  • The validity of a measure is based on evidence regarding the inferences and assumptions that are intended to be made and the uses to which the measure will be put. Showing that the three tests in question are comparable does not support Shulenburger's assertion regarding the value-added measure as a valid indicator of institutional effectiveness. The claim that public university groups have previously judged the value-added measure as appropriate does not tell us anything about the evidence upon which this judgment was based nor the conditions under which the judgment was reached. As someone familiar with the process, I would assert that there was no compelling evidence presented that these instruments and the value-added measure were validated for making this assertion (no such evidence was available at the time), which is the intended use in the VSA.
  • (however much the sellers of these tests tell you that those samples are "representative"), they provide an easy way out for academic administrators who want to avoid the time-and-effort consuming but incredibly valuable task of developing detailed major program learning outcome statements (even the specialized accrediting bodies don't get down to the level of discrete, operational statements that guide faculty toward appropriate assessment design)
  • If somebody really cared about "value added," they could look at each student's first essay in this course, and compare it with that same student's last essay in this course. This person could then evaluate each individual student's increased mastery of the subject-matter in the course (there's a lot) and also the increased writing skill, if any.
  • ...1 more annotation...
  • These skills cannot be separated out from student success in learning sophisticated subject-matter, because understanding anthropology, or history of science, or organic chemistry, or Japanese painting, is not a matter of absorbing individual facts, but learning facts and ways of thinking about them in a seamless, synthetic way. No assessment scheme that neglects these obvious facts about higher education is going to do anybody any good, and we'll be wasting valuable intellectual and financial resources if we try to design one.
  •  
    ongoing discussion of these tools. Note Longanecker's comment and ask me why.
Kimberly Green

Strategic National Arts Alumni Project (SNAAP) - 0 views

  •  
    WSU is participating in this survey. Looks interesting: a follow-up on students who graduate with an arts degree. Could be useful in program assessment in a number of ways (a model, sample questions, as well as ways to leverage nationally collected data). --Kimberly
    "Welcome to the Strategic National Arts Alumni Project (SNAAP), an annual online survey, data management, and institutional improvement system designed to enhance the impact of arts-school education. SNAAP partners with arts high schools, art and design colleges, conservatories and arts programs within colleges and universities to administer the survey to their graduates. SNAAP is a project of the Indiana University Center for Postsecondary Research in collaboration with the Vanderbilt University Curb Center for Art, Enterprise, and Public Policy. Lead funding is provided by the Surdna Foundation, with major partnership support from the Houston Endowment, Barr Foundation, Cleveland Foundation, Educational Foundation of America and the National Endowment for the Arts."