
Home / CTLT and Friends / Group items tagged: student evaluations


Nils Peterson

2009 Annual Meeting | Conference Program - 0 views

  • This session explores the notion that assessment for transformational learning is best utilized as a learning tool. By providing timely, transparent, and appropriate feedback, both to students and to the institution itself, learning is enhanced – a far different motive for assessment than is external accountability.
    • Nils Peterson
       
      need to get to these guys with our harvesting gradebook ideas...
    • Nils Peterson
       
      decided to attend another session. Hersh was OK before lunch, but the talk by Pan looks more promising
  • Academic and corporate communities agree on the urgent need for contemporary, research-based pedagogies of engagement in STEM fields. Participants will learn how leaders from academic departments and institutions have collaborated with leaders from the corporate and business community in regional networks to ensure that graduates meet the expectations of prospective employers and the public.
    • Nils Peterson
       
      here is another session with links to CTLT work, both harvesting gradebook and the ABET work
  • Professor Pan will discuss the reflective teaching methods used to prepare students to recognize and mobilize community assets as they design, implement, and evaluate projects to improve public health.
    • Nils Peterson
       
      Students are tasked to learn about a community: ride the bus, make a doctor's appointment. Then they are tasked to do a non-clinical health project in that community (they do plenty of clinical work elsewhere in the program). The project must build capacity in the community so that it survives after the student leaves. Example: work with Hispanic parents in Sacramento on a parenting issue, e.g., getting kids to sleep on time. The student had identified a problem in the community, but the first project idea was to show a video, which was not capacity building. Rather than showing the video, the student used it as a template and made a new video, with families as the actors. The result was a Spanish-language DVD that the community could own. Pan thinks this is increased capacity in the community.
  • ...17 more annotations...
  • Freshman Survey annually examines the academic habits of mind of entering first-year students.  Along with academic involvement, the survey examines diversity, civic engagement, college admissions and expectations of college. 
  • The project aims to promote faculty and student assessment of undergraduate research products in relation to outcomes associated with basic research skills and general undergraduate learning principles (communication and quantitative reasoning, critical thinking, and integration and application of knowledge).
  • They focus educators on the magnitude of the challenge to prepare an ever-increasingly diverse, globally-connected student body with the knowledge, ability, processes, and confidence to adapt to diverse environments and respond creatively to the enormous issues facing humankind.
  • One challenge of civic engagement in the co-curriculum is the merging of cost and outcome: creating meaningful experiences for students and the community with small staffs, on small budgets, while still having significant, purposeful impact. 
  • a) claims that faculty are the sole arbiters of what constitutes a liberal education and b) counter claims that student life professionals also possess the knowledge and expertise critical to defining students’ total learning experiences.
    • Nils Peterson
       
      also, how many angels can dance on the head of a pin?
  • This session introduces a three-year national effort to document how colleges and universities are using assessment data to improve teaching and learning and to facilitate the dissemination and adoption of best practices in the assessment of college learning outcomes.
  • Exciting pedagogies of engagement abound, including undergraduate research, community-engaged learning, interdisciplinary exploration, and international study.  However, such experiences are typically optional and non-credit-bearing for students, and/or “on top of” the workload for faculty. This session explores strategies for integrating engaged learning into the institutional fabric (curriculum, student role, faculty role) and increasing access to these transformative experiences.
  • hands-on experiential learning, especially in collaboration with other students, is a superior pedagogy but how can this be provided in increasingly larger introductory classes? 
  • As educators seek innovative ways to manage knowledge and expand interdisciplinary attention to pressing global issues, as students and parents look for assurances that their tuition investment will pay professional dividends, and as alumni look for meaningful ways to give back to the institutions that nurtured and prepared them, colleges and universities can integrate these disparate goals through the Guilds, intergenerational membership networks that draw strength from the contributions of all of their members.
    • Nils Peterson
       
      see Theron's ideas for COMM.
  • Civic engagement learning derives its power from the engagement of students with real communities—local, national, and global. This panel explores the relationship between student learning and the contexts in which that learning unfolds by examining programs that place students in diverse contexts close to campus and far afield.
  • For institutional assessment to make a difference for student learning its results must result in changes in classroom practice. This session explores ways in which the institutional assessment of student learning, such as the Wabash National Study of Liberal Arts Education and the Collegiate Learning Assessment, can be connected to our classrooms.
  • Interdisciplinary Teaching and Object-Based Learning in Campus Museums
  • To address pressing needs of their communities, government and non-profit agencies are requesting higher education to provide education in an array of human and social services. To serve these needs effectively, higher education needs to broaden and deepen its consultation with practitioners in designing new curricula. Colleges and universities would do well to consider a curriculum development model that requires consultation not only with potential employers, but also with practitioners and supervisors of practitioners.
  • Should Academics be Active? Campuses and Cutting Edge Civic Engagement
  • If transformational liberal education requires engaging the whole student across the educational experience, how can colleges and universities renew strategy and allocate resources effectively to support it?  How can assessment be used to improve student learning and strengthen a transformational learning environment? 
    • Nils Peterson
       
      Purpose of the university is not to grant degrees; it has something to do with learning. Keeling's perspective is that the learning should be transformative: changing perspective, liberating and emancipatory. Learning is a complex interaction among the student and others, new knowledge and experience, events, and the learner's own aspirations; learners construct meaning from these elements. "We change our minds," altering the brain at the micro level. Brain imaging research demonstrates that analogical (abstract) learning demands more from more areas of the brain than semantic (concrete) learning. Mind is not an abstraction; it is based in the brain, a working physical organ. Learner and environment matter to the learning. See Seed magazine's current issue on brain imaging and learning. Segue from brain research to the need for the university to educate the whole student. Keeling uses the term 'transformative learning' to mean transforming the learner (re-wiring the brain) but does not use transformative assessment (see Wikipedia).
  • But as public debates roil, higher education has been more reactive than proactive on the question of how best to ensure that today’s students are fully prepared for a fast-paced future.
    • Nils Peterson
       
      Bologna process is being adopted (slowly) in the EU; the idea is to make academic degrees more interchangeable and understandable across the EU. Three elements:
      * Qualification Frameworks (transnational, national, disciplinary). Frameworks are graduated, with increasing expertise and autonomy required for the upper levels. They sound like broad skills that we might recognize in the WSU CITR. Not clear how they are assessed.
      * Tuning (benchmarking) process.
      * Diploma Supplements (licensure, thesis, other capstone activities). These extend the information in the transcript. A US equivalent might be the Kuali Student system for extending the transcript.
      Emerging dialog on American capability: this dialog is coming from two directions, on campus and from employers. Connect to the Greater Expectations initiative (2000-2005), which concluded that American HE has islands of innovation and led to the LEAP (Liberal Education and America's Promise) initiative (2005-2015). The dialog is converging because of several forces:
      * Changes in the balance of economic and political power: "the rise of the rest (of the world)."
      * A global economy in which innovation is key to growth and prosperity.
      LEAP attempts to frame the dialog (look for LEAP on the AAC&U website). Miami-Dade CC has announced a LEAP-derived covenant; the goals must span all aspects of their programs. It defines liberal education as knowledge of human cultures and the physical and natural world; intellectual and practical skills; responsibility; and integrative skills. The marker of success (here is where the Transformative Gradebook fits in) is evidence that students can apply the essential learning outcomes to complex, unscripted problems and real-world settings. Current failure: we have not tracked our progress, or have found that we are not doing well. See the AAC&U employer survey: 5-10% of current graduates are taking courses that would meet the global competencies (transcript analysis). See NSSE on personal and social responsibility gains, less tha
  • Dr. Pan will also talk about strategies for breaking down cultural barriers.
    • Nils Peterson
       
      Pan found a non-profit agency to be a conduit and coordinator to level the power between the university and grassroots organizations; this helped with cultural gaps.
Gary Brown

Evaluations That Make the Grade: 4 Ways to Improve Rating the Faculty - Teaching - The ... - 1 views

  • For students, the act of filling out those forms is sometimes a fleeting, half-conscious moment. But for instructors whose careers can live and die by student evaluations, getting back the forms is an hour of high anxiety
  • "They have destroyed higher education." Mr. Crumbley believes the forms lead inexorably to grade inflation and the dumbing down of the curriculum.
  • Texas enacted a law that will require every public college to post each faculty member's student-evaluation scores on a public Web site.
  • ...10 more annotations...
  • The IDEA Center, an education research group based at Kansas State University, has been spreading its particular course-evaluation gospel since 1975. The central innovation of the IDEA system is that departments can tailor their evaluation forms to emphasize whichever learning objectives are most important in their discipline.
  • (Roughly 350 colleges use the IDEA Center's system, though in some cases only a single department or academic unit participates.)
  • The new North Texas instrument that came from these efforts tries to correct for biases that are beyond an instructor's control. The questionnaire asks students, for example, whether the classroom had an appropriate size and layout for the course. If students were unhappy with the classroom, and if it appears that their unhappiness inappropriately colored their evaluations of the instructor, the system can adjust the instructor's scores accordingly.
  • The survey instrument, known as SALG, for Student Assessment of their Learning Gains, is now used by instructors across the country. The project's Web site contains more than 900 templates, mostly for courses in the sciences.
  • "So the ability to do some quantitative analysis of these comments really allows you to take a more nuanced and effective look at what these students are really saying."
  • Mr. Frick and his colleagues found that his new course-evaluation form was strongly correlated with both students' and instructors' own measures of how well the students had mastered each course's learning goals.
  • Elaine Seymour, who was then director of ethnography and evaluation research at the University of Colorado at Boulder, was assisting with a National Science Foundation project to improve the quality of science instruction at the college level. She found that many instructors were reluctant to try new teaching techniques because they feared their course-evaluation ratings might decline.
  • "Students are the inventory," Mr. Crumbley says. "The real stakeholders in higher education are employers, society, the people who hire our graduates. But what we do is ask the inventory if a professor is good or bad. At General Motors," he says, "you don't ask the cars which factory workers are good at their jobs. You check the cars for defects, you ask the drivers, and that's how you know how the workers are doing."
  • William H. Pallett, president of the IDEA Center, says that when course rating surveys are well-designed and instructors make clear that they care about them, students will answer honestly and thoughtfully.
  • In Mr. Bain's view, student evaluations should be just one of several tools colleges use to assess teaching. Peers should regularly visit one another's classrooms, he argues. And professors should develop "teaching portfolios" that demonstrate their ability to do the kinds of instruction that are most important in their particular disciplines. "It's kind of ironic that we grab onto something that seems fixed and fast and absolute, rather than something that seems a little bit messy," he says. "Making decisions about the ability of someone to cultivate someone else's learning is inherently a messy process. It can't be reduced to a formula."
  •  
    Old friends at the IDEA Center, and an old but persistent issue.
Nils Peterson

Views: Changing the Equation - Inside Higher Ed - 1 views

  • But each year, after some gnashing of teeth, we opted to set tuition and institutional aid at levels that would maximize our net tuition revenue. Why? We were following conventional wisdom that said that investing more resources translates into higher quality and higher quality attracts more resources
  • ...19 more annotations...
  • those who control influential rating systems of the sort published by U.S. News & World Report -- define academic quality as small classes taught by distinguished faculty, grand campuses with impressive libraries and laboratories, and bright students heavily recruited. Since all of these indicators of quality are costly, my college’s pursuit of quality, like that of so many others, led us to seek more revenue to spend on quality improvements. And the strategy worked.
  • Based on those concerns, and informed by the literature on the “teaching to learning” paradigm shift, we began to change our focus from what we were teaching to what and how our students were learning.
  • No one wants to cut costs if their reputation for quality will suffer, yet no one wants to fall off the cliff.
  • When quality is defined by those things that require substantial resources, efforts to reduce costs are doomed to failure
  • some of the best thinkers in higher education have urged us to define the quality in terms of student outcomes.
  • Faculty said they wanted to move away from giving lectures and then having students parrot the information back to them on tests. They said they were tired of complaining that students couldn’t write well or think critically, but not having the time to address those problems because there was so much material to cover. And they were concerned when they read that employers had reported in national surveys that, while graduates knew a lot about the subjects they studied, they didn’t know how to apply what they had learned to practical problems or work in teams or with people from different racial and ethnic backgrounds.
  • Our applications have doubled over the last decade and now, for the first time in our 134-year history, we receive the majority of our applications from out-of-state students.
  • We established what we call college-wide learning goals that focus on "essential" skills and attributes that are critical for success in our increasingly complex world. These include critical and analytical thinking, creativity, writing and other communication skills, leadership, collaboration and teamwork, and global consciousness, social responsibility and ethical awareness.
  • despite claims to the contrary, many of the factors that drive up costs add little value. Research conducted by Dennis Jones and Jane Wellman found that “there is no consistent relationship between spending and performance, whether that is measured by spending against degree production, measures of student engagement, evidence of high impact practices, students’ satisfaction with their education, or future earnings.” Indeed, they concluded that “the absolute level of resources is less important than the way those resources are used.”
  • After more than a year, the group had developed what we now describe as a low-residency, project- and competency-based program. Here students don’t take courses or earn grades. The requirements for the degree are for students to complete a series of projects, captured in an electronic portfolio,
  • students must acquire and apply specific competencies
  • Faculty spend their time coaching students, providing them with feedback on their projects and running two-day residencies that bring students to campus periodically to learn through intensive face-to-face interaction
  • After a year and a half, the evidence suggests that students are learning as much as, if not more than, those enrolled in our traditional business program
  • As the campus learns more about the demonstration project, other faculty are expressing interest in applying its design principles to courses and degree programs in their fields. They created a Learning Coalition as a forum to explore different ways to capitalize on the potential of the learning paradigm.
  • a problem-based general education curriculum
  • At the very least, finding innovative ways to lower costs without compromising student learning is wise competitive positioning for an uncertain future
  • the focus of student evaluations has changed noticeably. Instead of focusing almost 100% on the instructor and whether he/she was good, bad, or indifferent, our students' evaluations are now focusing on the students themselves - as to what they learned, how much they have learned, and how much fun they had learning.
    • Nils Peterson
       
      Gary Diigoed this article. This comment shines another light: the focus of the course eval shifted from the faculty member to the course and student learning when the focus shifted from teaching to learning.
  •  
    A must-read spotted by Jane Sherman -- I've highlighted, as usual, much of it.
Kimberly Green

Would You Like Credit With That Internship? - Students - The Chronicle of Higher Education - 0 views

  •  
    Students pay tuition to work for free ... unpaid internships are growing in this down economy, favoring the wealthy who can afford them. But internships are generally valuable for students, administrators say. The complementary courses involve journals, essays, oral presentations, or work portfolios. Independent studies lean toward academics.

    Companies often see academic credit as substitute compensation that qualifies interns as legally unpaid trainees and keeps them on their colleges' liability insurance. Advertisements specify: "Candidates must be able to receive academic credit." That makes some campus officials bristle. "What they're saying is holding the institution hostage," says Kathy L. Sims, director of career services at the University of California at Los Angeles. Employers don't know colleges' academic standards, she says. "It's really not their call whether their experience is creditworthy."

    Colleges have dealt with that quandary in various ways. Some, especially those with traditions of experiential learning, vet and monitor internships, enrolling students in courses designed to complement their real-world work. Others let professors sponsor independent studies based on internships. More and more have devised some form of noncredit recognition to try to satisfy employers without altering academic philosophies or making students pay tuition to work free.

    At Bates College, the game is up. "We're quite adamant about our refusal to play along," says James W. Hughes, a professor of economics. As chairman of the department eight years ago, he got dozens of calls from students, parents, and employers asking for credit for unpaid internships, mainly in the financial industry. "Why is it that we have to evaluate this experience," he says, "just so some multibillion-dollar bank can avoid paying $7.50 an hour?"

    But the law is vague, and arguably antiquated. In the for-profit sector, guidelines for legally unpaid internships come from a 1947 U
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU), and the Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased: there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: of what use is the VSA?
Gary Brown

The Future of Wannabe U. - The Chronicle Review - The Chronicle of Higher Education - 2 views

  • Alice didn't tell me about the topics of her research; instead she listed the number of articles she had written, where they had been submitted and accepted, the reputation of the journals, the data sets she was constructing, and how many articles she could milk from each data set.
  • colleges and universities have transformed themselves from participants in an audit culture to accomplices in an accountability regime.
  • higher education has inaugurated an accountability regime—a politics of surveillance, control, and market management that disguises itself as value-neutral and scientific administration.
  • ...7 more annotations...
  • A Wannabe administrator noted that the recipient had published well more than 100 articles. He never said why those articles mattered.
  • And all we have are numbers about teaching. And we don't know what the difference is between a [summary measure of] 7.3 and a 7.7 or an 8.2 and an 8.5."
  • The problem is that such numbers have no meaning. They cannot indicate the quality of a student's education.
  • Nor can the many metrics that commonly appear in academic (strategic) plans, like student credit hours per full-time-equivalent faculty member, or the percentage of classes with more than 50 students. Those productivity measures (for they are indeed productivity measures) might as well apply to the assembly-line workers who fabricate the proverbial widget, for one cannot tell what the metrics have to do with the supposed purpose of institutions of higher education—to create and transmit knowledge. That includes leading students to the possibility of a fuller life and an appreciation of the world around them and expanding their horizons.
  • But, like the fitness club's expensive cardio machines, a significant increase in faculty research, in the quality of student experiences (including learning), in the institution's service to its state, or in its standing among its peers may cost more than a university can afford to invest or would even dream of paying.
  • Such metrics are a speedup of the academic assembly line, not an intensification or improvement of student learning. Indeed, sometimes a boost in some measures, like an increase in the number of first-year students participating in "living and learning communities," may even detract from what students learn. (Wan U.'s pre-pharmacy living-and-learning community is so competitive that students keep track of one another's grades more than they help one another study. Last year one student turned off her roommate's alarm clock so that she would miss an exam and thus no longer compete for admission to the School of Pharmacy.)
  • Even metrics intended to indicate what students may have learned seem to have more to do with controlling faculty members than with gauging education. Take student-outcomes assessments, meant to be evaluations of whether courses have achieved their goals. They search for fault where earlier researchers would not have dreamed to look. When parents in the 1950s asked why Johnny couldn't read, teachers may have responded that it was Johnny's fault; they had prepared detailed lesson plans. Today student-outcomes assessment does not even try to discover whether Johnny attended class; instead it produces metrics about outcomes without considering Johnny's input.
  •  
    A good one to wrestle with.  It may be worth formulating distinctions we hold, and steering accordingly.
Nils Peterson

Change Magazine - The New Guys in Assessment Town - 0 views

  • if one of the institution’s general education goals is critical thinking, the system makes it possible to call up all the courses and programs that assess student performance on that outcome.
  • bringing together student learning outcomes data at the level of the institution, program, course, and throughout student support services so that “the data flows between and among these levels”
  • Like its competitors, eLumen maps outcomes vertically across courses and programs, but its distinctiveness lies in its capacity to capture what goes on in the classroom. Student names are entered into the system, and faculty use a rubric-like template to record assessment results for every student on every goal. The result is a running record for each student, available only to the course instructor (and in some cases to the students themselves, who can go to the system to get feedback on recent assessments).
    • Nils Peterson
       
      Sounds like the harvesting gradebook: assess student work and roll up.
    • Joshua Yeidel
       
      This system has some potential for formative use at the per-student level.
  • ...7 more annotations...
  • “I’m a little wary.  It seems as if, in addition to the assessment feedback we are already giving to students, we might soon be asked to add a data-entry step of filling in boxes in a centralized database for all the student learning outcomes. This is worrisome to those of us already struggling under the weight of all that commenting and essay grading.”
    • Nils Peterson
       
      It's either double work, or a failure to understand that the grading and the assessment can be the same activity. I suspect the former -- grading is being done with different metrics.
    • Joshua Yeidel
       
      I am in the unusual position of seeing many papers _after_ they have been graded by a wide variety of teachers. Many of these contain little "assessment feedback" -- many teachers focus on "correcting" the papers and finding some letter or number to assign as a value.
  • “This is where we see many institutions struggling,” Galvin says. “Faculty simply don’t have the time for a deeper involvement in the mechanics of assessment.” Many have never seen a rubric or worked with one, “so generating accurate, objective data for analysis is a challenge.”  
    • Nils Peterson
       
      Rather than faculty using the community to help with assessment, they are outsourcing to a paid assessor -- this is the result of undertaking this thinking while remaining at the institution-centric end of the spectrum we developed.
  • I asked about faculty pushback. “Not so much,” Galvin says, “not after faculty understand that the process is not intended to evaluate their work.”
    • Nils Peterson
       
      red flag
  • the annual reports required by this process were producing “heaps of paper” while failing to track trends and developments over time. “It’s like our departments were starting anew every year,” Chaplot says. “We wanted to find a way to house the data that gave us access to what was done in the past,” which meant moving from discrete paper reports to an electronic database.
    • Joshua Yeidel
       
      It's not clear whether the "database" is housing measurements, narratives and reflections, or all of the above.
  • “Can eLumen represent student learning in language? No, but it can quantify the number of boxes checked against the number of boxes not checked.”
  • developing a national repository of resources, rubrics, outcomes statements, and the like that can be reviewed and downloaded by users
    • Nils Peterson
       
      in building our repository we could well open-source these tools, no need to lock them up
  • “These solutions cement the idea that assessment is an administrative rather than an educational enterprise, focused largely on accountability. They increasingly remove assessment decision making from the everyday rhythm of teaching and learning and the realm of the faculty.
    • Nils Peterson
       
      Over-the-wall assessment; see the Transformative Assessment rubric for more detail.
Gary Brown

Online Evaluations Show Same Results, Lower Response Rate - Wired Campus - The Chronicl... - 1 views

  • Students give the same responses on paper as on online course evaluations but are less likely to respond to online surveys, according to a recent study.
  • The only meaningful difference between student ratings completed online and on paper was that students who took online surveys gave their professors higher ratings for using educational technology to promote learning.
  • Seventy-eight percent of students enrolled in classes with paper surveys responded to them, but  only 53 percent of students enrolled in classes with online surveys responded.
  • ...2 more annotations...
  • "If you have lower response rates, you're less inclined to make summative decisions about a faculty member's performance,"
  • While the majority of instructors still administer paper surveys, the number using online surveys increased from 1.08 percent in 2002 to 23.23 percent in 2008.
  •  
    replication of our own studies
Gary Brown

Empowerment Evaluation - 1 views

  • Empowerment Evaluation in Stanford University's School of Medicine
  • Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes.
  • It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform.
  • ...8 more annotations...
  • The steps of empowerment evaluation
  • designating a “critical friend” to communicate areas of potential improvement,
  • collecting evaluation data,
  • encouraging a cycle of reflection and action
  • establishing a culture of evidence
  • developing reflective educational practitioners.
  • cultivating a community of learners
  • yearly cycles of improvement at the Stanford University School of Medicine
  •  
    The findings were presented in Academic Medicine, a medical education journal, earlier this year
Gary Brown

What's Wrong With the American University System - Culture - The Atlantic - 3 views

  • But when the young superstar sat down with the department chair, he seemed to have only one goal: to land a tenure-track position that involved as many sabbaticals and as little teaching as possible
  • Hacker and his coauthor, New York Times writer Claudia Dreifus, use this cautionary tale to launch their new book, a fierce critique of modern academia called Higher Education? "The question mark in our title," they write, "is the key to this book." To their minds, little of what takes place on college campuses today can be considered either "higher" or "education."
  • They blame a system that favors research over teaching and vocational training over liberal arts.
  • ...10 more annotations...
  • Tenure, they argue, does anything but protect intellectual freedom
  • Schools get status by bringing on professors who are star researchers, star scholars. That's all we really know about Caltech or MIT or Stanford. We don't really know about the quality of undergraduate teaching at any of these places. And it's the students who suffer.
  • Claudia and I were up at Harvard talking to students, and they said they get nothing from their classes, but that doesn't matter. They're smart already—they can breeze through college. The point is that they're going to be Harvard people when they come out.
  • So tenure is, in fact, the enemy of spontaneity, the enemy of intellectual freedom.
  • Good teaching can't be quantified at the college level.
  • For instance, Evergreen College, a sweet little state school in Olympia, Washington. We spent three days there and it was fantastic. They don't give grades, and they don't have academic departments. There are no faculty rankings. Almost all the classes we saw were taught by two professors—say, one from philosophy and one from psychology, teaching jointly on Henry and William James. Even though they don't give grades, the professors write out long evaluations for students. And the students have no problem getting into graduate schools.
  • I like Missouri Western State. It's a third-tier university, but the faculty realize they're going to stay there, they're not going to get hired away by other colleges, so they pitch in and take teaching seriously. At a school like that, you have a decent chance of finding a mentor who will write you a strong recommendation, better than you would at Harvard.
  • We believe the current criteria for admissions—particularly the SAT—are just so out of whack. It's like No Child Left Behind. It really is. It's one of the biggest crimes that's ever been perpetrated.
  • Professor X. He argued that some students just aren't ready for college. What's your view on that? Our view is that the primary obligation belongs to the teacher. Good teaching is not just imparting knowledge, like pouring milk into a jug. It's the job of the teacher to get students interested and turned on no matter what the subject is. Every student can be turned on if teachers really engage in this way. We saw it at Evergreen and other places that have this emphasis.
  • This is the hand I was dealt this semester. This is my job." Some people say to me, "Your students at Queens, are they any good?" I say, "I make them good." Every student is capable of college. I know some people have had difficult high school educations. But if you have good teachers who really care, it's remarkable how you can make up the difference.
  •  
    In case you haven't already seen this.  While I don't deny that higher education needs attention, I personally wish there'd be far more attention paid to lower education and regressive education (my own term for redressing and improving the education of all U.S. citizens).  We are in the process of destroying our country and our world.  Education is at the very heart of any solution.
  •  
    More of the discussion in the news--the Atlantic
Nils Peterson

Half an Hour: Open Source Assessment - 0 views

  • When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.
    • Nils Peterson
       
      I remembered this Downes post on the way back from HASTAC. It is some of the roots of our Spectrum I think.
  • The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.
  • In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent.Designers of learning resources, for example, have only the vaguest of indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.
    • Nils Peterson
       
      assumes a test as the form of assessment, rather than something more open ended.
  • ...8 more annotations...
  • As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.
    • Nils Peterson
       
      See our Where are you on the spectrum, Assessment is locked vs open
  • Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.
  • there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes)
  • What constitutes achievement in a field? What constitutes, for example, 'being a physicist'?
  • This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives.Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them.Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.
  • We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader.
  • The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments resemble more the development of skills rather than of creations, accomplishments will resemble more the completion of tasks, like 'quests' or 'levels' in online games, say.Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles').
  • In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less define community, a person may achieve a composite score called a 'reputation'. This will vary from community to community.
  •  
    Fine piece, transformative. "were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources."
Theron DesRosier

Education Data Model (National Forum on Education Statistics). Strategies for building ... - 0 views

  •  
    "The National Education Data Model is a conceptual but detailed representation of the education information domain focused at the student, instructor and course/class levels. It delineates the relationships and interdependencies between the data elements necessary to document, operate, track, evaluate, and improve key aspects of an education system. The NEDM strives to be a shared understanding among all education stakeholders as to what information needs to be collected and managed at the local level in order to enable effective instruction of students and superior leadership of schools. It is a comprehensive, non-proprietary inventory and a map of education information that can be used by schools, LEAs, states, vendors, and researchers to identify the information required for teaching, learning, administrative systems, and evaluation of education programs and approaches. "
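The data-model idea above can be made concrete with a toy sketch. This is an illustrative assumption, not the NEDM's actual element names or structure: a few dataclasses capturing the student, instructor, and course/class levels and the relationships between them.

```python
# Hedged sketch of the student/instructor/course relationships the NEDM
# describes. The real model is far more detailed; field names here are
# illustrative, not NEDM element names.
from dataclasses import dataclass, field


@dataclass
class Student:
    student_id: str
    name: str


@dataclass
class Instructor:
    instructor_id: str
    name: str


@dataclass
class CourseSection:
    course_id: str
    title: str
    instructor: Instructor
    roster: list = field(default_factory=list)  # enrolled Students

    def enroll(self, student: Student) -> None:
        self.roster.append(student)


# The interdependencies the quote mentions show up as object references:
teacher = Instructor("i-01", "A. Smith")
section = CourseSection("c-101", "Algebra I", teacher)
section.enroll(Student("s-001", "Johnny"))
```

A shared model like this is what lets schools, LEAs, states, and vendors exchange the same records without re-mapping fields each time.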
Joshua Yeidel

Digication :: NCCC Art Department Program Evaluation :: Purpose of Evaluation - 0 views

  •  
    An eportfolio for program evaluation by the Northwest Connecticut Community College Art Department. Slick, well-organized, and pretty, using Digication as a platform and host. A fine portfolio, which could well be a model for our programs, except that there is not a single direct measure of student learning outcomes.
Gary Brown

Don't Shrug Off Student Evaluations - The Chronicle Review - The Chronicle of Higher Ed... - 0 views

  • On their most basic level, student evaluations are important because they open the doors of our classrooms. It is one of the remarkable ironies of academe that while we teachers seek to open the minds of our students—to shine a light on hypocrisy, illusion, corruption, and distortion; to tell the truth of our disciplines as we see it—some of us want that classroom door to be closed to the outside world. It is as if we were living in some sort of academic version of the Da Vinci code: Only insiders can know the secret handshake.
  •  
    A Chronicle version that effectively surveys the issues. Maybe nothing new, but a few nuggets.
Joshua Yeidel

Scholar Raises Doubts About the Value of a Test of Student Learning - Research - The Ch... - 3 views

  • Beginning in 2011, the 331 universities that participate in the Voluntary System of Accountability will be expected to publicly report their students' performance on one of three national tests of college-level learning.
  • But at least one of those three tests—the Collegiate Learning Assessment, or CLA—isn't quite ready to be used as a tool of public accountability, a scholar suggested here on Tuesday during the annual meeting of the Association for Institutional Research.
  • Students' performance on the test was strongly correlated with how long they spent taking it.
  • ...6 more annotations...
  • Besides the CLA, which is sponsored by the Council for Aid to Education, other tests that participants in the voluntary system may use are the Collegiate Assessment of Academic Proficiency, from ACT Inc., and the Measure of Academic Proficiency and Progress, offered by the Educational Testing Service.
  • The test has sometimes been criticized for relying on a cross-sectional system rather than a longitudinal model, in which the same students would be tested in their first and fourth years of college.
  • there have long been concerns about just how motivated students are to perform well on the CLA.
  • Mr. Hosch suggested that small groups of similar colleges should create consortia for measuring student learning. For example, five liberal-arts colleges might create a common pool of faculty members that would evaluate senior theses from all five colleges. "That wouldn't be a national measure," Mr. Hosch said, "but it would be much more authentic."
  • Mr. Shavelson said. "The challenge confronting higher education is for institutions to address the recruitment and motivation issues if they are to get useful data. From my perspective, we need to integrate assessment into teaching and learning as part of students' programs of study, thereby raising the stakes a bit while enhancing motivation of both students and faculty
  • "I do agree with his central point that it would not be prudent to move to an accountability system based on cross-sectional assessments of freshmen and seniors at an institution," said Mr. Arum, who is an author, with Josipa Roksa, of Academically Adrift: Limited Learning on College Campuses, forthcoming from the University of Chicago Press
  •  
    CLA debunking, but the best item may be the forthcoming book on "limited learning on College Campuses."
  •  
    "Micheal Scriven and I spent more than a few years trying to apply his multiple-ranking item tool (a very robust and creative tool, I recommend it to others when the alternative is multiple-choice items) to the assessment of critical thinking in health care professionals. The result might be deemed partially successful, at best. I eventually abandoned the test after about 10,000 administrations because the scoring was so complex we could not place it in non-technical hands."
  •  
    In comments on an article about CLA, Scriven's name comes up...
Nils Peterson

National Institute for Learning Outcomes Assessment - 1 views

  • Of the various ways to assess student learning outcomes, many faculty members prefer what are called “authentic” approaches that document student performance during or at the end of a course or program of study.  Authentic assessments typically ask students to generate rather than choose a response to demonstrate what they know and can do.  In their best form, such assessments are flexible and closely aligned with teaching and learning processes, and represent some of students' more meaningful educational experiences.  In this paper, assessment experts Trudy Banta, Merilee Griffin, Theresa Flateby, and Susan Kahn describe the development of several promising authentic assessment approaches.
  • Educators and policy makers in postsecondary education are interested in assessment processes that improve student learning, and at the same time provide comparable data for the purpose of demonstrating accountability.
  • First, ePortfolios provide an in-depth, long-term view of student achievement on a range of skills and abilities instead of a quick snapshot based on a single sample of learning outcomes. Second, a system of rubrics used to evaluate student writing and depth of learning has been combined with faculty learning and team assessments, and is now being used at multiple institutions. Third, online assessment communities link local faculty members in collaborative work to develop shared norms and teaching capacity, and then link local communities with each other in a growing system of assessment.
    • Nils Peterson
       
      Hey, does this sound familiar? I'm guessing the portfolios are not anywhere on the Internet, but we're otherwise in good company.
  • ...1 more annotation...
  • Three Promising Alternatives for Assessing College Students' Knowledge and Skills
    • Nils Peterson
       
      I'm not sure they are 'alternatives' so much as 3 elements we would combine into a single strategy
Corinna Lo

News: The Challenge of Comparability - Inside Higher Ed - 0 views

  •  
    But when it came to defining sets of common learning outcomes for specific degree programs -- Transparency by Design's most distinguishing characteristic -- commonality was hard to come by. Questions to apply to any institution could be: 1) For any given program, what specific student learning outcomes are graduates expected to demonstrate? 2) By what standards and measurements are students being evaluated? 3) How well have graduating students done relative to these expectations? Comparability of results (the 3rd question) depends on transparency of goals and expectations (the 1st question) and transparency of measures (the 2nd question).
Nils Peterson

How To Crowdsource Grading | HASTAC - 0 views

  • My colleagues and I at the University of Maine have pursued a similar course with The Pool, an online environment for sharing art and code that invites students to evaluate each other at various stages of their projects, from intent to approach to release.
    • Nils Peterson
       
      This is feedback on our Harvesting Gradebook and Crowdsourcing ideas. The Pool seems to be an implementation of the feedback mechanism with some ideas about reputation.
  • Like Slashdot's karma system, The Pool entrusts students who have contributed good work in the past with greater power to rate other students. In general students at U-Me have responded responsibly to this ethic; it may help that students are sometimes asked to evaluate students in other classes,
    • Nils Peterson
       
      While there is a notion of karma and peer feedback, there does not seem to be a notion of bringing in outside expertise, nor, if it were to come in, of tracking its role.
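The karma mechanic described above can be sketched in a few lines. This is a hedged illustration of the general idea, not The Pool's or Slashdot's actual implementation; the function names and the karma-update rule are assumptions. The point is simply that ratings from high-karma peers count for more, and that contributing well-rated work raises one's own rating power.

```python
# Hedged sketch of a karma-weighted peer-rating scheme.

def weighted_score(ratings):
    """Combine (karma, rating) pairs into a karma-weighted mean."""
    total_karma = sum(k for k, _ in ratings)
    if total_karma == 0:
        return 0.0
    return sum(k * r for k, r in ratings) / total_karma


def update_karma(karma, own_work_score, rate=0.5):
    """Raters whose own work scores well gain influence over time
    (the update rule here is an arbitrary illustrative choice)."""
    return karma + rate * own_work_score


# A project rated by three peers with different karma: the
# high-karma rater's opinion dominates the composite score.
ratings = [(5.0, 4.0), (1.0, 2.0), (2.0, 3.0)]
score = weighted_score(ratings)
```

Nothing in this mechanic distinguishes an outside expert from a classmate, which is exactly the gap the annotation above points at: expertise would have to enter as just another karma value.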
Gary Brown

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • Others say those who take the test have little motivation to do well, which makes it tough to draw conclusions from their performance.
  • "Everything that No Child Left Behind signified during the Bush administration—we operate 180 degrees away from that," says Roger Benjamin, president of the Council for Aid to Education, which developed and promotes the CLA. "We don't want this to be a high-stakes test. We're putting a stake in the ground on classic liberal-arts issues. I'm willing to rest my oar there. These core abilities, these higher-order skills, are very important, and they're even more important in a knowledge economy where everyone needs to deal with a surplus of information." Only an essay test, like the CLA, he says, can really get at those skills.
  • "The CLA is really an authentic assessment process," says Pedro Reyes, associate vice chancellor for academic planning and assessment at the University of Texas system.
  • ...20 more annotations...
  • "The Board of Regents here saw that it would be an important test because it measures analytical ability, problem-solving ability, critical thinking, and communication. Those are the skills that you want every undergraduate to walk away with." (Other large systems that have embraced the CLA include California State University and the West Virginia system.)
  • value added
  • We began by administering a retired CLA question, a task that had to do with analyzing crime-reduction strategies,
  • performance task that mirrors the CLA
  • Mr. Ernsting and Ms. McConnell are perfectly sincere about using CLA-style tasks to improve instruction on their campuses. But at the same time, colleges have a less high-minded motive for familiarizing students with the CLA style: It just might improve their scores when it comes time to take the actual test.
  • by 2012, the CLA scores of more than 100 colleges will be posted, for all the world to see, on the "College Portrait" Web site of the Voluntary System of Accountability, an effort by more than 300 public colleges and universities to provide information about life and learning on their campuses.
  • If familiarizing students with CLA-style tasks does raise their scores, then the CLA might not be a pure, unmediated reflection of the full range of liberal-arts skills. How exactly should the public interpret the scores of colleges that do not use such training exercises?
  • Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University at Indianapolis, believes it is a serious mistake to publicly release and compare scores on the test. There is too much risk, she says, that policy makers and the public will misinterpret the numbers.
  • most colleges do not use a true longitudinal model: That is, the students who take the CLA in their first year do not take it again in their senior year. The test's value-added model is therefore based on a potentially apples-and-oranges comparison.
  • freshman test-takers' scores are assessed relative to their SAT and ACT scores, and so are senior test-takers' scores. For that reason, colleges cannot game the test by recruiting an academically weak pool of freshmen and a strong pool of seniors.
  • students do not always have much motivation to take the test seriously
  • seniors, who are typically recruited to take the CLA toward the end of their final semester, when they can already taste the graduation champagne.
  • Of the few dozen universities that had already chosen to publish CLA data on that site, roughly a quarter of the reports appeared to include erroneous descriptions of the year-to-year value-added scores.
  • It is clear that CLA scores do reflect some broad properties of a college education.
  • Students' CLA scores improved if they took courses that required a substantial amount of reading and writing. Many students didn't take such courses, and their CLA scores tended to stay flat.
  • Colleges that make demands on students can actually develop their skills on the kinds of things measured by the CLA.
  • Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field.
  • Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements.
  • The CLA is used at more than 400 colleges
  • Since its debut a decade ago, it has been widely praised as a sophisticated alternative to multiple-choice tests
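The value-added approach the article describes (scoring each cohort relative to its SAT/ACT-predicted performance, rather than following the same students over four years) can be illustrated with a toy regression-residual computation. This is a hedged sketch of the general idea, not CAE's actual statistical model:

```python
# Hedged sketch of cross-sectional "value added": fit an SAT -> score
# expectation on freshmen, then ask whether seniors score above what
# their SATs alone would predict.

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx


def value_added(freshman_sat, freshman_cla, senior_sat, senior_cla):
    """Mean senior residual from the freshman SAT-based expectation."""
    slope, intercept = fit_line(freshman_sat, freshman_cla)
    residuals = [cla - (slope * sat + intercept)
                 for sat, cla in zip(senior_sat, senior_cla)]
    return sum(residuals) / len(residuals)


# Toy cohorts: seniors outperform their SAT-predicted scores by 80.
va = value_added([1000, 1100, 1200], [1000, 1050, 1100],
                 [1000, 1100, 1200], [1080, 1130, 1180])
```

Because both cohorts are scored against their own entrance-exam expectations, recruiting weak freshmen and strong seniors does not move the number; the apples-and-oranges worry in the article is that the two cohorts are still different people.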
Gary Brown

Outsourced Grading, With Supporters and Critics, Comes to College - Teaching - The Chro... - 3 views

shared by Gary Brown on 06 Apr 10
  • Lori Whisenant knows that one way to improve the writing skills of undergraduates is to make them write more. But as each student in her course in business law and ethics at the University of Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it became clear to her that what would really help them was consistent, detailed feedback.
  • She outsourced assignment grading to a company whose employees are mostly in Asia.
  • The graders working for EduMetry, based in a Virginia suburb of Washington, are concentrated in India, Singapore, and Malaysia, along with some in the United States and elsewhere. They do their work online and communicate with professors via e-mail.
  • ...8 more annotations...
  • The company argues that professors freed from grading papers can spend more time teaching and doing research.
  • "This is what they do for a living," says Ms. Whisenant. "We're working with professionals." 
  • Assessors are trained in the use of rubrics, or systematic guidelines for evaluating student work, and before they are hired are given sample student assignments to see "how they perform on those," says Ravindra Singh Bangari, EduMetry's vice president of assessment services.
  • Professors give final grades to assignments, but the assessors score the papers based on the elements in the rubric and "help students understand where their strengths and weaknesses are," says Tara Sherman, vice president of client services at EduMetry. "Then the professors can give the students the help they need based on the feedback."
  • The assessors use technology that allows them to embed comments in each document; professors can review the results (and edit them if they choose) before passing assignments back to students.
  • But West Hills' investment, which it wouldn't disclose, has paid off in an unexpected way. The feedback from Virtual-TA seems to make the difference between a student's remaining in an online course and dropping out.
  • Because Virtual-TA provides detailed comments about grammar, organization, and other writing errors in the papers, students have a framework for improvement that some instructors may not be able to provide, she says.
  • "People need to get past thinking that grading must be done by the people who are teaching," says Mr. Rajam, who is director of assurance of learning at George Washington University's School of Business. "Sometimes people get so caught up in the mousetrap that they forget about the mouse."