Group items tagged: Student Evaluation of Faculty

George Mehaffy

How 'Flipping' the Classroom Can Improve the Traditional Lecture - Teaching - The Chron... - 0 views

  •  
    The Chronicle of Higher Education February 19, 2012 How 'Flipping' the Classroom Can Improve the Traditional Lecture By Dan Berrett Andrew P. Martin loves it when his lectures break out in chaos. It happens frequently, when he asks the 80 students in his evolutionary-biology class at the University of Colorado at Boulder to work in small groups to solve a problem, or when he asks them to persuade one another that the answer they arrived at before class is correct. When they start working together, his students rarely stay in their seats, which are bolted to the floor. Instead they gather in the hallway or in the aisles, or spill toward the front of the room, where the professor typically stands. Mr. Martin, a professor of ecology and evolutionary biology, drops in on the discussions, asking and answering questions, and hearing where students are stumped. "Students are effectively educating each other," he says of the din that overtakes his room. "It means they're in control, and not me." [Photo: Benjamin Rasmussen for The Chronicle. Students discuss the relationship between finches' beak sizes and survival rates during Andrew Martin's evolutionary-biology class at the U. of Colorado at Boulder.] Such moments of chaos are embraced by advocates of a teaching technique called "flipping." As its name suggests, flipping describes the inversion of expectations in the traditional college lecture. It takes many forms, including interactive engagement, just-in-time teaching (in which students respond to Web-based questions before class, and the professor uses this feedback to inform his or her teaching), and peer instruction. But the techniques all share the same underlying imperative: Students cannot passively receive material in class, which is one reason some students dislike flipping. Instead they gather the information largely outside of class, by reading, watching recorded lectures, or list
George Mehaffy

Scholars of Education Question the Limits of 'Academically Adrift' - Faculty - The Chro... - 0 views

  •  
    "February 13, 2011 Scholars Question New Book's Gloom on Education Doubts are raised about study behind 'Academically Adrift' Scholars of Education Question the Limits of 'Academically Adrift' By David Glenn It has been a busy month for Richard Arum and Josipa Roksa. In mid-January, the University of Chicago Press published their gloomy account of the quality of undergraduate education, Academically Adrift: Limited Learning on College Campuses. Since then the two sociologists have been through a torrent of radio interviews and public lectures. In the first days after the book's release, they had to handle a certain amount of breathless reaction, both pro and con, from people who hadn't actually read it. But now that more people in higher education have had time to digest their arguments, sophisticated conversations are developing about the study's lessons and about its limitations. Many college leaders are praising the ambition of Mr. Arum and Ms. Roksa's project, and some say they hope the book will focus new attention on the quality of undergraduate instruction. When the authors spoke last month at the annual meeting of the Association of American Colleges and Universities, in San Francisco, the ballroom far overfilled its capacity, and they were introduced as "rock stars." But three lines of skepticism have also emerged. First, some scholars say that Academically Adrift's heavy reliance on the Collegiate Learning Assessment, a widely used essay test that measures reasoning and writing skills, limits the value of the study. Second, some people believe the authors have not paid enough attention to the deprofessionalization of faculty work and the economic strains on colleges, factors that the critics say have played significant roles in the erosion of instructional quality. Third, some readers challenge the authors' position that the federal government should provide far more money to study the quality of college learning, but should not otherwise do mu
George Mehaffy

2 Studies Shed New Light on the Meaning of Course Evaluations - Faculty - The Chronicle... - 1 views

  •  
    "December 19, 2010 2 Studies Shed New Light on the Meaning of Course Evaluations By David Glenn Under the mandate of a recently enacted state law, the Web sites of public colleges and universities in Texas will soon include student-evaluation ratings for each and every undergraduate course. Bored and curious people around the planet-steelworkers in Ukraine, lawyers in Peru, clerical workers in India-will be able, if they're so inclined, to learn how students feel about Geology 3430 at Texas State University at San Marcos. But how should the public interpret those ratings? Are student-course evaluations a reasonable gauge of quality? Are they correlated with genuine measures of learning? And what about students who choose not to fill out the forms-does their absence skew the data? Two recent studies shed new light on those old questions. In one, three economists at the University of California at Riverside looked at a pool of more than 1100 students who took a remedial-mathematics course at a large university in the West (presumably Riverside) between 2007 and 2009. According to a working paper describing the study, the course was taught by 33 different instructors to 97 different sections during that period. The instructors had a good deal of freedom in their teaching and grading practices-but every student in every section had to pass a common high-stakes final exam, which they took after filling out their course evaluations. That high-stakes end-of-the-semester test allowed the Riverside economists to directly measure student learning. The researchers also had access to the students' pretest scores from the beginning of the semester, so they were able to track each student's gains. Most studies of course evaluations have lacked such clean measures of learning. Grades are an imperfect tool, as students' course ratings are usually strongly correlated with their grades in the course. Because of that powerful correlation, some studies have suggested that
George Mehaffy

Let's Improve Learning. OK, but How? - Commentary - The Chronicle of Higher Education - 0 views

  •  
    "December 31, 2011 Let's Improve Learning. OK, but How? By W. Robert Connor Does American higher education have a systematic way of thinking about how to improve student learning? It would certainly be useful, especially at a time when budgets are tight and the pressure is on to demonstrate better results. Oh, there's plenty of discussion-bright ideas, old certainties, and new approaches-and a rich discourse about innovation, reinvention, and transformation. But the most powerful ideas about improving learning are often unspoken. Amid all the talk about change, old assumptions exert their continuing grasp. For example, most of us assume that expanding the number of fields and specialties in the curriculum (and of faculty to teach them), providing more small classes, and lowering teaching loads (and, hence, lowering student-faculty ratios) are inherently good things. But while many of those ideas are plausible, few have been rigorously evaluated. So maybe it's time to stop relying on assumptions about improving learning and start finding out what really works best. A genuine theory of change, as such a systematic evaluation of effectiveness is sometimes called, would be grounded in knowledge about how students learn, and in the best way to put that knowledge to work. The theory should also be educationally robust; that is, it should not just help colleges expose students to certain subject matter, but also challenge institutions to help students develop the long-lasting survival skills needed in a time of radical and often unpredictable change. And it must also have its feet on the ground, with a sure footing in financial realities. Above all, those who would develop a truly systematic way of thinking about and creating change must be able to articulate their purpose. Given the great diversity of institutional types, student demographics, history, and mission among American colleges and universities, it's hard to discern a shared sense of purpose. But when f
George Mehaffy

Invisible Gorillas Are Everywhere - Advice - The Chronicle of Higher Education - 0 views

  •  
    "January 23, 2012 Invisible Gorillas Are Everywhere By William Pannapacker By now most everyone has heard about an experiment that goes something like this: Students dressed in black or white bounce a ball back and forth, and observers are asked to keep track of the bounces to team members in white shirts. While that's happening, another student dressed in a gorilla suit wanders into their midst, looks around, thumps his chest, then walks off, apparently unseen by most observers because they were so focused on the bouncing ball. Voilà: attention blindness. The invisible-gorilla experiment is featured in Cathy Davidson's new book, Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn (Viking, 2011). Davidson is a founder of a nearly 7,000-member organization called Hastac, or the Humanities, Arts, Sciences, and Technology Advanced Collaboratory, that was started in 2002 to promote the use of digital technology in academe. It is closely affiliated with the digital humanities and reflects that movement's emphasis on collaboration among academics, technologists, publishers, and librarians. Last month I attended Hastac's fifth conference, held at the University of Michigan at Ann Arbor. Davidson's keynote lecture emphasized that many of our educational practices are not supported by what we know about human cognition. At one point, she asked members of the audience to answer a question: "What three things do students need to know in this century?" Without further prompting, everyone started writing down answers, as if taking a test. While we listed familiar concepts such as "information literacy" and "creativity," no one questioned the process of working silently and alone. And noticing that invisible gorilla was the real point of the exercise. Most of us are, presumably, the products of compulsory educational practices that were developed during the Industrial Revolution. And the way most of us teach is a relic of the s
George Mehaffy

Measuring College-Teacher Quality - Brainstorm - The Chronicle of Higher Education - 1 views

  •  
    "Measuring College-Teacher Quality January 13, 2011, 10:40 am By Kevin Carey David Glenn's Chronicle article on using course sequence grades to estimate teacher quality in higher education illustrates a crucial flaw in the way education researchers often think about the role of evidence in education practice. The article cites a recent study of Calculus grades in the Air Force Academy. All students there are required to take Calculus I and II. They're randomly assigned to instructors who use the same syllabus. Students all take the same final, which is collectively graded by a pool of instructors. These unusual circumstances control for many external factors that might otherwise complicate an analysis of teacher quality. The researchers found that students taught by permanent faculty got worse grades in Calculus I than students taught by short-term faculty. But the pattern reversed when those students went on to Calculus II-those taught by full-time faculty earned better grades in the more advanced course, suggesting that short-term faculty might have been "teaching to the test" at the expense of deeper conceptual understanding. Students taught by full-time faculty were also more likely to enroll in upper-level math in their junior and senior years. In addition, the study found that student course evaluations were positively correlated with grades in Calculus I but negatively correlated with grades in Calculus II."
George Mehaffy

Are Undergraduates Actually Learning Anything? - Commentary - The Chronicle of Higher E... - 0 views

  •  
    "January 18, 2011 Are Undergraduates Actually Learning Anything? By Richard Arum and Josipa Roksa Drawing on survey responses, transcript data, and results from the Collegiate Learning Assessment (a standardized test taken by students in their first semester and at the end of their second year), Richard Arum and Josipa Roksa concluded that a significant percentage of undergraduates are failing to develop the broad-based skills and knowledge they should be expected to master. Here is an excerpt from Academically Adrift: Limited Learning on College Campuses (University of Chicago Press), their new book based on those findings. "With regard to the quality of research, we tend to evaluate faculty the way the Michelin guide evaluates restaurants," Lee Shulman, former president of the Carnegie Foundation for the Advancement of Teaching, recently noted. "We ask, 'How high is the quality of this cuisine relative to the genre of food? How excellent is it?' With regard to teaching, the evaluation is done more in the style of the Board of Health. The question is, 'Is it safe to eat here?'" Our research suggests that for many students currently enrolled in higher education, the answer is: not particularly. Growing numbers of students are sent to college at increasingly higher costs, but for a large proportion of them the gains in critical thinking, complex reasoning, and written communication are either exceedingly small or empirically nonexistent. At least 45 percent of students in our sample did not demonstrate any statistically significant improvement in Collegiate Learning Assessment [CLA] performance during the first two years of college. [Further study has indicated that 36 percent of students did not show any significant improvement over four years.] While these students may have developed subject-specific skills that were not tested for by the CLA, in terms of general analytical competencies assessed, large numbers of U.S. college students can be accurately described
George Mehaffy

A Final Word on the Presidents' Student-Learning Alliance - Measuring Stick - The Chron... - 0 views

  •  
    "A Final Word on the Presidents' Student-Learning Alliance November 22, 2010, 1:36 pm By David Glenn Last week we published a series of comments (one, two, three) on the Presidents' Alliance for Excellence in Student Learning and Accountability. Today we're pleased to present a reply from David C. Paris, executive director of the presidential alliance's parent organization, the New Leadership Alliance for Student Learning and Accountability: I was very pleased to see the responses to the announcement of the Presidents' Alliance as generally welcoming ("commendable," "laudatory initiative," "applaud") the shared commitment of these 71 founding institutions to do more-and do it publicly and cooperatively-with regard to gathering, reporting, and using evidence of student learning. The set of comments is a fairly representative sample of positions on the issues of evidence, assessment, and accountability. We all agree that higher education needs to do more to develop evidence of student learning, to use it to measure and improve our work, and to be far more transparent and accountable in reporting the results. The comments suggest different approaches-and these differences are more complementary than contradictory-to where we should focus our efforts and how change will occur. I'd suggest that none of us has the answer, and while each of these approaches faces obstacles, each can contribute to progress in this work. For William Chace, Cliff Adelman, and Michael Poliakoff, the focus should be on some overarching measures or concepts that will clearly tell us, our students, and the public how well we are doing. Obtaining agreement on a "scale and index," or the appropriate "active verbs" describing competence, or dashboards and other common reporting mechanisms will drive change by establishing a common framework for evaluation. Josipa Roksa, on the other hand, suggests that real change will only happen from the ground up. Facu
George Mehaffy

'Trust Us' Won't Cut It Anymore - Commentary - The Chronicle of Higher Education - 0 views

  •  
    January 18, 2011 'Trust Us' Won't Cut It Anymore By Kevin Carey "Trust us." That's the only answer colleges ever provide when asked how much their students learn. Sure, they acknowledge, it's hard for students to find out what material individual courses will cover. So most students choose their courses based on a paragraph in the catalog and whatever secondhand information they can gather. No, there isn't an independent evaluation process. No standardized tests, no external audits, no publicly available learning evidence of any kind. Yes, there's been grade inflation. A-minus is the new C. Granted, faculty have every incentive to neglect their teaching duties while chasing tenure-if they're lucky enough to be in the chase at all. Meanwhile the steady adjunctification of the professoriate proceeds. Still, "trust us," they say: Everyone who walks across our graduation stage has completed a rigorous course of study. We don't need to systematically evaluate student learning. Indeed, that would violate the academic freedom of our highly trained faculty, each of whom embodies the proud scholarly traditions of this venerable institution. Now we know that those are lies."
John Hammang

Texas A&M to Revise Controversial Faculty Rewards Based on Student Evaluations - Facult... - 0 views

  •  
    The Texas A&M System has modified its "Faculty Appreciation Awards" in response to concerns raised by the faculty. Awards will now be based on a two-question student opinion survey asking about students' best professor this semester and their best professor ever.
George Mehaffy

Ranking the Rankings - Innovations - The Chronicle of Higher Education - 0 views

  •  
    "Ranking the Rankings August 23, 2010, 9:01 am By Richard Kahlenberg If it's back to school, it must be time for the publication of college rankings. In recent days, U.S. News & World Report released its much-discussed rankings of U.S. colleges and universities, and the Shanghai Jiao Tong University declared its ranking of world universities. As my Innovations Blog colleague Richard Vedder noted recently, Forbes has its own rankings to compete with U.S. News, and Vedder (who helped Forbes come up with its methodology) argues that Forbes's is better-that is, ranks higher. My good friend Ben Wildavsky, a former education editor at U.S. News, discusses the proliferation of rankings in his fascinating new book, The Great Brain Race: How Global Universities Are Reshaping the World. Wildavsky devotes a lengthy chapter to global rankings and compares and contrasts the two main international rankings-the Shanghai rankings, which look primarily at science research (counting factors such as the number of alumni and faculty who have Nobel Prizes and citations in science journals)-with those of the Times Higher Education Supplement, which heavily weights academic peer evaluations. Despite their fundamental differences, Wildavsky notes, in 2008 the top 10 in the two lists had seven overlapping institutions. My own favorite in the rankings game is The Washington Monthly, which today released the 2010 rankings of "What Can Colleges Do for the Country." While other guides "help students and parents decide how to spend their tuition dollars wisely," the Monthly says its goal is "to tell citizens and policy makers which colleges [are] spending their tax dollars wisely." The Monthly ranks colleges and universities based on whether they promote social mobility, research, and service. As I've noted elsewhere, one of the intriguing findings of the Monthly's social mobility ranking is that public university systems where affirmative action by race has bee