CTLT and Friends: Group items matching "faculty" in title, tags, annotations, or URL
Gary Brown

A Final Word on the Presidents' Student-Learning Alliance - Measuring Stick - The Chronicle of Higher Education - 1 views

  • I was very pleased to see the responses to the announcement of the Presidents’ Alliance as generally welcoming (“commendable,” “laudatory initiative,” “applaud”) the shared commitment of these 71 founding institutions to do more—and do it publicly and cooperatively—with regard to gathering, reporting, and using evidence of student learning.
  • establishing institutional indicators of educational progress that could be valuable in increasing transparency may not suggest what needs changing to improve results
  • As Adelman’s implied critique of the CLA indicates, we may end up with an indicator without connections to practice.
  • The Presidents’ Alliance’s focus on and encouragement of institutional efforts is important to making these connections and steps in a direct way supporting improvement.
  • Second, it is hard to disagree with the notion that ultimately evidence-based improvement will occur only if faculty members are appropriately trained and encouraged to improve their classroom work with undergraduates.
  • Certainly there has to be some connection between and among various levels of assessment—classroom, program, department, and institution—in order to have evidence that serves both to aid improvement and to provide transparency and accountability.
  • Presidents’ Alliance is setting forth a common framework of “critical dimensions” that institutions can use to evaluate and extend their own efforts, efforts that would include better reporting for transparency and accountability and greater involvement of faculty.
  • there is wide variation in where institutions are in their efforts, and we have a long way to go. But what is critical here is the public commitment of these institutions to work on their campuses and together to improve the gathering and reporting of evidence of student learning and, in turn, using evidence to improve outcomes.
  • The involvement of institutions of all types will make it possible to build a more coherent and cohesive professional community in which evidence-based improvement of student learning is tangible, visible, and ongoing.
Nils Peterson

Nonacademic Members Push Changes in Anthropology Group - Faculty - The Chronicle of Higher Education - 1 views

  • Cathleen Crain, an anthropologist who runs a consulting firm near Washington: "There is a growing vision of a unified anthropology, where academics informs practice and practice informs academics."
    • Nils Peterson: Anthropology is having a conversation about stakeholders, and this is impacting the national anthro organization. I wonder if it's producing metrics that might inform student-learning-outcomes work.
Gary Brown

Views: Asking Too Much (and Too Little) of Accreditors - Inside Higher Ed - 1 views

  • Senators want to know why accreditors haven’t protected the public interest.
  • Congress shouldn’t blame accreditors: it should blame itself. The existing accreditation system has neither ensured quality nor ferreted out fraud. Why? Because Congress didn’t want it to. If Congress truly wants to protect the public interest, it needs to create a system that ensures real accountability.
  • But turning accreditors into gatekeepers changed the picture. In effect, accreditors now held a gun to the heads of colleges and universities since federal financial aid wouldn’t flow unless the institution received “accredited” status.
  • Congress listened to higher education lobbyists and designated accreditors -- teams made up largely of administrators and faculty -- to be “reliable authorities” on educational quality. Intending to protect institutional autonomy, Congress appropriated the existing voluntary system by which institutions differentiated themselves.
  • Meanwhile, there is ample evidence that many accredited colleges are adding little educational value. The 2006 National Assessment of Adult Literacy revealed that nearly a third of college graduates were unable to compare two newspaper editorials or compute the cost of office items, prompting the Spellings Commission and others to raise concerns about accreditors’ attention to productivity and quality.
  • accreditation is “premised upon collegiality and assistance, rather than requirements that institutions meet certain standards (with public announcements when they don’t).”
  • A gatekeeping system using peer review is like a penal system that uses inmates to evaluate eligibility for parole. The conflicts of interest are everywhere -- and, surprise, virtually everyone is eligible!
  • But Congress wouldn’t let them. Rather than welcoming accreditors’ efforts to enhance their public oversight role, Congress told accreditors to back off and let nonprofit colleges and universities set their own standards for educational quality.
  • Accreditation is nothing more than an outdated industrial-era monopoly whose regulations prevent colleges from cultivating the skills, flexibility, and innovation that they need to ensure quality and accountability.
  • there is a much cheaper and better way: a self-certifying regimen of financial accountability, coupled with transparency about graduation rates and student success. (See some alternatives here and here.)
  • Such a system would prioritize student and parent assessment over the judgment of institutional peers or the educational bureaucracy. And it would protect students, parents, and taxpayers from fraud or mismanagement by permitting immediate complaints and investigations, with a notarized certification from the institution to serve as Exhibit A
  • The only way to protect the public interest is to end the current system of peer review patronage, and demand that colleges and universities put their reputation -- and their performance -- on the line.
  • Anne D. Neal is president of the American Council of Trustees and Alumni. The views stated herein do not represent the views of the National Advisory Committee on Institutional Quality and Integrity, of which she is a member.
  • The ascending view of accreditation.
Gary Brown

Community Colleges Must Focus on Quality of Learning, Report Says - Students - The Chronicle of Higher Education - 0 views

  • Increasing college completion is meaningless unless certificates and degrees represent real learning, which community colleges must work harder to ensure, says a report released on Thursday by the Center for Community College Student Engagement.
  • This year's report centers on "deep learning," or "broadly applicable thinking, reasoning, and judgment skills—abilities that allow individuals to apply information, develop a coherent world view, and interact in more meaningful ways."
  • 67 percent of community-college students said their coursework often involved analyzing the basic elements of an idea, experience, or theory; 59 percent said they frequently synthesized ideas, information, and experiences in new ways. Other averages were lower: 56 percent of students, for example, reported being regularly asked to examine the strengths or weaknesses of their own views on a topic. And just 52 percent of students said they often had to make judgments about the value or soundness of information as part of their academic work.
  • One problem may be low expectations,
  • 37 percent of full-time community-college students spent five or fewer hours a week preparing for class. Nineteen percent of students had never done two or more drafts of an assignment, and 69 percent had come to class unprepared at least once.
  • Nearly nine in 10 entering students said they knew how to get in touch with their instructors outside of class, and the same proportion reported that at least one instructor had learned their names. But more than two-thirds of entering students and almost half of more-seasoned students said they had never discussed ideas from their coursework with instructors outside of class.
  • This year's report also strongly recommends that colleges invest more in professional development, for part-time as well as full-time faculty. "The calls for increased college completion come at a time of increasing student enroll­ments and draconian budget cuts; and too often in those circumstances, efforts to develop faculty and staff take low priority,"
  • Lone Star College's Classroom Research Initiative, a form of professional development based on inquiry. Since last year, about 30 faculty members from the community college's five campuses have collaborated to examine assessment data from the report's surveys and other sources and to propose new ways to try to improve learning.
Gary Brown

Cheating Scandal Snares Hundreds in U. of Central Florida Course - The Ticker - The Chronicle of Higher Education - 1 views

  • evidence of widespread cheating
  • business course on strategic management,
  • I don’t condone cheating. But I think it is equally pathetic that faculty are put in situations where they feel the only option for an examination is an easy to grade multiple choice or true/false test
  • Faculty all need to wake up, as virtually all test banks, and also all instructor’s manuals with homework answers, are widely available on the internet.
  • I think we need to question why a class has 600 students enrolled.
  • Perhaps they are the ones being cheated.
Kimberly Green

Constant Curricular Change - 1 views

  • Faculty members routinely change their courses from semester to semester, experimenting with both minor changes and major innovations, according to a national survey released Saturday by the Association of American Colleges and Universities. But while professors see curricular innovation as part of their jobs, they remain uncertain about whether pedagogical efforts are appropriately rewarded, the study found. The survey -- of faculty members at all ranks at 20 four-year colleges and universities, including both public and private institutions -- found that 86.6 percent make some revision to courses at least once a year. Revisions could be relatively minor, with changes in the syllabus, readings, or assignments qualifying. But about 37 percent reported adopting a significant new pedagogy in at least one of their courses at least once a year -- with new pedagogies being defined as such approaches as experiential learning, service learning, and learning communities. Only 3 percent of faculty members surveyed said that they never or "almost never" make changes in the courses they teach from year to year.
Gary Brown

71 Presidents Pledge to Improve Their Colleges' Teaching and Learning - Faculty - The Chronicle of Higher Education - 0 views

  • In a venture known as the Presidents' Alliance for Excellence in Student Learning and Accountability, they have promised to take specific steps to gather more evidence about student learning, to use that evidence to improve instruction, and to give the public more information about the quality of learning on their campuses.
  • The 71 pledges, officially announced on Friday, are essentially a dare to accreditors, parents, and the news media: Come visit in two years, and if we haven't done these things, you can zing us.
  • deepen an ethic of professional stewardship and self-regulation among college leaders
  • Beginning in 2011, all first-year students at Westminster will be required to create electronic portfolios that reflect their progress in terms of five campuswide learning goals. And the college will expand the number of seniors who take the Collegiate Learning Assessment, so that the test can be used to help measure the strength of each academic major.
  • "The crucial thing is that all of our learning assessments have been designed and driven by the faculty," says Pamela G. Menke, Miami Dade's associate provost for academic affairs. "The way transformation of learning truly occurs is when faculty members ask the questions, and when they're willing to use what they've found out to make change.
  • Other assessment models might point some things out, but they won't be useful if faculty members don't believe in them."
  • "In the long term, as more people join, I hope that the Web site will provide a resource for the kinds of innovations that seem to be successful," he says. "That process might be difficult. Teaching is an art, not a science. But there is still probably a lot that we can learn from each other."
Gary Brown

Disciplines Follow Their Own Paths to Quality - Faculty - The Chronicle of Higher Education - 2 views

  • But when it comes to the fundamentals of measuring and improving student learning, engineering professors naturally have more to talk about with their counterparts at, say, Georgia Tech than with the humanities professors at Villanova
    • Gary Brown: Perhaps this is too bad....
  • But there is no nationally normed way to measure the particular kind of critical thinking that students of classics acquire
  • Her colleagues have created discipline-specific critical-reasoning tests for classics and political science.
  • Political science cultivates skills that are substantially different from those in classics, and in each case those skills can't be measured with a general-education test.
  • he wants to use tests of reasoning that are appropriate for each discipline
  • I believe Richard Paul has spent a lifetime articulating the characteristics of discipline-based critical thinking. But anyway, I think it is interesting that an attempt is being made to develop (perhaps) a "national standard" for critical thinking in classics. In order to assess anything effectively we need a standard. Without a standard there are no criteria and therefore no basis from which to assess. But standards do not necessarily have to be established at the national level. This raises the issue of scale. What is the appropriate scale from which to measure the quality and effectiveness of an educational experience? Any valid approach to quality assurance has to be multi-scaled and requires multiple measures over time. But to be honest, the issues of standards and scale are really just the tip of the outcomes iceberg.
    • Gary Brown: Missing the notion that the variance is in the activity more than the criteria. We hear little of embedding nationally normed and weighted assignments and then assessing the implementation and facilitation variables.... mirror, not lens.
  • the UW Study of Undergraduate Learning (UW SOUL). Results from the UW SOUL show that learning in college is disciplinary; therefore, real assessment of learning must occur (with central support and resources) in the academic departments. Generic approaches to assessing thinking, writing, research, quantitative reasoning, and other areas of learning may be measuring something, but they cannot measure learning in college.
  • It turns out there is a six-week, or 210+ hour, serious reading exposure to two or more domains outside one's own that "turns on" cross-domain mapping as a robust capability. Some people just happen to have accumulated this minimum level of basics, usually by unseen and unsensed happenstance involvements (rooming with an engineer, being the son of a dad changing domains/careers, etc.), that allows robust metaphor-based mapping.
Gary Brown

Learning Assessment: The Regional Accreditors' Role - Measuring Stick - The Chronicle of Higher Education - 0 views

  • The National Institute for Learning Outcomes Assessment has just released a white paper about the regional accreditors’ role in prodding colleges to assess their students’ learning
  • All four presidents suggested that their campuses’ learning-assessment projects are fueled by Fear of Accreditors. One said that a regional accreditor “came down on us hard over assessment.” Another said, “Accreditation visit coming up. This drives what we need to do for assessment.”
  • regional accreditors are more likely now than they were a decade ago to insist that colleges hand them evidence about student-learning outcomes.
  • Western Association of Schools and Colleges, Ms. Provezis reports, “almost every action letter to institutions over the last five years has required additional attention to assessment, with reasons ranging from insufficient faculty involvement to too little evidence of a plan to sustain assessment.”
  • The white paper gently criticizes the accreditors for failing to make sure that faculty members are involved in learning assessment.
  • “it would be good to know more about what would make assessment worthwhile to the faculty—for a better understanding of the source of their resistance.”
  • Many of the most visible and ambitious learning-assessment projects out there seem to strangely ignore the scholarly disciplines’ own internal efforts to improve teaching and learning.
  • fyi
Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 views

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
  • Third, we need to accept how good we already are, so we can recognize success when we see it.
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
  • Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Gary Brown

Empowerment Evaluation - 1 views

  • Empowerment Evaluation in Stanford University's School of Medicine
  • Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes.
  • It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform.
  • The steps of empowerment evaluation
  • designating a “critical friend” to communicate areas of potential improvement,
  • collecting evaluation data,
  • encouraging a cycle of reflection and action
  • establishing a culture of evidence
  • developing reflective educational practitioners.
  • cultivating a community of learners
  • yearly cycles of improvement at the Stanford University School of Medicine
  • The findings were presented in Academic Medicine, a medical education journal, earlier this year.
Gary Brown

Academic Grants Foster Waste and Antagonism - Commentary - The Chronicle of Higher Education - 1 views

  • We think that our work is primarily organized by institutions of higher education, or by departments, or by conferences, but in reality those have become but appendages to a huge system of distributing resources through grants.
  • It's time we looked at this system—and at its costs: unpaid, anxiety-filled hours upon hours for a single successful grant; scholarship shaped, or misshaped, according to the demands of marketlike forces and the interests of nonacademic private foundations. All to uphold a distributive system that fosters antagonistic competition and increasing inequality.
  • Every hour spent working on or worrying about grants is an hour that could be better spent on research (or family life, or civic engagement, or sleep). But every hour not spent on a grant gives a competitive edge to other applicants.
  • The grant is basically an outsourcing of assessment that could, in most situations, be carried out much better by paid professional staff members.
  • Meanwhile grant-receiving institutions, like universities, become increasingly dependent on grants, to the point that faculty members and other campus voices can scarcely be heard beneath the din of administrators exhorting them to get more and more grants.
  • Colleagues whose research may be equally valuable (based on traditional criteria of academic debate) could be denied resources and livelihoods because, instead of grant writing, they favor publishing, or public engagement, or teaching.
  • Grant applications normalize a mode of scholarly writing and thought that, whatever its merits, has not been chosen collectively by academe in the interests of good scholarship, but has been imposed from without, with the grant as its guide. And as application procedures grow more stringent, the quality of successful projects is likely to sink. Can we honestly expect good scholarship from scholars who must constantly concentrate on something other than their scholarship? Academic life is increasingly made up of a series of applications, while the applied-for work dwindles toward insignificance.
  • It's time, I think, to put an end to our rationalizations. My spine will not be straightened. The agony will not be wiped off my brain. My mind misshapen will not be pounded back, and I have to stop telling myself that everything will be OK. Months and years of my life have been taken away, and nothing short of systemic transformation will redeem them.
Theron DesRosier

Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessments to Standardized Tests - 2 views

  • "When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind, leading to results that would not correlate to a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement."
  • Another institution trying to make sense of the CLA. This study compared students' CLA scores with criteria-based scores of their eportfolios. The study used a modified version of the VALUE rubrics developed by the AACU. Our own Gary Brown was on the team that developed the critical thinking rubric for the VALUE project.
  • "The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement." This raises some questions: what meaning can we attach to these two non-correlated measures? What VSA requirements can rubric-based assessment NOT satisfy? Are those "requirements" really useful?
Gary Brown

The Future of Wannabe U. - The Chronicle Review - The Chronicle of Higher Education - 2 views

  • Alice didn't tell me about the topics of her research; instead she listed the number of articles she had written, where they had been submitted and accepted, the reputation of the journals, the data sets she was constructing, and how many articles she could milk from each data set.
  • colleges and universities have transformed themselves from participants in an audit culture to accomplices in an accountability regime.
  • higher education has inaugurated an accountability regime—a politics of surveillance, control, and market management that disguises itself as value-neutral and scientific administration.
  • A Wannabe administrator noted that the recipient had published well more than 100 articles. He never said why those articles mattered.
  • And all we have are numbers about teaching. And we don't know what the difference is between a [summary measure of] 7.3 and a 7.7 or an 8.2 and an 8.5."
  • The problem is that such numbers have no meaning. They cannot indicate the quality of a student's education.
  • Nor can the many metrics that commonly appear in academic (strategic) plans, like student credit hours per full-time-equivalent faculty member, or the percentage of classes with more than 50 students. Those productivity measures (for they are indeed productivity measures) might as well apply to the assembly-line workers who fabricate the proverbial widget, for one cannot tell what the metrics have to do with the supposed purpose of institutions of higher education—to create and transmit knowledge. That includes leading students to the possibility of a fuller life and an appreciation of the world around them and expanding their horizons.
  • But, like the fitness club's expensive cardio machines, a significant increase in faculty research, in the quality of student experiences (including learning), in the institution's service to its state, or in its standing among its peers may cost more than a university can afford to invest or would even dream of paying.
  • Such metrics are a speedup of the academic assembly line, not an intensification or improvement of student learning. Indeed, sometimes a boost in some measures, like an increase in the number of first-year students participating in "living and learning communities," may even detract from what students learn. (Wan U.'s pre-pharmacy living-and-learning community is so competitive that students keep track of one another's grades more than they help one another study. Last year one student turned off her roommate's alarm clock so that she would miss an exam and thus no longer compete for admission to the School of Pharmacy.)
  • Even metrics intended to indicate what students may have learned seem to have more to do with controlling faculty members than with gauging education. Take student-outcomes assessments, meant to be evaluations of whether courses have achieved their goals. They search for fault where earlier researchers would not have dreamed to look. When parents in the 1950s asked why Johnny couldn't read, teachers may have responded that it was Johnny's fault; they had prepared detailed lesson plans. Today student-outcomes assessment does not even try to discover whether Johnny attended class; instead it produces metrics about outcomes without considering Johnny's input.
  • A good one to wrestle with. It may be worth formulating distinctions we hold, and steering accordingly.
Gary Brown

Book review: Taking Stock: Research on Teaching and Learning in Higher Education « Tony Bates - 2 views

  • Christensen Hughes, J. and Mighty, J. (eds.) (2010) Taking Stock: Research on Teaching and Learning in Higher Education Montreal QC and Kingston ON: McGill-Queen’s University Press, 350 pp, C$/US$39.95
  • ‘The impetus for this event was the recognition that researchers have discovered much about teaching and learning in higher education, but that dissemination and uptake of this information have been limited. As such, the impact of educational research on faculty-teaching practice and student-learning experience has been negligible.’
  • Julia Christensen Hughes
  • Chapter 7: Faculty research and teaching approaches (Michael Prosser)
  • What faculty know about student learning (Maryellen Weimer)
  • Practices of Convenience: Teaching and Learning in Higher Education
  • Chapter 8: Student engagement and learning (Jillian Kinzie)
  • (p. 4) ‘much of our current approach to teaching in higher education might best be described as practices of convenience, to the extent that traditional pedagogical approaches continue to predominate. Such practices are convenient insofar as large numbers of students can be efficiently processed through the system. As far as learning effectiveness is concerned, however, such practices are decidedly inconvenient, as they fall far short of what is needed in terms of fostering self-directed learning, transformative learning, or learning that lasts.’
  • (p. 10) ‘…research suggests that there is an association between how faculty teach and how students learn, and how students learn and the learning outcomes achieved. Further, research suggests that many faculty members teach in ways that are not particularly helpful to deep learning. Much of this research has been known for decades, yet we continue to teach in ways that are contrary to these findings.’
  • ‘There is increasing empirical evidence from a variety of international settings that prevailing teaching practices in higher education do not encourage the sort of learning that contemporary society demands….Teaching remains largely didactic, assessment of student work is often trivial, and curricula are more likely to emphasize content coverage than acquisition of lifelong and life-wide skills.’
  • What other profession would go about its business in such an amateurish and unprofessional way as university teaching? Despite the excellent suggestions in this book from those ‘within the tent’, I don’t see change coming from within. We have government and self-imposed industry regulation to prevent financial advisers, medical practitioners, real estate agents, engineers, construction workers and many other professions from operating without proper training. How long are we prepared to put up with this unregulated situation in university and college teaching?
Gary Brown

Let's Make Rankings That Matter - Commentary - The Chronicle of Higher Education - 3 views

  • By outsourcing evaluation of our doctoral programs to an external agency, we allow ourselves to play the double game of insulating ourselves from the criticisms they may raise by questioning their accuracy, while embracing the praise they bestow.
  • The solution to the problem is obvious: Universities should provide relevant information to potential students and faculty members themselves, instead of relying on an outside body to do it for them, years too late. How? By carrying out yearly audits of their doctoral programs.
  • The ubiquitous rise of social networking and open access to information via electronic media facilitate this approach to self-evaluation of academic departments. There is no need to depend on an obsolete system that irregularly publishes rankings when all of the necessary tools—e-mail, databases, Web sites—are available at all institutions of higher learning.
  • A great paradox of modern academe is that our institutions take pride in being on the cutting edge of new ideas and innovations, yet remain resistant and even hostile to the openness made possible by technology
  • We should not hide our departments' deficiencies in debatable rankings, but rather be honest about those limitations in order to aggressively pursue solutions that will strengthen doctoral programs and the institutions in which they play a vital role.
Gary Brown

A Critic Sees Deep Problems in the Doctoral Rankings - Faculty - The Chronicle of Higher Education - 1 views

  • This week he posted a public critique of the NRC study on his university's Web site.
  • "Little credence should be given" to the NRC's ranges of rankings.
  • "There's not very much real information about quality in the simple measures they've got."
  • The NRC project's directors say that those small samples are not a problem, because the reputational scores were not converted directly into program assessments. Instead, the scores were used to develop a profile of the kinds of traits that faculty members value in doctoral programs in their field.
  • For one thing, Mr. Stigler says, the relationships between programs' reputations and the various program traits are probably not simple and linear.
  • if these correlations between reputation and citations were plotted on a graph, the most accurate representation would be a curved line, not a straight line. (The curve would occur at the tipping point where high citation levels make reputations go sky-high.)
  • Mr. Stigler says that it was a mistake for the NRC to so thoroughly abandon the reputational measures it used in its previous doctoral studies, in 1982 and 1995. Reputational surveys are widely criticized, he says, but they do provide a check on certain kinds of qualitative measures.
  • What is not challenged is the validity and utility of the construct itself--reputation rankings.
Judy Rumph

Blog U.: It Boils Down to... - Confessions of a Community College Dean - Inside Higher Ed - 4 views

  • I had a conversation a few days ago with a professor who helped me understand some of the otherwise-puzzling opposition faculty have shown to actually using the general education outcomes they themselves voted into place.
  • Yet getting those outcomes from ‘adopted’ to ‘used’ has proved a long, hard slog.
  • The delicate balance is in respecting the ambitions of the various disciplines, while still maintaining -- correctly, in my view -- that you can’t just assume that the whole of a degree is equal to the sum of its parts. Even if each course works on its own terms, if the mix of courses is wrong, the students will finish with meaningful gaps. Catching those gaps can help you determine what’s missing, which is where assessment is supposed to come in. But there’s some local history to overcome first.
  • This is an interesting take on what we are doing, and the comments are interesting.
Gary Brown

An Oasis of Niceness - Tweed - The Chronicle of Higher Education - 2 views

  • Not exactly, but faculty members and students at Rutgers University are embarking this week on a two-year effort to "cultivate small acts of courtesy and compassion" on the New Brunswick campus.
  • being civil is more than just demonstrating good manners.
  • "Living together more civilly means living together more peacefully, more kindly, and more justly," she says. Rutgers, Ms. Hull hopes, will become a "warmer, closer community" as a result of Project Civility
  • An item of urgency, in my view.
Joshua Yeidel

University World News - US: America can learn from Bologna process - 0 views

  • Lumina proposes that the US "adapt and apply the lessons learned from the Bologna Process" in the EU, which has developed methodologies that "uniquely focus on linking student learning and the outcomes of higher education" -- tautological though that sounds.
  • Apparently the "audacious" discussion in the WCET webinar yesterday (to be linked), featuring Ellen Wagner and Peter Smith, is old hat in Europe. A national "degree framework" is almost inconceivable in the US, but 'tuning' -- a "faculty-led approach that involves seeking input from students, recent graduates and employers to establish criterion-referenced learning outcomes and competencies" -- sounds a lot like goal-setting.