
Home/ Groups/ CTLT and Friends
Gary Brown

Texas A&M System Will Rate Professors Based on Their Bottom-Line Value - Faculty - The ... - 2 views

  • Under the proposal, officials will add the money generated by each professor and subtract that amount from his or her salary to get a bottom-line value for each, according to the article.
  • the public wanted accountability. "It's something that we're really not used to in higher education: for someone questioning whether we're working hard, whether our students are learning. That accountability is going to be with us from now on."
  • American Association of University Professors, blamed a conservative think tank with ties to Gov. Rick Perry for coming up with an idea that he said is simplistic and relies on "a silly measure" of accountability.
  •  
    Nothing more to say about this....
  •  
    I would simply like to note the thoughtless slide from a desire to know "whether we're working hard, whether our students are learning" to revenue measures.
  •  
    Our colleagues in the science disciplines, who had seen this, pointed out that, unlike other institutions where this kind of system goes largely unspoken, at least at Texas A&M the metric includes some value for those who teach undergraduates.
Joshua Yeidel

GOVT Week: David Bernstein on Top 10 Indicators of Performance Measurement Quality | AE... - 2 views

  •  
    Not surprisingly, the #1 indicator of performance measurement quality is "usefulness".
Gary Brown

At Colleges, Assessment Satisfies Only Accreditors - Letters to the Editor - The Chroni... - 2 views

  • Some of that is due to the influence of the traditional academic freedom that faculty members have enjoyed. Some of it is ego. And some of it is lack of understanding of how it can work. There is also a huge disconnect between satisfying outside parties, like accreditors and the government, and using assessment as a quality-improvement system.
  • We are driven by regional accreditation and program-level accreditation, not by quality improvement. At our institution, we talk about assessment a lot, and do just enough to satisfy the requirements of our outside reviewers.
  • Standardized direct measures, like the Major Field Test for M.B.A. graduates?
  • ...5 more annotations...
  • The problem with the test is that it does not directly align with our program's learning outcomes and it does not yield useful information for closing the loop. So why do we use it? Because it is accepted by accreditors as a direct measure and it is less expensive and time-consuming than more useful tools.
  • Without exception, the most useful information for improving the program and student learning comes from the anecdotal and indirect information.
  • We don't have the time and the resources to do what we really want to do to continuously improve the quality of our programs and instruction. We don't have a culture of continuous improvement. We don't make changes on a regular basis, because we are trapped by the catalog publishing cycle, accreditation visits, and the entrenched misunderstanding of the purposes of assessment.
  • The institutions that use it are ones that have adequate resources to do so. The time necessary for training, whole-system involvement, and developing the programs for improvement is daunting. And it is only being used by one regional accrediting body, as far as I know.
  • Until higher education as a whole is willing to look at changing its approach to assessment, I don't think it will happen.
  •  
    The challenge is another piece of evidence that the nuances of assessment, as they relate to teaching and learning, remain elusive.
Gary Brown

For-Profit Hearing: Legislation Might Include All Colleges & Greed is Good « ... - 2 views

  • Democrats were being unfair in singling out the for-profit institutions. Senator Enzi, the ranking minority member on the Committee, followed up with a statement released on the HELP webpage. Enzi said, “It is naïve to think that these problems are limited to just the for-profit sector.”
  • Senator Jeff Merkley (OR) asked if “student loans should be extended to programs that are not accredited.” Ms. Asher gave a polite lesson on the difference between accrediting an institution and accrediting a program.
  • Finances in higher education is confusing and accreditation is confusing
  • ...1 more annotation...
  • Ms. Asher was also a champion of reviewing the financial incentives for colleges.  “We need to shift incentives for colleges to focus on outcomes for students.”
Corinna Lo

Official Google Blog: Transparency, choice and control - now complete with a Dashboard! - 2 views

  •  
    "In an effort to provide you with greater transparency and control over your own data, we've built the Google Dashboard. Designed to be simple and useful, the Dashboard summarizes data for each product that you use (when signed in to your account) and provides you direct links to control your personal settings."
Joshua Yeidel

Mind - Research Upends Traditional Thinking on Study Habits - NYTimes.com - 2 views

  • “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.
  • “We have yet to identify the common threads between teachers who create a constructive learning atmosphere,” said Daniel T. Willingham, a psychologist at the University of Virginia and author of the book “Why Don’t Students Like School?”
  • psychologists have discovered that some of the most hallowed advice on study habits is flat wrong
  •  
    "Evidence" that the "evidence" is not very effective to promote change.  Apparently the context is crucial to adoption.
Gary Brown

Public Higher Education Is 'Eroding From All Sides,' Warn Political Scientists - Facult... - 2 views

  • The ideal of American public higher education may have entered a death spiral, several scholars said here Thursday during a panel discussion at the annual meeting of the American Political Science Association. That crisis might ultimately harm not only universities, but also democracy itself, they warned.
  • And families who are frozen out of the system see public universities as something for the affluent. They'd rather see the state spend money on health care."
  • Cultural values don't support the liberal arts. Debt-burdened families aren't demanding it. The capitalist state isn't interested in it. Universities aren't funding it."
  • ...3 more annotations...
  • Instead, all of public higher education will be essentially vocational in nature, oriented entirely around the market logic of job preparation. Instead of educating whole persons, Ms. Brown warned, universities will be expected to "build human capital," a narrower and more hollow mission.
  • His own campus, Mr. Nelson said, has recently seen several multimillion-dollar projects that were favorites of administrators but were not endorsed by the faculty.
  • Instead, he said that faculty activists should open up a more basic debate about the purposes of education. They should fight, he said, for a tuition-free public higher-education system wholly subsidized by the federal government.
  •  
    The issues are taking root in disciplinary discussions, so perhaps awareness and response will sprout.
Gary Brown

News: Assessing the Assessments - Inside Higher Ed - 2 views

  • The validity of a measure is based on evidence regarding the inferences and assumptions that are intended to be made and the uses to which the measure will be put. Showing that the three tests in question are comparable does not support Shulenburger's assertion regarding the value-added measure as a valid indicator of institutional effectiveness. The claim that public university groups have previously judged the value-added measure as appropriate does not tell us anything about the evidence upon which this judgment was based nor the conditions under which the judgment was reached. As someone familiar with the process, I would assert that there was no compelling evidence presented that these instruments and the value-added measure were validated for making this assertion (no such evidence was available at the time), which is the intended use in the VSA.
  • (however much the sellers of these tests tell you that those samples are "representative"), they provide an easy way out for academic administrators who want to avoid the time-and-effort consuming but incredibly valuable task of developing detailed major program learning outcome statements (even the specialized accrediting bodies don't get down to the level of discrete, operational statements that guide faculty toward appropriate assessment design)
  • If somebody really cared about "value added," they could look at each student's first essay in this course, and compare it with that same student's last essay in this course. This person could then evaluate each individual student's increased mastery of the subject-matter in the course (there's a lot) and also the increased writing skill, if any.
  • ...1 more annotation...
  • These skills cannot be separated out from student success in learning sophisticated subject-matter, because understanding anthropology, or history of science, or organic chemistry, or Japanese painting, is not a matter of absorbing individual facts, but learning facts and ways of thinking about them in a seamless, synthetic way. No assessment scheme that neglects these obvious facts about higher education is going to do anybody any good, and we'll be wasting valuable intellectual and financial resources if we try to design one.
  •  
    Ongoing discussion of these tools. Note Longanecker's comment and ask me why.
Gary Brown

Book review: Taking Stock: Research on Teaching and Learning in Higher Educat... - 2 views

  • Christensen Hughes, J. and Mighty, J. (eds.) (2010) Taking Stock: Research on Teaching and Learning in Higher Education. Montreal, QC, and Kingston, ON: McGill-Queen’s University Press, 350 pp, C$/US$39.95
  • ‘The impetus for this event was the recognition that researchers have discovered much about teaching and learning in higher education, but that dissemination and uptake of this information have been limited. As such, the impact of educational research on faculty-teaching practice and student-learning experience has been negligible.’
  • Julia Christensen Hughes
  • ...10 more annotations...
  • Chapter 7: Faculty research and teaching approaches Michael Prosser
  • What faculty know about student learning Maryellen Weimer
  • Practices of Convenience: Teaching and Learning in Higher Education
  • Chapter 8: Student engagement and learning: Jillian Kinzie
  • (p. 4)
  • ‘much of our current approach to teaching in higher education might best be described as practices of convenience, to the extent that traditional pedagogical approaches continue to predominate. Such practices are convenient insofar as large numbers of students can be efficiently processed through the system. As far as learning effectiveness is concerned, however, such practices are decidedly inconvenient, as they fall far short of what is needed in terms of fostering self-directed learning, transformative learning, or learning that lasts.’
  • p. 10:
  • ‘…research suggests that there is an association between how faculty teach and how students learn, and how students learn and the learning outcomes achieved. Further, research suggests that many faculty members teach in ways that are not particularly helpful to deep learning. Much of this research has been known for decades, yet we continue to teach in ways that are contrary to these findings.’
  • ‘There is increasing empirical evidence from a variety of international settings that prevailing teaching practices in higher education do not encourage the sort of learning that contemporary society demands….Teaching remains largely didactic, assessment of student work is often trivial, and curricula are more likely to emphasize content coverage than acquisition of lifelong and life-wide skills.’
  • What other profession would go about its business in such an amateurish and unprofessional way as university teaching? Despite the excellent suggestions in this book from those ‘within the tent’, I don’t see change coming from within. We have government and self-imposed industry regulation to prevent financial advisers, medical practitioners, real estate agents, engineers, construction workers and many other professions from operating without proper training. How long are we prepared to put up with this unregulated situation in university and college teaching?
Gary Brown

The Future of Wannabe U. - The Chronicle Review - The Chronicle of Higher Education - 2 views

  • Alice didn't tell me about the topics of her research; instead she listed the number of articles she had written, where they had been submitted and accepted, the reputation of the journals, the data sets she was constructing, and how many articles she could milk from each data set.
  • colleges and universities have transformed themselves from participants in an audit culture to accomplices in an accountability regime.
  • higher education has inaugurated an accountability regime—a politics of surveillance, control, and market management that disguises itself as value-neutral and scientific administration.
  • ...7 more annotations...
  • A Wannabe administrator noted that the recipient had published well more than 100 articles. He never said why those articles mattered.
  • And all we have are numbers about teaching. And we don't know what the difference is between a [summary measure of] 7.3 and a 7.7 or an 8.2 and an 8.5."
  • The problem is that such numbers have no meaning. They cannot indicate the quality of a student's education.
  • Nor can the many metrics that commonly appear in academic (strategic) plans, like student credit hours per full-time-equivalent faculty member, or the percentage of classes with more than 50 students. Those productivity measures (for they are indeed productivity measures) might as well apply to the assembly-line workers who fabricate the proverbial widget, for one cannot tell what the metrics have to do with the supposed purpose of institutions of higher education—to create and transmit knowledge. That includes leading students to the possibility of a fuller life and an appreciation of the world around them and expanding their horizons.
  • But, like the fitness club's expensive cardio machines, a significant increase in faculty research, in the quality of student experiences (including learning), in the institution's service to its state, or in its standing among its peers may cost more than a university can afford to invest or would even dream of paying.
  • Such metrics are a speedup of the academic assembly line, not an intensification or improvement of student learning. Indeed, sometimes a boost in some measures, like an increase in the number of first-year students participating in "living and learning communities," may even detract from what students learn. (Wan U.'s pre-pharmacy living-and-learning community is so competitive that students keep track of one another's grades more than they help one another study. Last year one student turned off her roommate's alarm clock so that she would miss an exam and thus no longer compete for admission to the School of Pharmacy.)
  • Even metrics intended to indicate what students may have learned seem to have more to do with controlling faculty members than with gauging education. Take student-outcomes assessments, meant to be evaluations of whether courses have achieved their goals. They search for fault where earlier researchers would not have dreamed to look. When parents in the 1950s asked why Johnny couldn't read, teachers may have responded that it was Johnny's fault; they had prepared detailed lesson plans. Today student-outcomes assessment does not even try to discover whether Johnny attended class; instead it produces metrics about outcomes without considering Johnny's input.
  •  
    A good one to wrestle with.  It may be worth formulating distinctions we hold, and steering accordingly.
Joshua Yeidel

Google's new Social Search surprisingly useful - Ars Technica - 2 views

  •  
    "No, Social Search isn't yet another social network aggregator. It's a way for you to make your Google search results more relevant by adding a section dedicated to content written by your friends and acquaintances. Though limited, we think it's pretty useful thus far."
  •  
    A piece of a Personal Learning Network -- searchability.
Gary Brown

Online Colleges and States Are at Odds Over Quality Standards - Wired Campus - The Chro... - 2 views

  • But state officials said they are still concerned that self-imposed standards are not good enough and that online programs are not consistent in providing students with high-quality education.
  • “We’re very interested in making sure that as many good opportunities are available to students as possible,” added David Longanecker, president of the Western Interstate Commission.
  • The group called for a more uniform accreditation standard across state lines as well as a formal framework for getting a conversation on regulation started. Even with the framework in place, however, the state representatives said it will be difficult to get state-education agencies and state legislatures to agree. “Trying to bring 50 different people together is really tough,” Mr. Longanecker said.
  • ...1 more annotation...
  • Like state regulators, colleges are also facing hard decisions on quality standards. With such a diversity in online institutions, Ms. Eaton said it will be difficult to impose a uniform set of standards. “If we were in agreement about quality,” she said, “somebody’s freedom would be compromised.”
  •  
    I am dismayed to see Longanecker's position on this.
Gary Brown

Conference Highlights Contradictory Attitudes Toward Global Rankings - International - ... - 2 views

  • He emphasized, however, that "rankings are only useful if the indicators they use don't just measure things that are easy to measure, but the things that need to be measured."
  • "In Malaysia we do not call it a ranking exercise," she said firmly, saying that the effort was instead a benchmarking exercise that attempts to rate institutions against an objective standard.
  • "If Ranking Is the Disease, Is Benchmarking the Cure?" Jamil Salmi, tertiary education coordinator at the World Bank, said that rankings are "just the tip of the iceberg" of a growing accountability agenda, with students, governments, and employers all seeking more comprehensive information about institutions
  • ...3 more annotations...
  • "Rankings are the most visible and easy to understand" of the various measures, but they are far from the most reliable,
  • Jamie P. Merisotis
  • He described himself as a longtime skeptic of rankings, but noted that "these kinds of forums are useful, because you have to have conversations involving the producers of rankings, consumers, analysts, and critics."
Gary Brown

Types of Reliability - 2 views

  • You learned in the Theory of Reliability that it's not possible to calculate reliability exactly. Instead, we have to estimate reliability, and this is always an imperfect endeavor.
  •  
    A recommended resource
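The excerpt's point, that reliability can only be estimated, is easy to demonstrate. Below is a minimal sketch (the rubric scores are invented for illustration) computing Cronbach's alpha, one common internal-consistency estimate of reliability:

```python
# Sketch: estimating internal-consistency reliability (Cronbach's alpha).
# The ratings below are hypothetical; alpha is an *estimate* of
# reliability computed from a sample, never an exact value.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = items (or raters)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 students scored on 4 rubric dimensions (1-5 scale)
ratings = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(ratings), 3))  # → 0.963
```

A different sample of students, or a different split of items, would yield a different alpha, which is exactly the "imperfect endeavor" the resource describes.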
Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 views

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • ...7 more annotations...
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
  • Third, we need to accept how good we already are, so we can recognize success when we see it.
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
  •  
    Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Matthew Tedder

Post Mortem For A Dead Newspaper | Techdirt - 1 views

  •  
    Not education per se, but certainly educational.
Gary Brown

Online Colleges and States Are at Odds Over Quality Standards - Wired Campus - The Chro... - 1 views

  • the group called for a more uniform accreditation standard across state lines as well as a formal framework for getting a conversation on regulation started.
  • College officials claim that what states really mean when they discuss quality in online education is the credibility of online education in general. John F. Ebersole, president of Excelsior College, said “there is a bit of a double standard” when it comes to regulating online institutions; states, he feels, apply stricter standards to the online world.
  •  
    I note the underlying issue of "credibility" as the core of accreditation. It raises the question, again:  Why would standardized tests be presumed, as Excelsior does, to be a better indicator than a model of stakeholder endorsement?
Theron DesRosier

The Atlantic Century: Benchmarking EU and U.S. Innovation and Competitiveness | The Inf... - 1 views

  •  
    "ITIF uses 16 indicators to assess the global innovation-based competitiveness of 36 countries and 4 regions. This report finds that while the U.S. still leads the EU in innovation-based competitiveness, it ranks sixth overall. Moreover, the U.S. ranks last in progress toward the new knowledge-based innovation economy over the last decade."
Matthew Tedder

Six Rules For Social Networks - Forbes.com - 1 views

  •  
    Seems a level-headed and thoughtful look at what might make for successful social networking.
Nils Peterson

AAC&U News | April 2010 | Feature - 1 views

  • Comparing Rubric Assessments to Standardized Tests
  • First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that state institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU), and the Association of State Colleges and Universities (AASCU).
  • And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.  
  • ...13 more annotations...
  • outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication
  • “The wonderful thing about this approach is that full-time faculty across the university  are gathering data about how their  students are doing, and since they’ll be teaching their courses in the future, they’re really invested in rubric assessment—they really care,” Escoe says. In one case, the capstone survey data revealed that students weren’t doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
  • the university planned a “dual pilot” study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment,
  • The rubrics the UC team used were slightly modified versions of those developed by AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) project. 
  • In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years.  The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high.
  • “It’s not some nitpicky, onerous administrative add-on. It’s what we do as we teach our courses, and it really helps close that assessment loop.”
  • There were many factors that may have contributed to the lack of correlation, she says, including the fact that the CLA is timed, while the rubric assignments are not; and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points “in a black box”:
  • faculty members may have had exceptionally high expectations of their honors students and assessed the e-portfolios with those high expectations in mind—leading to results that would not correlate to a computer-scored test. 
  • “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”
  • Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students’ whole program of study.”  Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning.
  • “We’re really trying to stress that assessment is pedagogy,”
  • “We found no statistically significant correlation between the CLA scores and the portfolio scores,”
  • In the end, Escoe says, the two assessments are both useful, but for different things. The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
    • Nils Peterson
       
      CLA did not provide information for continuous program improvement -- we've heard this argument before
  •  
    The lack of correlation might be rephrased: there appears to be no correlation between what is useful for faculty who teach and what is useful for the VSA. A corollary question: of what use is the VSA?
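The "no statistically significant correlation" finding quoted above rests on a standard computation. A minimal sketch, using invented paired scores, of the Pearson r and its t statistic:

```python
# Sketch of the kind of check behind "no statistically significant
# correlation" between CLA scores and portfolio rubric scores.
# The paired scores below are invented; a real study would use the
# institution's actual matched student scores.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: CLA (scaled) vs. portfolio rubric (1-6)
cla =    [1050, 1180, 990, 1210, 1100, 1020, 1150, 1080]
rubric = [4.0,  3.5,  4.5, 3.0,  5.0,  4.0,  3.5,  4.5]

r = pearson_r(cla, rubric)
n = len(cla)
t = r * math.sqrt((n - 2) / (1 - r * r))  # t statistic with n - 2 df
print(round(r, 2), round(t, 2))
```

With df = n - 2 = 6, |t| ≈ 2.23 falls below the two-tailed critical value of about 2.45 at α = .05, so even this visibly negative sample correlation would be reported as not statistically significant, which is one reason small pilot studies like the one described often find "no correlation."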