CTLT and Friends: Group items matching "value-added" in title, tags, annotations or URL
Joshua Yeidel

Evaluating Teachers: The Important Role of Value-Added [pdf] - 1 views

    "We conclude that value-added data has an important role to play in teacher evaluation systems, but that there is much to be learned about how best to use value-added information in human resource decisions." No mention of the role of assessment in improvement.
S Spaeth

Seamless Services?: Lowering barriers to adding images to posts - 0 views

    The Zemanta blogging service helps me reconsider the value of a Flickr account. I established a Flickr account, SCSpaeth, several years ago and added pictures periodically. But I have never made much use of the pictures. While pictures can add to the interest in a blog post, finding images, adding them to the blog and documenting them properly took a lot of effort. So, I only added them when they added enough to compensate for the extra work. The Zemanta service changes the barriers to using more images.
Joshua Yeidel

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 2 views

    So here it is: by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students' learning. And given the imprecision of the value-added scores, just by chance some teachers will be categorized as ineffective or minimally effective two years in a row. The system is rigged to label teachers as ineffective or minimally effective as a precursor to firing them.
    How assessment of value-added actually works in one setting: the Washington, D.C. public schools. This article actually works the numbers to show that the system is set up to put teachers in the firing zone. Note the tyranny of numerical ratings (some of them subjective) converted into meanings like "minimally effective".
Gary Brown

News: Assessing the Assessments - Inside Higher Ed - 2 views

  • The validity of a measure is based on evidence regarding the inferences and assumptions that are intended to be made and the uses to which the measure will be put. Showing that the three tests in question are comparable does not support Shulenburger's assertion regarding the value-added measure as a valid indicator of institutional effectiveness. The claim that public university groups have previously judged the value-added measure as appropriate does not tell us anything about the evidence upon which this judgment was based nor the conditions under which the judgment was reached. As someone familiar with the process, I would assert that there was no compelling evidence presented that these instruments and the value-added measure were validated for making this assertion (no such evidence was available at the time), which is the intended use in the VSA.
  • (however much the sellers of these tests tell you that those samples are "representative"), they provide an easy way out for academic administrators who want to avoid the time- and effort-consuming but incredibly valuable task of developing detailed major program learning outcome statements (even the specialized accrediting bodies don't get down to the level of discrete, operational statements that guide faculty toward appropriate assessment design)
  • If somebody really cared about "value added," they could look at each student's first essay in this course, and compare it with that same student's last essay in this course. This person could then evaluate each individual student's increased mastery of the subject-matter in the course (there's a lot) and also the increased writing skill, if any.
  • These skills cannot be separated out from student success in learning sophisticated subject-matter, because understanding anthropology, or history of science, or organic chemistry, or Japanese painting, is not a matter of absorbing individual facts, but learning facts and ways of thinking about them in a seamless, synthetic way. No assessment scheme that neglects these obvious facts about higher education is going to do anybody any good, and we'll be wasting valuable intellectual and financial resources if we try to design one.
    Ongoing discussion of these tools. Note Longanecker's comment and ask me why.
Gary Brown

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • Others say those who take the test have little motivation to do well, which makes it tough to draw conclusions from their performance.
  • "Everything that No Child Left Behind signified during the Bush administration—we operate 180 degrees away from that," says Roger Benjamin, president of the Council for Aid to Education, which developed and promotes the CLA. "We don't want this to be a high-stakes test. We're putting a stake in the ground on classic liberal-arts issues. I'm willing to rest my oar there. These core abilities, these higher-order skills, are very important, and they're even more important in a knowledge economy where everyone needs to deal with a surplus of information." Only an essay test, like the CLA, he says, can really get at those skills.
  • "The CLA is really an authentic assessment process," says Pedro Reyes, associate vice chancellor for academic planning and assessment at the University of Texas system.
  • "The Board of Regents here saw that it would be an important test because it measures analytical ability, problem-solving ability, critical thinking, and communication. Those are the skills that you want every undergraduate to walk away with." (Other large systems that have embraced the CLA include California State University and the West Virginia system.)
  • value added
  • We began by administering a retired CLA question, a task that had to do with analyzing crime-reduction strategies,
  • performance task that mirrors the CLA
  • Mr. Ernsting and Ms. McConnell are perfectly sincere about using CLA-style tasks to improve instruction on their campuses. But at the same time, colleges have a less high-minded motive for familiarizing students with the CLA style: It just might improve their scores when it comes time to take the actual test.
  • by 2012, the CLA scores of more than 100 colleges will be posted, for all the world to see, on the "College Portrait" Web site of the Voluntary System of Accountability, an effort by more than 300 public colleges and universities to provide information about life and learning on their campuses.
  • If familiarizing students with CLA-style tasks does raise their scores, then the CLA might not be a pure, unmediated reflection of the full range of liberal-arts skills. How exactly should the public interpret the scores of colleges that do not use such training exercises?
  • Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University at Indianapolis, believes it is a serious mistake to publicly release and compare scores on the test. There is too much risk, she says, that policy makers and the public will misinterpret the numbers.
  • most colleges do not use a true longitudinal model: That is, the students who take the CLA in their first year do not take it again in their senior year. The test's value-added model is therefore based on a potentially apples-and-oranges comparison.
  • freshman test-takers' scores are assessed relative to their SAT and ACT scores, and so are senior test-takers' scores. For that reason, colleges cannot game the test by recruiting an academically weak pool of freshmen and a strong pool of seniors.
  • students do not always have much motivation to take the test seriously
  • seniors, who are typically recruited to take the CLA toward the end of their final semester, when they can already taste the graduation champagne.
  • Of the few dozen universities that had already chosen to publish CLA data on that site, roughly a quarter of the reports appeared to include erroneous descriptions of the year-to-year value-added scores.
  • It is clear that CLA scores do reflect some broad properties of a college education.
  • Students' CLA scores improved if they took courses that required a substantial amount of reading and writing. Many students didn't take such courses, and their CLA scores tended to stay flat.
  • Colleges that make demands on students can actually develop their skills on the kinds of things measured by the CLA.
  • Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field.
  • Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements.
  • The CLA is used at more than 400 colleges
  • Since its debut a decade ago, it has been widely praised as a sophisticated alternative to multiple-choice tests
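The "value added" model described in the excerpts above — a student's CLA score judged against the score that their SAT or ACT score would predict — can be sketched as a regression residual. This is an illustrative simplification, not the CLA's actual scoring model; the data, variable names, and the simple one-predictor regression are all hypothetical.

```python
# Hypothetical sketch of a "value added" score: fit observed CLA scores
# against entering SAT scores, then treat each residual (observed minus
# predicted) as the value added. Not the CLA's actual model.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

def value_added(sat_scores, cla_scores):
    """Residuals: each observed CLA score minus the score predicted from SAT."""
    slope, intercept = fit_line(sat_scores, cla_scores)
    return [cla - (intercept + slope * sat)
            for sat, cla in zip(sat_scores, cla_scores)]
```

Note that this framing makes the critiques in the excerpts concrete: residuals from a cross-sectional fit compare different freshmen and seniors (the "apples-and-oranges" problem), and anything that shifts raw scores — such as practicing CLA-style tasks — shifts the residuals too.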
Joshua Yeidel

Students Know Good Teaching When They Get It, Survey Finds - 2 views

    ... as measured by student evals and "value-added modeling".  Note some of the student eval items, though... e.g., students agree or disagree with "In this class, we learn to correct our mistakes."
Joshua Yeidel

Blog U.: The Challenge of Value-Added - Digital Tweed - Inside Higher Ed - 0 views

    Quoting a 1984 study, "higher education should ensure that the mounds of data already collected on students are converted into useful information and fed back [to campus officials and faculty] in ways that enhance student learning and lead to improvement in programs, teaching practices, and the environment in which teaching and learning take place." The example given is an analysis of test scores in the Los Angeles Unified School District by the LA Times.
    It's going to take some assessment (and political) smarts to deflect the notion that existing data can be re-purposed easily to assess "value-added".
S Spaeth

CEL | Daniel Goleman - Ecological Intelligence - 0 views

  • Psychologists conventionally view intelligence as residing within an individual. But the ecological abilities we need in order to survive today must be a collective intelligence, one that we learn and master as a species, and that resides in a distributed fashion among far-flung networks of people. The challenges we face are too varied, too subtle, and too complicated to be understood and overcome by a single person; their recognition and solution require intense efforts by a vastly diverse range of experts, businesspeople, activists — by all of us.
    The shared nature of ecological intelligence makes it synergistic with social intelligence, which gives us the capacity to coordinate and harmonize our efforts. The art of working together effectively, as mastered by a star performing team, combines abilities like empathy and perspective taking, candor and cooperation, to create person-to-person links that let information gain added value as it travels. Collaboration and the exchange of information are vital to amassing the essential ecological insights and necessary database that allow us to act for the greater good.
Joshua Yeidel

A Measure of Learning Is Put to the Test - Faculty - The Chronicle of Higher Education - 1 views

  • "The CLA is really an authentic assessment process,"
    • Joshua Yeidel
      What is the meaning of "authentic" in this statement? It certainly isn't "situated in the real world" or "of intrinsic value".
  • add CLA-style assignments to their liberal-arts courses.
    • Joshua Yeidel
      Maybe the best way to prepare for the test, but is it the best way to develop analytical ability, et al.?
  • the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school.
    • Joshua Yeidel
      If SAT and ACT are measuring the same things as CLA, then why not just use them? If they are measuring different things, why "control for" their scores?
  • improved models of instruction.
  • it measures analytical ability, problem-solving ability, critical thinking, and communication.
  • "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA,"
    • Joshua Yeidel
      Just in case anyone missed the message: pay attention to learning, and you'll _probably_ do better on the CLA. Get students to practice CLA tasks, and you _will_ do better on the CLA.
  • "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."
    • Joshua Yeidel
      It's not clear what "standardized tests" means in this comment. Does the "lack of variance" apply to all assessments (including, e.g., e-portfolios)?
  • Can the CLA fill both of those roles?
    A summary of the current state of "thinking" with regard to CLA. Many fallacies and contradictions are (unintentionally) exposed. At least CLA appears to be more about skills than content (though the question of how it is graded isn't even raised), but the "performance task" approach is the smallest possible step in that direction.
Gary Brown

Renewed Debate Over the 3-Year B.A. - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • Zemsky, chairman of the Learning Alliance for Higher Education, wrote in The Chronicle in August. Shifting to a three-year baccalaureate, Zemsky added, would force universities to "judge whether their shorter degree programs were achieving the same learning outcomes as their four-year programs had promised; they would find themselves in need of the performance measures they had hitherto eschewed." The idea has stirred some support, as well as considerable opposition.
  • the reality is that the question of whether or not this makes sense may have already been made for us by the Bologna Process, which has been moving toward mainstreaming and standardizing three-year degrees across the European Union and beyond (46 countries are participating) for some time now.
  • This idea treats an academic credit as a purchasable commodity, and a college experience as quantifiable, subject to rules of efficiency rather than humane values. In reality, so-called "credits" have no standard meaning or value. Furthermore, the idea on its own is superficial. Why not two years? One? Five?
  • For those who see college as a place to learn marketable skills, the less time and money it takes, the better. For others who see college as a place to learn to think and to learn about the world and others as broadly as possible, and to grow into one's own, why rush? (The Choice,
  • they might want to rethink not just what time of year and how long students are in the classroom, but how student accomplishment is measured.
  • The high schools are not going to suddenly become more rigorous because the colleges reduce their expectations.
    Today's rip-tide toward measures, this time from Alexander and Zemsky (and others), and the implications of standardized measures.
Judy Rumph

Views: Why Are We Assessing? - Inside Higher Ed - 1 views

  • Amid all this progress, however, we seem to have lost our way. Too many of us have focused on the route we’re traveling: whether assessment should be value-added; the improvement versus accountability debate; entering assessment data into a database; pulling together a report for an accreditor. We’ve been so focused on the details of our route that we’ve lost sight of our destination.
  • Our destination, which is what we should be focusing on, is the purpose of assessment. Over the last decades, we've consistently talked about two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems — things that need improvement — while accountability means using assessment to show that we're already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.
  • The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education
  • Our second common purpose of assessment should be making sure not only that students learn what’s important, but that their learning is of appropriate scope, depth, and rigor.
  • Third, we need to accept how good we already are, so we can recognize success when we see it.
  • And we haven’t figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need.
  • Because we're not telling the stories of our successful outcomes in simple, understandable terms, the public continues to define quality using the outdated concept of inputs like faculty credentials, student aptitude, and institutional wealth — things that by themselves don’t say a whole lot about student learning.
  • And people like to invest in success. Because the public doesn't know how good we are at helping students learn, it doesn't yet give us all the support we need in our quest to give our students the best possible education.
  • But while virtually every college and university has had to make draconian budget cuts in the last couple of years, with more to come, I wonder how many are using solid, systematic evidence — including assessment evidence — to inform those decisions.
  • Now is the time to move our focus from the road we are traveling to our destination: a point at which we all are prudent, informed stewards of our resources… a point at which we each have clear, appropriate, justifiable, and externally-informed standards for student learning. Most importantly, now is the time to move our focus from assessment to learning, and to keeping our promises. Only then can we make higher education as great as it needs to be.
    Yes, this article resonated with me too. Especially connecting assessment to teaching and learning. The most important purpose of assessment should be not improvement or accountability but their common aim: everyone wants students to get the best possible education.... today we seem to be devoting more time, money, thought, and effort to assessment than to helping faculty help students learn as effectively as possible. When our colleagues have disappointing assessment results, and they don't know what to do to improve them, I wonder how many have been made aware that, in some respects, we are living in a golden age of higher education, coming off a quarter-century of solid research on practices that promote deep, lasting learning. I wonder how many are pointed to the many excellent resources we now have on good teaching practices, including books, journals, conferences and, increasingly, teaching-learning centers right on campus. I wonder how many of the graduate programs they attended include the study and practice of contemporary research on effective higher education pedagogies. No wonder so many of us are struggling to make sense of our assessment results! Too many of us are separating work on assessment from work on improving teaching and learning, when they should be two sides of the same coin. We need to bring our work on teaching, learning, and assessment together.
Gary Brown

Views: Asking Too Much (and Too Little) of Accreditors - Inside Higher Ed - 1 views

  • Senators want to know why accreditors haven’t protected the public interest.
  • Congress shouldn’t blame accreditors: it should blame itself. The existing accreditation system has neither ensured quality nor ferreted out fraud. Why? Because Congress didn’t want it to. If Congress truly wants to protect the public interest, it needs to create a system that ensures real accountability.
  • But turning accreditors into gatekeepers changed the picture. In effect, accreditors now held a gun to the heads of colleges and universities since federal financial aid wouldn’t flow unless the institution received “accredited” status.
  • Congress listened to higher education lobbyists and designated accreditors -- teams made up largely of administrators and faculty -- to be “reliable authorities” on educational quality. Intending to protect institutional autonomy, Congress appropriated the existing voluntary system by which institutions differentiated themselves.
  • A gatekeeping system using peer review is like a penal system that uses inmates to evaluate eligibility for parole. The conflicts of interest are everywhere -- and, surprise, virtually everyone is eligible!
  • accreditation is “premised upon collegiality and assistance; rather than requirements that institutions meet certain standards (with public announcements when they don’t).”
  • Meanwhile, there is ample evidence that many accredited colleges are adding little educational value. The 2006 National Assessment of Adult Literacy revealed that nearly a third of college graduates were unable to compare two newspaper editorials or compute the cost of office items, prompting the Spellings Commission and others to raise concerns about accreditors’ attention to productivity and quality.
  • But Congress wouldn’t let them. Rather than welcoming accreditors’ efforts to enhance their public oversight role, Congress told accreditors to back off and let nonprofit colleges and universities set their own standards for educational quality.
  • Accreditation is nothing more than an outdated industrial-era monopoly whose regulations prevent colleges from cultivating the skills, flexibility, and innovation that they need to ensure quality and accountability.
  • there is a much cheaper and better way: a self-certifying regimen of financial accountability, coupled with transparency about graduation rates and student success. (See some alternatives here and here.)
  • Such a system would prioritize student and parent assessment over the judgment of institutional peers or the educational bureaucracy. And it would protect students, parents, and taxpayers from fraud or mismanagement by permitting immediate complaints and investigations, with a notarized certification from the institution to serve as Exhibit A
  • The only way to protect the public interest is to end the current system of peer review patronage, and demand that colleges and universities put their reputation -- and their performance -- on the line.
  • Anne D. Neal is president of the American Council of Trustees and Alumni. The views stated herein do not represent the views of the National Advisory Committee on Institutional Quality and Integrity, of which she is a member.
    The ascending view of accreditation.
Nils Peterson

U. of Phoenix Reports on Students' Academic Progress - Measuring Stick - The Chronicle of Higher Education - 0 views

  • In comparisons of seniors versus freshmen within the university, the 2,428 seniors slightly outperformed 4,003 freshmen in all categories except natural sciences, in which they were equivalent.
    • Nils Peterson
      This is the value-added measure.
  • The University of Phoenix has released its third “Academic Annual Report,” a document that continues to be notable not so much for the depth of information it provides on its students’ academic progress but for its existence at all.
    • Nils Peterson
      Provides a range of measures, including demographics, satisfaction, indirect measures of perceived utility, and direct measures using national tests.
  • The Phoenix academic report also includes findings on students’ performance relative to hundreds of thousands of students at nearly 400 peer institutions on two standardized tests
  • University of Phoenix seniors slightly underperformed a comparison group of 42,649 seniors at peer institutions in critical thinking, humanities, social sciences, and natural sciences, and moderately underperformed the peer group in reading, writing, and mathematics.
Gary Brown

Can We Afford Our State Colleges? - Brainstorm - The Chronicle of Higher Education - 0 views

  • Experts do not agree on the precise numbers, but over the past generation we have moved from an environment in which states paid for 70 percent of cost and students paid 30 percent, to a situation in which those numbers have exactly reversed. Increasingly, tuition accounts for the lion’s share of institutional budgets, with state appropriations playing a minority role.
  • The sense I got Friday was that higher-education professionals do not expect the “good old days” to return.
  • the apparent consensus that public education needs to be more productive, because there was no discussion of the definition of productivity.
  • But even when instruction was (implicitly) assumed to be the measure of productivity, there was no discussion of measurable learning outcomes.
  • 5. It is true that public colleges do not measure learning outcomes. Neither does anyone else. U.S. universities resist this kind of accountability in every way they can think of. Since 1985, when the modern assessment movement gained traction, higher education can only be said to have been temporizing, getting ready to get ready through endless committees that go nowhere.
  • Most institutions continue to invoke apodictic notions of quality and refuse to define quality in modern terms (suitability to purpose; quality for whom and for what purpose) or to address the issue of value added, where career schools and community colleges will generally lead. At this time, there is virtually no institution-wide assessment system in place that would pass muster in a 501 measurement science course.
  • 6. Yes, public institutions need restructuring to make them more accountable and productive. Our independent colleges and universities need the same kind of restructuring and the agenda is rightfully one of public interest. The common perception that taxpayers do not support our private institutions is false. (Robert W. Tucker, President, InterEd)
    Tucker's note in the comments again suggests the challenge and opportunity of the WSU model.
Gary Brown

Higher Education: Assessment & Process Improvement Group News | LinkedIn - 1 views

    lots about program effectiveness implied here, notably having good teachers in succession.
Gary Brown

Student-Survey Results: Too Useful to Keep Private - Commentary - The Chronicle of Higher Education - 0 views

  • "There are … disturbing signs that many students who do earn degrees have not actually mastered the reading, writing, and thinking skills we expect of college graduates. Over the past decade, literacy among college graduates has actually declined."
  • But a major contributing factor is that the customers of higher education—students, parents, and employers—have few true measures of quality on which to rely. Is a Harvard education really better than that from a typical flagship state university, or does Harvard just benefit from being able to enroll better students? Without measures of value added in higher education, that's difficult, if not impossible, to determine.
  • Yet what is remarkable about the survey is that participating institutions generally do not release the results so that parents and students can compare their performance with those of other colleges.
  • Requiring all colleges to make such information public would pressure them to improve their undergraduate teaching
  • It would empower prospective students and their parents with solid information about colleges' educational quality and help them make better choices. To make that happen, the federal government should simply require that any institution receiving federal support—Pell Grants, student loans, National Science Foundation grants, and so on—make its results public on the Web site of the National Survey of Student Engagement in an open, interactive way.
  • Indeed, a growing number of organizations in our economy now have to live with customer-performance measures. It's time higher education did the same.
    The whites of the eyes: the perceptions and assumptions behind the push for accountability. I note in particular the notion that higher education will understand comparisons of the NSSE as an incentive to improve.