Education Links: Group items tagged value-added

Jeff Bernstein

What To Think About That Big New Teacher Value-Added Study - 1 views

    Nicholas Kristof wrote yesterday about the "landmark" new teacher value-added study from Chetty, Friedman and Rockoff. It's worth being clear about why the study has garnered so much attention. It's not because it shows that teachers matter. Everyone knew or believed that already. It's because it shows that teachers vary in how much they matter. And, for the first time, it takes the value-added debate out of the standardized testing box.
Jeff Bernstein

Leading mathematician debunks 'value-added' - The Answer Sheet - The Washington Post - 0 views

    But the most common misuse of mathematics is simpler, more pervasive, and (alas) more insidious: mathematics employed as a rhetorical weapon - an intellectual credential to convince the public that an idea or a process is "objective" and hence better than other competing ideas or processes. This is mathematical intimidation. It is especially persuasive because so many people are awed by mathematics and yet do not understand it - a dangerous combination. The latest instance of the phenomenon is value-added modeling (VAM), used to interpret test data. Value-added modeling pops up everywhere today, from newspapers to television to political campaigns. VAM is heavily promoted with unbridled and uncritical enthusiasm by the press, by politicians, and even by (some) educational experts, and it is touted as the modern, "scientific" way to measure educational success in everything from charter schools to individual teachers.
Jeff Bernstein

Hechinger Report | How New York City's value-added model compares to what other distric... - 0 views

    The Hechinger Report has spent the past 14 months reporting on teacher-effectiveness reforms around the country, and has examined value-added models in several states. New York City's formula, which was designed by researchers at the University of Wisconsin-Madison, has elements that make it more accurate than other models in some respects, but it also has elements that experts say may increase errors - a major concern for teachers whose job security is tied to their value-added ratings.
Jeff Bernstein

Shanker Blog » Revisiting The "5-10 Percent Solution" - 0 views

    In a post over a year ago, I discussed the common argument that dismissing the "bottom 5-10 percent" of teachers would increase U.S. test scores to the level of high-performing nations. This argument is based on a calculation by economist Eric Hanushek, which suggests that dismissing the lowest-scoring teachers based on their math value-added scores would, over a period of around ten years (when the first cohort of students would have gone through the schooling system without the "bottom" teachers), increase U.S. math scores dramatically - perhaps to the level of high-performing nations such as Canada or Finland.* This argument is, to say the least, controversial, and it invokes the full spectrum of reactions. In my opinion, it's best seen as a policy-relevant illustration of the wide variation in test-based teacher effects, one that might suggest the potential of a course of action but can't really tell us how it will turn out in practice. To highlight this point, I want to take a look at one issue mentioned in that previous post - that is, how the instability of value-added scores over time (which Hanushek's simulation doesn't address directly) might affect the projected benefits of this type of intervention, and how this in turn might modulate one's view of the huge projected benefits. One (admittedly crude) way to do this is to use the newly-released New York City value-added data, and look at 2010 outcomes for the "bottom 10 percent" of math teachers in 2009.
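[Editor's note: the "admittedly crude" check described above is easy to express in code. Below is a minimal sketch, assuming a hypothetical flat file of the released New York City scores with teacher_id, year, and va_score columns (the actual release is formatted differently); it asks where 2009's bottom decile landed in 2010.]

```python
import pandas as pd

# Hypothetical flat file of the NYC value-added release; the column
# names (teacher_id, year, va_score) are illustrative assumptions.
df = pd.read_csv("nyc_va_scores.csv")

va_2009 = df[df["year"] == 2009].set_index("teacher_id")["va_score"]
va_2010 = df[df["year"] == 2010].set_index("teacher_id")["va_score"]

# Teachers in the bottom 10 percent of 2009 value-added.
bottom_2009 = va_2009[va_2009 <= va_2009.quantile(0.10)].index

# Where did those same teachers land a year later?
followup = va_2010.reindex(bottom_2009).dropna()
still_bottom = (followup <= va_2010.quantile(0.10)).mean()
print(f"2009 bottom decile still in 2010 bottom decile: {still_bottom:.1%}")
print(f"median 2010 percentile of 2009 bottom decile: "
      f"{va_2010.rank(pct=True).reindex(bottom_2009).median():.1%}")
```

If scores were perfectly stable, the first figure would be 100 percent; instability pushes it down toward the 10 percent expected under pure noise.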
Jeff Bernstein

Hechinger Report | Should value-added teacher ratings be adjusted for poverty? - 0 views

    In Washington, D.C., one of the first places in the country to use value-added teacher ratings to fire teachers, teacher-union president Nathan Saunders likes to point to the following statistic as proof that the ratings are flawed: Ward 8, one of the poorest areas of the city, has only 5 percent of the teachers defined as effective under the new evaluation system known as IMPACT, but more than a quarter of the ineffective ones. Ward 3, encompassing some of the city's more affluent neighborhoods, has nearly a quarter of the best teachers, but only 8 percent of the worst. The discrepancy highlights an ongoing debate about the value-added test scores that an increasing number of states - soon to include Florida - are using to evaluate teachers. Are the best, most experienced D.C. teachers concentrated in the wealthiest schools, while the worst are concentrated in the poorest schools? Or does the statistical model ignore the possibility that it's more difficult to teach a room full of impoverished children?
Jeff Bernstein

What is the Value in a High Value-Added Teacher? « The Core Knowledge Blog - 0 views

    While the studies of economists may add to the discussion about what makes teachers valuable in our lives, I believe that if we reduce teachers' value to dollars and cents, we run the risk of becoming, in Oscar Wilde's phrase, "the kind of people who know the price of everything, but the value of nothing."
Jeff Bernstein

A Legal Argument Against The Use of VAMs in Teacher Evaluation - 0 views

    "Value Added Models (VAMs) are irresistible. Purportedly they can ascertain a teacher's effectiveness by predicting the impact of a teacher on a student's test scores. Because test scores are the sine qua non of our education system, VAMs are alluring. They link a teacher directly to the most emphasized output in education today. What more can we want from an evaluative tool, especially in our pursuit of improving schools in the name of social justice? Taking this a step further, many see VAMs as the panacea for improving teacher quality. The theory seems straightforward. VAMs provide statistical predictions regarding a teacher's impact that can be compared to actual results. If a teacher cannot improve a student's test scores, then they are ineffective. If they are ineffective, they can (and should) be dismissed (See, for instance, Hanushek, 2010). Consequently, state legislatures have rushed to codify VAMs into their statutes and regulations governing teacher evaluation. (See, for example, Florida General Laws, 2014). That has been a mistake. This paper argues for a complete reversal in policy course. To wit, state regulations that connect a teacher's continued employment to VAMs should be overhauled to eliminate the connection between evaluation and student test scores. The reasoning is largely legal, rather than educational. In sum, the legal costs of any use of VAMs in a performance-based termination far outweigh any value they may add. These risks are directly a function of the well-documented statistical flaws associated with VAMs (See, for example, Rothstein, 2010). The "value added" of VAMs in supporting a termination is limited, if it exists at all."
Jeff Bernstein

Gov. Andrew Cuomo and baloney - The Washington Post - 0 views

    "New York Gov. Andrew Cuomo's school reform proposals have infuriated educators across the state. Award-winning Principal Carol Burris of South Side High School is one of them, and in this post she explains why. Burris, who has written frequently for this blog, was named New York's 2013 High School Principal of the Year by the School Administrators Association of New York and the National Association of Secondary School Principals, and in 2010 was tapped as New York State Outstanding Educator by the School Administrators Association of New York State. Burris has been exposing the botched school reform program in New York for years on this blog. Her most recent post was "Principal: 'There comes a time when rules must be broken…That time is now.'" (In this post, Burris refers to "value-added" scores, which refer to value-added measurement (VAM), which purports to determine the "value" a teacher brings to student learning by plopping test scores into complicated formulas that can supposedly strip out all other factors, including the conditions in which a student lives.)"
Jeff Bernstein

Linda Darling-Hammond and Edward Haertel: 'Value-added' teacher evaluations not reliabl... - 0 views

    "It's becoming a familiar story: Great teachers get low scores from "value-added" teacher evaluation models. Newspapers across the country have published accounts of extraordinary teachers whose evaluations, based on their students' state test scores, seem completely out of sync with the reality of their practice. Los Angeles teachers have figured prominently in these reports. Researchers are not surprised by these stories, because dozens of studies have documented the serious flaws in these ratings, which are increasingly used to evaluate teachers' effectiveness. The ratings are based on value-added models such as the L.A. school district's Academic Growth over Time system, which uses complex statistical metrics to try to sort out the effects of student characteristics (such as socioeconomic status) from the effects of teachers on test scores. A study we conducted at Stanford University showed what these teachers are experiencing."
Jeff Bernstein

Shanker Blog » Value-Added Versus Observations, Part One: Reliability - 0 views

    Although most new teacher evaluations are still in various phases of pre-implementation, it's safe to say that classroom observations and/or value-added (VA) scores will be the most heavily-weighted components toward teachers' final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many - perhaps most - teachers strongly prefer the former (observations, especially peer observations) over the latter (VA). One of the most common arguments against VA is that the scores are error-prone and unstable over time - i.e., that they are unreliable. And it's true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than "real" performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class. These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe "true" teacher performance, it's tough to say which is "better" or "worse," despite the certainty with which both "sides" often present their respective cases. And, the fact that both entail some level of measurement error doesn't by itself speak to whether they should be part of evaluations.*
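[Editor's note: the attenuation described above, in which measurement error alone drives down the year-to-year correlation even when underlying performance is perfectly stable, can be illustrated with a toy simulation. This is an illustration of the statistical point, not the actual VA estimator.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 10_000

# Toy model: each teacher has a fixed "true" effect, and each year's
# score adds independent measurement error of equal variance, which
# implies a score reliability of 0.5.
true_effect = rng.normal(0.0, 1.0, n_teachers)
score_y1 = true_effect + rng.normal(0.0, 1.0, n_teachers)
score_y2 = true_effect + rng.normal(0.0, 1.0, n_teachers)

# True performance never changes here, yet the observed year-to-year
# correlation is attenuated toward the reliability (about 0.5).
print(np.corrcoef(score_y1, score_y2)[0, 1])
```

The point is simply that an unstable score series is consistent with completely stable underlying performance, which is why instability alone cannot settle the reliability argument for either measure.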
Jeff Bernstein

Shanker Blog » Value-Added Versus Observations, Part Two: Validity - 0 views

    In a previous post, I compared value-added (VA) and classroom observations in terms of reliability - the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren't useful unless they are valid - that is, unless they're measuring what we want them to measure. Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional - in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they're being used. Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.
Jeff Bernstein

The SAS Education Value-Added Assessment System (SAS® EVAAS®) in the Houston ... - 0 views

    The SAS Educational Value-Added Assessment System (SAS® EVAAS®) is the most widely used value-added system in the country. It is also self-proclaimed as "the most robust and reliable" system available, with its greatest claimed benefit being to help educators improve their teaching practices. This study critically examined the effects of SAS® EVAAS® as experienced by teachers in one of the largest, high-needs urban school districts in the nation - the Houston Independent School District (HISD). Using a multiple-methods approach, this study critically analyzed retrospective quantitative and qualitative data to better understand the evidence collected from four teachers whose contracts were not renewed in the summer of 2011, in part because of their low SAS® EVAAS® scores. This study also suggests some intended and unintended effects that seem to be occurring as a result of SAS® EVAAS® implementation in HISD. In addition to issues with reliability, bias, teacher attribution, and validity, high-stakes use of SAS® EVAAS® in this district seems to be exacerbating unintended effects.
Jeff Bernstein

Shanker Blog » The Weighting Game - 0 views

    A while back, I noted that states and districts should exercise caution in assigning weights (importance) to the components of their teacher evaluation systems before they know what the other components will be. For example, most states that have mandated new evaluation systems have specified that growth model estimates count for a certain proportion (usually 40-50 percent) of teachers' final scores (at least those in tested grades/subjects), but it's critical to note that the actual importance of these components will depend in no small part on what else is included in the total evaluation, and how it's incorporated into the system. In slightly technical terms, this distinction is between nominal weights (the percentage assigned) and effective weights (the percentage that actually ends up being the case). Consider an extreme hypothetical example - let's say a district implements an evaluation system in which half the final score is value-added and half is observations. But let's also say that every teacher gets the same observation score. In this case, even though the assigned (nominal) weight for value-added is 50 percent, the actual importance (effective weight) will be 100 percent, since every teacher receives the same observation score, and so all the variation between teachers' final scores will be determined by the value-added component.
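[Editor's note: the nominal-versus-effective distinction in the hypothetical above can be made concrete in a few lines. One simple way to gauge effective weight, treating the components as uncorrelated, is each component's share of the variance in final scores; the numbers below are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # hypothetical district with 1,000 teachers

w_va, w_obs = 0.5, 0.5          # nominal weights: 50/50
va = rng.normal(50.0, 10.0, n)  # value-added varies across teachers
obs = np.full(n, 75.0)          # extreme case: identical observation scores

final = w_va * va + w_obs * obs

# Effective weight as each component's share of the variance in final
# scores: with zero observation variance, value-added accounts for all
# of the variation between teachers despite its 50 percent nominal weight.
var_va, var_obs = np.var(w_va * va), np.var(w_obs * obs)
print(f"effective VA weight:  {var_va / (var_va + var_obs):.0%}")   # 100%
print(f"effective obs weight: {var_obs / (var_va + var_obs):.0%}")  # 0%
```

In realistic systems observation scores do vary, but typically far less than value-added scores, so the same mechanism quietly inflates the effective weight of the value-added component.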
Jeff Bernstein

Shanker Blog » If Newspapers Are Going To Publish Teachers' Value-Added Score... - 0 views

    I don't think there's any way to avoid publication, given that about a dozen newspapers will receive the data, and it's unlikely that every one of them will decline to do so. So, in addition to expressing my firm opposition, I would offer what I consider to be an absolutely necessary suggestion: If newspapers are going to publish the estimates, they need to publish the error margins too. Value-added and other growth model scores are statistical estimates, and must be interpreted as such. Imagine that a political poll found that a politician's approval rate was 40 percent, but, due to an unusually small sample of respondents, the error margin on this estimate was plus or minus 20 percentage points. Based on these results, the approval rate might actually be abysmal (20 percent), or it might be pretty good (60 percent). Should a newspaper publish the 40 percent result without mentioning that level of imprecision? Of course not. In fact, they should refuse to publish the result at all. Value-added estimates are no different.
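[Editor's note: the polling analogy can be made concrete with the standard margin-of-error formula; the sample size below is implied by the numbers in the hypothetical poll, not taken from any actual survey.]

```python
import math

p, moe, z = 0.40, 0.20, 1.96  # estimate, margin of error, 95% z-value

# Sample size implied by a +/- 20-point margin on a 40 percent estimate.
n = math.ceil(z**2 * p * (1 - p) / moe**2)
print(f"implied sample size: about {n} respondents")    # about 24

# Reporting the estimate responsibly means reporting the interval.
print(f"95% interval: {p - moe:.0%} to {p + moe:.0%}")  # 20% to 60%
```

A single-year value-added score estimated from one class of 20-30 students sits in similar small-sample territory, which is why the error margins matter.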
Jeff Bernstein

Value-Added Measures in Education: The Best of the Alternatives is Simply Not Good Enough - 0 views

    On September 8, 2011, Teachers College Record published a book review of Douglas N. Harris's recent book Value-Added Measures in Education. In this commentary, the author takes issue not necessarily with the book's What Every Educator Needs to Know content, but with Harris's overall endorsement of value-added and with his and others' imprudent adoption of some highly complex assumptions.
Jeff Bernstein

Methods for Accounting for Co-Teaching in Value-Added Models - 0 views

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching. The Partial Credit Method apportions responsibility between teachers according to the fraction of the year a student spent with each. This method, however, has practical problems limiting its usefulness. As alternatives, we propose two methods that can be more stably estimated based on the premise that co-teachers share joint responsibility for the achievement gains of their shared students. The Teacher Team Method uses a single record for each student and a set of variables for each teacher or group of teachers with shared students, whereas the Full Roster Method contains a single variable for each teacher, but multiple records for shared students. We explore the properties of these two alternative methods and then compare the estimates generated using student achievement and teacher roster data from a large urban school district. We find that both methods produce very similar point estimates of teacher value added. However, the Full Roster Method better maintains the links between teachers and students and can be more robustly implemented in practice. 
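[Editor's note: to see how the two alternative designs differ, here is a minimal pandas sketch; the toy roster and column names are illustrative, not taken from the paper.]

```python
import pandas as pd

# Toy roster: student s1 is co-taught by teachers A and B;
# student s2 is taught by A alone. The gain is repeated per link.
roster = pd.DataFrame({
    "student": ["s1", "s1", "s2"],
    "teacher": ["A", "B", "A"],
    "gain":    [5.0, 5.0, 3.0],
})

# Teacher Team Method: one record per student, one indicator per
# teacher *team*, so the pair "A+B" is treated as its own unit.
team = (roster.groupby("student")
              .agg(team=("teacher", lambda t: "+".join(sorted(t))),
                   gain=("gain", "first"))
              .reset_index())
team_design = pd.get_dummies(team, columns=["team"])

# Full Roster Method: one indicator per individual teacher, but a
# shared student contributes one record per teacher (s1 appears twice).
full_roster_design = pd.get_dummies(roster, columns=["teacher"])

print(team_design)
print(full_roster_design)
```

The trade-off the abstract describes falls out of the layouts: the team design keeps one record per student but multiplies units as team combinations proliferate, while the full-roster design keeps one column per teacher at the cost of duplicated student records.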
Jeff Bernstein

Doug Harris Crunches Critics in Value-Added Smackdown - Rick Hess Straight Up - Educati... - 0 views

    The University of Wisconsin's Doug Harris has torched a couple of would-be critics for their inane, inept, and unfair review of his book Value-Added Measures in Education (Harvard Education Press 2011). For those who appreciate such things, his response is a classic dismemberment of the Education Review take penned by Arizona State University's Clarin Collins and Audrey Amrein-Beardsley. For everyone else, it's important because it sheds light on why it's so damn hard to sensibly discuss issues like value-added accountability. (Collins and Amrein-Beardsley also penned a re-rebuttal, which is fun primarily because it reads like a note from the kid you caught spray-painting your Prius who tells you, "It wasn't me, it wasn't spray paint, I was actually washing your car, and I was only trying to help hide that dent.")
Jeff Bernstein

Linda Darling-Hammond: Value-Added Evaluation Hurts Teaching - 0 views

    As student learning is the primary goal of teaching, it seems like common sense to evaluate teachers based on how much their students gain on state standardized tests. Indeed, many states have adopted this idea in response to federal incentives tied to much-needed funding. However, previous experience is not promising. Recently evaluated experiments in Tennessee and New York did not improve achievement when teachers were evaluated and rewarded based on student test scores. In the District of Columbia, contrary to expectations, reading scores on national tests dropped and achievement gaps grew after a new test-based teacher-evaluation system was installed. In Portugal, a study of test-based merit pay attributed score declines to the negative effects of teacher competition, leading to less collaboration and sharing of knowledge. I was once bullish on the idea of using "value-added methods" for assessing teacher effectiveness. I have since realized that these measures, while valuable for large-scale studies, are seriously flawed for evaluating individual teachers, and that rigorous, ongoing assessment by teaching experts serves everyone better. Indeed, reviews by the National Research Council, the RAND Corp., and the Educational Testing Service have all concluded that value-added estimates of teacher effectiveness should not be used to make high-stakes decisions about teachers. Why?
Jeff Bernstein

Shanker Blog » Value-Added In Teacher Evaluations: Built To Fail - 0 views

    "With all the controversy and acrimonious debate surrounding the use of value-added models in teacher evaluation, few seem to be paying much attention to the implementation details in those states and districts that are already moving ahead. This is unfortunate, because most new evaluation systems that use value-added estimates are literally being designed to fail."
Jeff Bernstein

Top School Jobs: What HR Should Know About Value-Added Data - 2 views

    As a growing number of states move toward legislation that would institute teacher merit pay, the debate around whether and how to use student test scores in high-stakes staffing decisions has become even more hotly contested. The majority of merit pay initiatives, such as those recently proposed in Ohio and Florida, rely to some extent on value-added estimation, the method of measuring a teacher's impact by tracking student growth on test scores from year to year. We recently exchanged e-mails with Steven Glazerman, a Senior Fellow at the policy research group Mathematica. Glazerman specializes in teacher recruitment, performance management, professional development, and compensation. According to Glazerman, a strong understanding of the constructive uses and limitations of value-added data can prove beneficial for district-level human resources practitioners.