
Jeff Bernstein

Doug Harris Crunches Critics in Value-Added Smackdown - Rick Hess Straight Up - Educati... - 0 views

  •  
    The University of Wisconsin's Doug Harris has torched a couple of would-be critics for their inane, inept, and unfair review of his book Value-Added Measures in Education (Harvard Education Press 2011). For those who appreciate such things, his response is a classic dismemberment of the Education Review take penned by Arizona State University's Clarin Collins and Audrey Amrein-Beardsley. For everyone else, it's important because it sheds light on why it's so damn hard to sensibly discuss issues like value-added accountability. (Collins and Amrein-Beardsley also penned a re-rebuttal, which is fun primarily because it reads like a note from the kid you caught spray-painting your Prius who tells you, "It wasn't me, it wasn't spray paint, I was actually washing your car, and I was only trying to help hide that dent.")
Jeff Bernstein

Linda Darling-Hammond: Value-Added Evaluation Hurts Teaching - 0 views

  •  
    As student learning is the primary goal of teaching, it seems like common sense to evaluate teachers based on how much their students gain on state standardized tests. Indeed, many states have adopted this idea in response to federal incentives tied to much-needed funding. However, previous experience is not promising. Recently evaluated experiments in Tennessee and New York did not improve achievement when teachers were evaluated and rewarded based on student test scores. In the District of Columbia, contrary to expectations, reading scores on national tests dropped and achievement gaps grew after a new test-based teacher-evaluation system was installed. In Portugal, a study of test-based merit pay attributed score declines to the negative effects of teacher competition, leading to less collaboration and sharing of knowledge. I was once bullish on the idea of using "value-added methods" for assessing teacher effectiveness. I have since realized that these measures, while valuable for large-scale studies, are seriously flawed for evaluating individual teachers, and that rigorous, ongoing assessment by teaching experts serves everyone better. Indeed, reviews by the National Research Council, the RAND Corp., and the Educational Testing Service have all concluded that value-added estimates of teacher effectiveness should not be used to make high-stakes decisions about teachers. Why?
Jeff Bernstein

Shanker Blog » Value-Added In Teacher Evaluations: Built To Fail - 0 views

  •  
    "With all the controversy and acrimonious debate surrounding the use of value-added models in teacher evaluation, few seem to be paying much attention to the implementation details in those states and districts that are already moving ahead. This is unfortunate, because most new evaluation systems that use value-added estimates are literally being designed to fail."
Jeff Bernstein

Valuing Teachers : Education Next - 0 views

  •  
    "Many of us have had at some point in our lives a wonderful teacher, one whose value, in retrospect, seems inestimable. We do not pretend here to know how to calculate the life-transforming effects that such teachers can have with particular students. But we can calculate more prosaic economic values related to effective teaching, by drawing on a research literature that provides surprisingly precise estimates of the impact of student achievement levels on their lifetime earnings and by combining this with estimated impacts of more-effective teachers on student achievement."
Jeff Bernstein

Top School Jobs: What HR Should Know About Value-Added Data - 2 views

  •  
    As a growing number of states move toward legislation that would institute teacher merit pay, the debate around whether and how to use student test scores in high-stakes staffing decisions has become even more hotly contested. The majority of merit pay initiatives, such as those recently proposed in Ohio and Florida, rely to some extent on value-added estimation, the method of measuring a teacher's impact by tracking student growth on test scores from year to year. We recently exchanged e-mails with Steven Glazerman, a Senior Fellow at the policy research group Mathematica. Glazerman specializes in teacher recruitment, performance management, professional development, and compensation. According to Glazerman, a strong understanding of the constructive uses and limitations of value-added data can prove beneficial for district-level human resources practitioners.
Jeff Bernstein

Shanker Blog » The Stability Of Ohio's School Value-Added Ratings And Why It ... - 0 views

  •  
    I have discussed before how most testing data released to the public are cross-sectional, and how comparing them between years entails the comparison of two different groups of students. One way to address these issues is to calculate and release school- and district-level value-added scores. Value-added estimates are not only longitudinal (i.e., they follow students over time), but the models go a long way toward accounting for differences in the characteristics of students between schools and districts. Put simply, these models calculate "expectations" for student test score gains based on student (and sometimes school) characteristics, which are then used to gauge whether schools' students did better or worse than expected.
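    The "expectations" logic described in this excerpt can be sketched in a few lines. This is a minimal illustration with simulated data; the variable names, coefficients, and two-characteristic model are assumptions for the sketch, not Ohio's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical students

# Hypothetical student characteristics.
prior_score = rng.normal(50, 10, n)
low_income = rng.integers(0, 2, n)
school = rng.integers(0, 10, n)  # hypothetical school assignment

# Simulated current-year scores: depend on characteristics plus noise.
score = prior_score + 5 - 2 * low_income + rng.normal(0, 3, n)

# Fit "expected" scores from characteristics via least squares.
X = np.column_stack([np.ones(n), prior_score, low_income])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
expected = X @ beta

# A school's value-added estimate is the mean residual for its students:
# positive means its students beat expectations, negative means they fell short.
value_added = {s: float(np.mean((score - expected)[school == s]))
               for s in range(10)}
```

    The key point of the excerpt survives in the sketch: "value added" is just the gap between actual and model-predicted performance, so everything hinges on what the expectation model controls for.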
Jeff Bernstein

D.C. Update: Allegedly False Test Scores Used for Value-Added Calculations - Teaching N... - 0 views

  •  
    Student test scores from 100 D.C. public schools still under investigation for cheating were used in value-added calculations that were incorporated into some teachers' evaluations this year, according to DCPS spokesperson Fred Lewis. More than 200 D.C. teachers were terminated last week on the basis of their evaluation results. Only when "instances of cheating were confirmed" were affected student scores removed from the value-added model, Lewis said.
Jeff Bernstein

Value-Added Models and the Measurement of Teacher Productivity - 1 views

  •  
    Research on teacher productivity, and recently developed accountability systems for teachers, rely on value-added models to estimate the impact of teachers on student performance. The authors test many of the central assumptions required to derive value-added models from an underlying structural cumulative achievement model and reject nearly all of them. Moreover, they find that teacher value-added and other key parameter estimates are highly sensitive to model specification. While estimates from commonly employed value-added models cannot be interpreted as causal teacher effects, employing richer models that impose fewer restrictions may reduce the bias in estimates of teacher productivity.
Jeff Bernstein

Chetty, et al. on the American Statistical Association's Recent Position Statement on V... - 0 views

  •  
    "Over the last decade, teacher evaluation based on value-added models (VAMs) has become central to the public debate over education policy. In this commentary, we critique and deconstruct the arguments proposed by the authors of a highly publicized study that linked teacher value-added models to students' long-run outcomes, Chetty et al. (2014, forthcoming), in their response to the American Statistical Association statement on VAMs. We draw on recent academic literature to support our counter-arguments along main points of contention: causality of VAM estimates, transparency of VAMs, effect of non-random sorting of students on VAM estimates and sensitivity of VAMs to model specification."
Jeff Bernstein

Stephen Caldas: Value-Added: The Emperor with No Clothes - 0 views

  •  
    "The trend to use value-added models to rate teachers and principals in New York is psychometrically indefensible."
Jeff Bernstein

Evaluating Teachers and Schools Using Student Growth Models - 0 views

  •  
    Interest in Student Growth Modeling (SGM) and Value-Added Modeling (VAM) arises from educators concerned with measuring the effectiveness of teaching and other school activities through changes in student performance, as a companion to, and perhaps even an alternative to, status measures. Several formal statistical models have been proposed for year-to-year growth, and these fall into at least three clusters: simple change (e.g., differences on a vertical scale), residualized change (e.g., simple linear or quantile regression techniques), and value tables (varying salience of different achievement-level outcomes across two years). Several of these methods have been implemented by states and districts. This paper reviews relevant literature and reports results of a data-based comparison of six basic SGM models that may permit aggregating across teachers or schools to provide evaluative information. Our investigation raises some issues that may compromise current efforts to implement VAM in teacher and school evaluations and makes suggestions for both practice and research based on the results.
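    The first two model clusters named in this abstract differ only in how "growth" is defined. A minimal side-by-side sketch with simulated scores (the numbers are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
year1 = rng.normal(300, 25, 200)  # hypothetical vertical-scale scores
year2 = 0.9 * year1 + 40 + rng.normal(0, 10, 200)

# Simple change: the raw difference on a common (vertical) scale.
simple_change = year2 - year1

# Residualized change: regress year-2 scores on year-1 scores and take
# each student's deviation from the predicted score.
X = np.column_stack([np.ones_like(year1), year1])
beta, *_ = np.linalg.lstsq(X, year2, rcond=None)
residualized_change = year2 - X @ beta
```

    Because the fitted slope is typically below 1 (regression toward the mean), the two definitions can rank the same student quite differently, which is one reason model choice matters for the aggregated teacher- or school-level results the paper compares.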
Jeff Bernstein

Teacher evaluation: What it should look like - The Answer Sheet - The Washington Post - 0 views

  •  
    A new report from Stanford University researcher Linda Darling-Hammond details what the components of a comprehensive teacher evaluation system should look like at a time when such assessments have become one of the most contentious debates in education today. Much of the controversy swirls around the growing trend of using students' standardized test scores over time to help assess teacher effectiveness. This "value-added" method of assessment - which involves the use of complicated formulas that supposedly evaluate how much "value" a teacher adds to a student's achievement - is considered by many experts to be unreliable and invalid, though school reformers have glommed onto it with great zeal.
Jeff Bernstein

Scapegoating Teachers » Counterpunch - 0 views

  •  
    Unlike the Texas miracle, the Harvard-Columbia revelations are not based on fraudulent numbers. But what is deeply problematic is the spin that the authors give to their findings. The study examined the incomes of adults who, as children in the 4th through the 8th grades, had teachers of different "Value Added" scores, with Value Added defined as improvement in the scores of students on standardized tests. The study claims that the individuals who had excellent teachers as children have higher incomes as adults; we will examine the validity of this claim below. But first we must ask what these higher incomes mean. When they were children, these individuals were poor. What the H-C authors fail to mention is that even when they had excellent teachers as children and therefore have higher incomes as adults, these individuals, despite their higher incomes, remain poor.
Jeff Bernstein

Gregory Michie: What Value Is Added by Publicly Shaming Teachers? - 0 views

  •  
    Just when you think the climate of disrespect for teachers can't get any worse, it does. This past weekend, the Chicago Tribune's editorial board urged Illinois parents to demand that the state emulate New York City (and Los Angeles) by making individual teachers' "value-added" ratings available for public scrutiny.
Jeff Bernstein

Review of Gathering Feedback for Teaching: Combining High-Quality Observation with Stud... - 0 views

  •  
    This second report from the Measures of Effective Teaching (MET) project offers ground-breaking descriptive information regarding the use of classroom observation instruments to measure teacher performance. It finds that observation scores have somewhat low reliabilities and are weakly though positively related to value-added measures. Combining multiple observations can enhance reliabilities, and combining observation scores with student evaluations and test-score information can increase their ability to predict future teacher value-added. By highlighting the variability of classroom observation measures, the report makes an important contribution to research and provides a basis for the further development of observation rubrics as evaluation tools. Although the report raises concerns regarding the validity of classroom observation measures, we question the emphasis on validating observations with test-score gains. Observation scores may pick up different aspects of teacher quality than test-based measures, and it is possible that neither type of measure used in isolation captures a teacher's contribution to all the useful skills students learn. From this standpoint, the authors' conclusion that multiple measures of teacher effectiveness are needed appears justifiable. Unfortunately, however, the design calls for random assignment of students to teachers in the final year of data collection, but the classroom observations were apparently conducted prior to randomization, missing a valuable opportunity to assess correlations across measures under relatively bias-free conditions.
Jeff Bernstein

Researchers blast Chicago teacher evaluation reform - The Answer Sheet - The Washington... - 0 views

  •  
    Scores of professors and researchers from 16 universities throughout the Chicago metropolitan area have signed an open letter to the city's mayor, Rahm Emanuel, and Chicago school officials warning against implementing a teacher evaluation system that is based on standardized test scores. This is the latest protest against "value-added" teacher evaluation models that purport to measure how much "value" a teacher adds to a student's academic progress by using a complicated formula involving a standardized test score. Researchers have repeatedly warned against using these methods, but school reformers have been doing it in state after state anyway. A petition in New York State by principals and others against a test-based evaluation system there has been gaining ground.
Jeff Bernstein

Firing teachers based on bad (VAM) versus wrong (SGP) measures of effectivene... - 0 views

  •  
    In the near future my article with Preston Green and Joseph Oluwole on legal concerns regarding the use of value-added modeling for making high-stakes decisions will come out in the BYU Education and Law Journal. In that article, we expand on various arguments I first laid out in this blog post about how use of these noisy and potentially biased metrics is likely to lead to a flood of litigation challenging teacher dismissals. In short, as I have discussed on numerous occasions on this blog, value-added models attempt to estimate the effect of the individual teacher on growth in measured student outcomes. But these models tend to produce very imprecise estimates with very large error ranges, jumping around a lot from year to year. Further, individual teacher effectiveness estimates are highly susceptible to even subtle changes to model variables. And failure to address key omitted variables can lead to systemic model biases, which may even lead to racially disparate teacher dismissals (see here and, for follow-up, here).
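    The year-to-year instability this excerpt describes is easy to reproduce in simulation. All the numbers below are illustrative assumptions (a stable "true" effect swamped by year-specific noise), not estimates from any real district:

```python
import numpy as np

rng = np.random.default_rng(42)
n_teachers = 1000

# Assume each teacher has a stable true effect, but each year's estimate
# adds independent noise twice as large as the signal.
true_effect = rng.normal(0, 1, n_teachers)
year1_estimate = true_effect + rng.normal(0, 2, n_teachers)
year2_estimate = true_effect + rng.normal(0, 2, n_teachers)

# Correlation between consecutive years' ratings is far below 1.
r = np.corrcoef(year1_estimate, year2_estimate)[0, 1]

# Fraction of bottom-quintile teachers in year 1 who remain there in year 2.
q1 = year1_estimate < np.quantile(year1_estimate, 0.2)
q2 = year2_estimate < np.quantile(year2_estimate, 0.2)
stay_rate = float(np.mean(q2[q1]))
```

    Under these assumed noise levels, most teachers rated in the bottom quintile one year escape it the next, which is the core of the legal concern: a dismissal decision keyed to a single year's rating rests largely on noise.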
Jeff Bernstein

Portability of Teacher Effectiveness Across School Settings - 0 views

  •  
    Redistributing highly effective teachers from low- to high-need schools is an education policy tool that is at the center of several major current policy initiatives. The underlying assumption is that teacher productivity is portable across different school settings. Using elementary and secondary school data from North Carolina and Florida, this paper investigates the validity of this assumption. Among teachers who switched between schools with substantially different poverty levels or academic performance levels, we find no change in those teachers' measured effectiveness before and after a school change. This pattern holds regardless of the direction of the school change. We also find that high-performing teachers' value-added dropped and low-performing teachers' value-added gained in the post-move years, primarily as a result of regression to the within-teacher mean and unrelated to school setting changes. Despite this shrinkage, high-performing teachers in the pre-move years still outperformed low-performing teachers after moving to schools with different settings.
Jeff Bernstein

John Thompson: The Center for American Progress Pushes the Good, Bad and Ugly in Teache... - 0 views

  •  
    The Center For American Progress has published another report justifying the firing of teachers today, based on statistical models that may someday become valid. "Designing High Quality Evaluation Systems," by John Tyler, recounts the standard reasons why educators do not trust high-stakes test-driven algorithms, and even contributes a couple of new insights into problems that are unique to high school test scores. An urban teacher reading Tyler's evidence would likely conclude that he has written an ironclad indictment of value-added models for high-stakes purposes. But, as is usually true of CAP's researchers, he concludes that the work of economists in improving value-added models is so impressive that education will benefit from their experiments if educators don't blow it.
Jeff Bernstein

Education Week: When Test Scores Become a Commodity - 0 views

  •  
    The recent spate of cheating scandals in cities like Atlanta, Chicago, Los Angeles, and Washington presents an interesting conundrum. Those opposed to education reform schemes tied to the evaluation of student test scores and teacher compensation, or "value added" evaluation, claim that the teachers and administrators who were caught cheating were the victims, compelled to cheat out of fear for their livelihoods. On the other hand, value-added advocates solemnly pronounce that there is no excuse for cheating and that, moreover, cheating teachers and administrators provide the very evidence that reform is necessary. Both positions are valid. Can we work our way out?