
Group items tagged value-added


Jeff Bernstein

Review of Learning About Teaching | National Education Policy Center - 0 views

  •  
    The Bill & Melinda Gates Foundation's "Measures of Effective Teaching" (MET) Project seeks to validate the use of a teacher's estimated "value-added" (computed from the year-on-year test score gains of her students) as a measure of teaching effectiveness. Using data from six school districts, the initial report examines correlations between student survey responses and value-added scores computed both from state tests and from higher-order tests of conceptual understanding. The study finds that the measures are related, but only modestly. The report interprets this as support for the use of value-added as the basis for teacher evaluations. This conclusion is unsupported, as the data in fact indicate that a teacher's value-added for the state test is not strongly related to her effectiveness in a broader sense. Most notably, value-added for state assessments is correlated at 0.5 or less with that for the alternative assessments, meaning that many teachers whose value-added for one test is low are in fact quite effective when judged by the other. As there is every reason to think that the problems with value-added measures apparent in the MET data would be worse in a high-stakes environment, the MET results are sobering about the value of student achievement data as a significant component of teacher evaluations.
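    The correlation claim above lends itself to a quick numerical check. The following is a minimal sketch, not drawn from the MET data, that assumes the two value-added measures are roughly bivariate normal with a correlation of 0.5 and asks how often a teacher in the bottom quartile on the state test sits above the median on the alternative test.

        # Illustrative simulation, not the MET data: two value-added measures
        # assumed to be bivariate normal with correlation 0.5.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000                      # simulated teachers
        r = 0.5                          # assumed correlation between the two measures
        state_va, alt_va = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n).T

        bottom_on_state = state_va < np.quantile(state_va, 0.25)
        above_median_alt = alt_va > np.median(alt_va)
        share = above_median_alt[bottom_on_state].mean()
        print(f"Bottom-quartile (state test) teachers who are above the median "
              f"on the alternative test: {share:.0%}")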
Jeff Bernstein

Value-Added Measures in Education: What Every Educator Needs to Know reviewed by Michae... - 0 views

  •  
    Few topics in education arouse more controversy than value-added measures. There are disagreements about how value-added scores should be calculated. There are arguments about what value-added scores tell us about schools or teachers. There are differences of opinion about how value-added data should be used. The polemic received full public attention in August 2010, when the LA Times published district teacher rankings based on individual teachers' value-added scores, custom-calculated for the newspaper by statisticians at the Rand Corporation. Union representatives were aghast, teachers were appalled, parents were intrigued, students were amused, and academic scholars were either supportive or critical. The problem was that Doug Harris's book Value-Added Measures in Education: What Every Educator Needs to Know had yet to be published, so the definitive resource for how best to assess the LA Times data was not available.
Jeff Bernstein

Measure For Measure: The Relationship Between Measures Of Instructional Practice In Mid... - 0 views

  •  
    Even as research has begun to document that teachers matter, there is less certainty about what attributes of teachers make the most difference in raising student achievement. Numerous studies have estimated the relationship between teachers' characteristics, such as work experience and academic performance, and their value-added to student achievement, but few have explored whether instructional practices predict student test score gains. In this study, we ask what classroom practices, if any, differentiate teachers with high impact on student achievement in middle school English Language Arts from those with lower impact. In so doing, the study also explores to what extent value-added measures signal differences in instructional quality. Even with the small sample used in our analysis, we find consistent evidence that high value-added teachers have a different profile of instructional practices than do low value-added teachers. Teachers in the fourth (top) quartile of value-added scores rate higher than second-quartile teachers on all 16 elements of instruction that we measured, and the differences are statistically significant for a subset of practices, including explicit strategy instruction.
Jeff Bernstein

Analyzing Released NYC Value-Added Data Part 4 | Gary Rubinstein's Blog - 0 views

  •  
    Value-added has been getting a lot of media attention lately but, unfortunately, most stories are missing the point. In Gotham Schools I read about a teacher who got a low score, but it was because her score was based on students who were not assigned to her. In The New York Times I read about three teachers who were rated in the single digits, but it was because they had high-performing students and a few of their scores went down. In The Washington Post I read about a teacher who was fired for getting a low value-added score on her IMPACT report, but it was because her students had inflated pretest scores, possibly because the teachers from the year before had cheated. Each of these stories makes it sound like there are very fixable flaws in value-added: get the student data more accurate, make some kind of curve for teachers of high-performing students, get better test security so cheating can't affect the next year's teacher's score. But the flaws in value-added go WAY beyond that, which is what I've been trying to show in my posts: not just some exceptional scenarios, but how it affects the majority of teachers.
Jeff Bernstein

The 'three great teacher' study - finally laid to rest | Gary Rubinstein's Blog - 0 views

  •  
    In today's New York Times there was a story about a research study which supposedly proved that students who had teachers with good value-added scores were more successful in life. This inspired me to complete something I have been working on, off and on, for several months: a detailed analysis of the raw data supplied in the most-quoted value-added study there is, a paper written in Dallas in 1997. This is the paper which 'proved' that students who had three effective teachers in a row got dramatically higher test scores than their unlucky peers who had three ineffective teachers in a row. I've written about it previously, much less formally, here and here. The New York Times story frustrated me, since I know that value-added does not correlate with future student income. Value-added does not correlate with teacher quality. Value-added doesn't correlate with principal evaluations. It doesn't correlate with anything, including, as I'll demonstrate in this post, itself.
Jeff Bernstein

IMPACTed Wisdom Truth? | Gary Rubinstein's Blog - 0 views

  •  
    Today, the day of the release of the New York City data, I received an email that I did not expect to come for at least a year. In D.C. the evaluation process is called IMPACT. About 500 teachers in D.C. belong to something called 'group one,' which means that they teach something that can be measured with the district's value-added formula. 50% of their evaluation is based on their IVA (individual value-added), 35% on their principal evaluation, called the TLF (teaching and learning framework), 5% on their SVA (school value-added), and the remaining 10% on their CSC (commitment to school and community). I wanted to test my theory that the value-added scores would not correlate with the principal evaluations, so I filed a Freedom of Information Act (FOIA) request with D.C. schools for the principal evaluation scores and the value-added scores of all group one teachers (without their names). I fully expected to wait a year or two and then be denied. To my surprise, it took only a few months and they provided a 500-row spreadsheet.
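    As a rough sketch of the two calculations described in that post, the weighted IMPACT composite and the correlation check run on the FOIA spreadsheet, the snippet below assumes a hypothetical CSV with columns named iva, tlf, sva, and csc; the file name and column names are illustrative, not the actual D.C. data layout. Only the weights come from the post.

        import pandas as pd

        # Hypothetical file and column names; the 50/35/5/10 weights are the ones quoted above.
        df = pd.read_csv("dc_group_one_scores.csv")      # columns: iva, tlf, sva, csc
        weights = {"iva": 0.50, "tlf": 0.35, "sva": 0.05, "csc": 0.10}

        # Weighted IMPACT composite for each teacher.
        df["impact_composite"] = sum(df[col] * w for col, w in weights.items())

        # The question the blog post asks: do individual value-added scores
        # track the principal's observation (TLF) scores?
        print("Pearson r between IVA and TLF:", round(df["iva"].corr(df["tlf"]), 2))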
Jeff Bernstein

Shanker Blog » A Case For Value-Added In Low-Stakes Contexts - 0 views

  •  
    Most of the controversy surrounding value-added and other test-based models of teacher productivity centers on the high-stakes use of these estimates. This is unfortunate: no matter what you think about these methods in the high-stakes context, they have a great deal of potential to improve instruction. When supporters of value-added and other growth models talk about low-stakes applications, they tend to assert that the data will inspire and motivate teachers who are completely unaware that they're not raising test scores. In other words, confronted with the value-added evidence that their performance is subpar (at least as far as tests are an indication), teachers will rethink their approach. I don't find this very compelling. Value-added data will not help teachers - even those who believe in its utility - unless they know why their students' performance appears to be comparatively low. It's rather like telling a baseball player they're not getting hits, or telling a chef that the food is bad; it's not constructive.
Jeff Bernstein

What Nicholas Kristof Leaves Out: Discussing the Value of Teachers | FunnyMonkey - 1 views

  •  
    Kristof buries the fact that the study is based on value-added methodology and conflates student performance on test scores with good teaching. He alludes to value-added in the 11th paragraph, but never actually addresses the fact that test scores and value-added analysis aren't infallible. The study authors (and this piece shouldn't detract from the worth and value of the study, which merits a read) are clear on this, even though Kristof is not.
Jeff Bernstein

Jay Mathews: Why rating teachers by test scores won't work - Class Struggle - The Washi... - 0 views

  •  
    I don't spend much time debunking our most powerful educational fad: value-added assessments to rate teachers. My colleague Valerie Strauss eviscerates value-added several times a week on her Answer Sheet blog with the verve of a Samurai warrior, so who needs me? Unfortunately, value-added is still growing in every corner of our nation, including D.C. schools, despite all that torn flesh and missing pieces. It's like those monsters lumbering through this year's action films. We've got to stop them! Let me fling my small, aged body in their way with the best argument against value-added I have seen in some time.
Jeff Bernstein

Analyzing Released NYC Value-Added Data Part 1 | Gary Rubinstein's Blog - 0 views

  •  
    The New York Times, yesterday, released the value-added data on 18,000 New York City teachers collected between 2007 and 2010. Though teachers are irate and various newspapers, The New York Post in particular, are gleeful, I have mixed feelings. For sure the 'reformers' have won a battle and have unfairly humiliated thousands of teachers who got inaccurate poor ratings. But I am optimistic that this will be looked at as one of the turning points in this fight. Up until now, independent researchers like me were unable to support all our claims about how crude a tool value-added metrics still are, though they have been around for nearly 20 years. But with the release of the data, I have been able to test many of my suspicions about value-added. Now I have definitive and indisputable proof, which I plan to write about for at least my next five blog posts.
Jeff Bernstein

Now I Understand Why Bill Gates Didn't Want The Value-Added Data Made Public ... - 0 views

  •  
    The problem, for them, is that they don't want the public to see for themselves that it's a complete and utter crock. Nor to see the little man behind the curtain. I present evidence of the fallacy of depending on "value-added" measurements in yet another graph, this time using what NYCPS says are the actual value-added scores of all of the many thousands of elementary school teachers for whom they have such scores in the school years that ended in 2006 and in 2007.
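    The graph described there, one year's value-added scores plotted against the next year's for the same teachers, can be rebuilt from any two-column export of the data. A minimal sketch under assumed file and column names (the real data release uses a different layout):

        import pandas as pd
        import matplotlib.pyplot as plt

        # Hypothetical file and column names, chosen only to illustrate the comparison.
        df = pd.read_csv("nyc_elementary_va.csv")        # columns: va_2006, va_2007
        df = df.dropna(subset=["va_2006", "va_2007"])    # keep teachers scored in both years

        print("Year-to-year correlation:", round(df["va_2006"].corr(df["va_2007"]), 2))

        plt.scatter(df["va_2006"], df["va_2007"], s=5, alpha=0.3)
        plt.xlabel("Value-added, 2005-06")
        plt.ylabel("Value-added, 2006-07")
        plt.title("Same teachers, consecutive school years")
        plt.show()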
Jeff Bernstein

Grant Wiggins: Value added - why its use makes me angry (OR: a good idea gone... - 0 views

  •  
    Alert readers (as Dave Barry likes to say) will have noted that I haven't blogged in a while. The reasons are multiple: heavy travel schedule, writing for the newest book, and full days of work on two large projects. But the key reason is anger. I have been so angry about the headlong rush into untested and poorly-thought-out value-added accountability models of schools and teachers in various states all around the country that I haven't found a calm mental space in which to get words on paper. Let me now try. Forgive me if I sputter. Here's the problem in a nutshell. Value-added models (VAM) of accountability are now the rage. And it is understandable why this is so. They involve predictions about "appropriate" student gains in performance. If results, almost always measured via state standardized test scores, fall within or above the "expected" gains, then you are a "good" school or teacher. If the gains fall below the expected gains, then you are a "bad" school or teacher. Such a system has been in place in Tennessee for over a decade. You may be aware that interesting claims have been made from those tests about effective vs. ineffective teachers adding a whole extra year of gain. So, in the last few years, as accountability pressures have been ratcheted up in all states, more and more such systems have been put in place, most recently in New York State, where a truly byzantine formula will be used starting next year to hold principals and teachers accountable. It will surely fail (and be litigated). Let me try to explain why.
Jeff Bernstein

A Sociological Eye on Education | The worst eighth-grade math teacher in New York City - 0 views

  •  
    Using a statistical technique called value-added modeling, the Teacher Data Reports compare how students are predicted to perform on the state ELA and math tests, based on their prior year's performance, with their actual performance. Teachers whose students do better than predicted are said to have "added value"; those whose students do worse than predicted are "subtracting value." By definition, about half of all teachers will add value, and the other half will not. Carolyn Abbott was, in one respect, a victim of her own success. After a year in her classroom, her seventh-grade students scored at the 98th percentile of New York City students on the 2009 state test. As eighth-graders, they were predicted to score at the 97th percentile on the 2010 state test. However, their actual performance was at the 89th percentile of students across the city. That shortfall, the difference between the 97th percentile and the 89th percentile, placed Abbott near the very bottom of the 1,300 eighth-grade mathematics teachers in New York City. How could this happen? Anderson, the school where Abbott teaches, is unusual: its students are often several years ahead of their nominal grade level. The material covered on the state eighth-grade math exam is taught in the fifth or sixth grade at Anderson. "I don't teach the curriculum they're being tested on," Abbott explained. "It feels like I'm being graded on somebody else's work."
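    A minimal sketch of the predicted-versus-actual comparison described above, using only the percentiles quoted for Abbott's class. The city's actual model is a regression with many controls; the subtraction here is just the core logic of the Teacher Data Reports, not the formula itself.

        # Toy illustration of the predicted-vs-actual comparison, using the quoted percentiles.
        prior_year_percentile = 98    # class's 7th-grade result on the 2009 state test
        predicted_percentile = 97     # predicted 8th-grade result for 2010
        actual_percentile = 89        # actual 8th-grade result in 2010

        shortfall = predicted_percentile - actual_percentile
        verdict = "subtracting value" if shortfall > 0 else "adding value"
        print(f"Predicted {predicted_percentile}th percentile, scored {actual_percentile}th: "
              f"shortfall of {shortfall} points, so the model calls this {verdict}.")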
Jeff Bernstein

Alan Singer: Measure for Mis-Measure with New York City Teacher Assessments - 0 views

  •  
    When Michael Bloomberg was elected Mayor of New York City in 2001, the unemployment rate was about 5%. Today it is 9%. That certainly qualifies as poor performance in office. Value decline rather than "value-added." Let's fire him. When Andrew Cuomo was first elected to statewide office as Attorney General in 2006, the unemployment rate was 4.5%. Today it is 8%. That certainly qualifies as poor performance in office. Value decline rather than "value-added." Let's fire him also.
Jeff Bernstein

Using Value-Added for Improvement, Not Shame - K-12 Talent Manager - Education Week - 0 views

  •  
    Value-added and other growth measures are powerful because they level the playing field and measure the right thing: student academic progress. Students come to teachers each year with vastly different levels of achievement, and the teacher's goal is to "add value," or growth. If we only measured achievement, why would any educator ever want to teach in a place with a disproportionate number of low-performing students? Value-added information should not be used to name, blame, and shame; it should be a catalyst to uncover, discover, and recover.
Jeff Bernstein

The Long-Term Impacts of Teachers: Teacher Value-Added and Student Outcomes in Adulthood - 1 views

  •  
    Are teachers' impacts on students' test scores ("value-added") a good measure of their quality? This question has sparked debate largely because of disagreement about (1) whether value-added (VA) provides unbiased estimates of teachers' impacts on student achievement and (2) whether high-VA teachers improve students' long-term outcomes. We address these two issues by analyzing school district data from grades 3-8 for 2.5 million children linked to tax records on parent characteristics and adult outcomes. We find no evidence of bias in VA estimates using previously unobserved parent characteristics and a quasi-experimental research design based on changes in teaching staff. Students assigned to high-VA teachers are more likely to attend college, attend higher-ranked colleges, earn higher salaries, live in higher-SES neighborhoods, and save more for retirement. They are also less likely to have children as teenagers. Teachers have large impacts in all grades from 4 to 8. On average, a one standard deviation improvement in teacher VA in a single grade raises earnings by about 1% at age 28. Replacing a teacher whose VA is in the bottom 5% with an average teacher would increase students' lifetime income by more than $250,000 for the average classroom in our sample. We conclude that good teachers create substantial economic value and that test score impacts are helpful in identifying such teachers.
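    The last dollar figure invites a quick back-of-the-envelope reading. The sketch below only rearranges the quoted $250,000-per-classroom number under an assumed class size; the class size is my assumption for illustration, not a figure reported in the abstract.

        # Back-of-the-envelope reading of the quoted figure. Only the $250,000
        # comes from the abstract; the class size is an assumed value.
        classroom_lifetime_gain = 250_000   # replacing a bottom-5% teacher with an average one
        assumed_class_size = 28             # hypothetical class size

        per_student_gain = classroom_lifetime_gain / assumed_class_size
        print(f"Implied lifetime earnings gain per student: about ${per_student_gain:,.0f}")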
Jeff Bernstein

Houston, You Have a Problem! | National Education Policy Center - 0 views

  •  
    Education Policy Analysis Archives recently published an article by Audrey Amrein-Beardsley and Clarin Collins that effectively exposes the Houston Independent School District's use of a value-added teacher evaluation system as a disaster. The Educational Value-Added Assessment System (EVAAS) is alleged by its creators, the software giant SAS, to be "the most robust and reliable" system of teacher evaluation ever invented. Amrein-Beardsley and Collins demonstrate to the contrary that EVAAS is a psychometric bad joke and a nightmare for teachers. EVAAS produces "value-added" measures for the same teachers that jump around willy-nilly, from large and negative to large and positive, from year to year, even when neither the general nature of the students nor the nature of the teaching differs across time. In defense of the EVAAS one could note that this is common to all such systems of attributing students' test scores to teachers' actions, so EVAAS might still lay claim to being the "most robust and reliable," since they are all unreliable and who knows what "robust" means?
Jeff Bernstein

Review of The Long-Term Impacts of Teachers: Teacher Value-Added and Student Outcomes i... - 1 views

  •  
    This NBER report concludes that teachers whose students tend to show high gains on their test scores (called "high value-added teachers") also contribute to later student success in young adulthood, as indicated by outcomes such as college attendance and future earnings. To support this claim, it is not sufficient for researchers to show an observed association between teacher value-added and later outcomes in young adulthood. It is also necessary to rule out plausible alternative explanations, for example, that parents who did the most to promote their offspring's long-term success also endeavored to secure high value-added teachers for their children. This review explains that, for the most part, the evidence needed to rule out these alternatives is missing from the report. Thus, policy-makers should tread cautiously in their reaction: the case has not been proved.
Jeff Bernstein

Teachers Matter. Now What? | The Nation - 0 views

  •  
    Given the widespread, non-ideological worries about the reliability of standardized test scores when they are used in high-stakes ways, it makes good sense for reform-minded teachers' unions to embrace value-added as one measure of teacher effectiveness, while simultaneously pushing for teachers' rights to a fair-minded appeals process. What's more, just because we know that teachers with high value-added ratings are better for children, it doesn't necessarily follow that we should pay such teachers more for good evaluation scores alone. Why not use value-added to help identify the most effective teachers, but then require these professionals to mentor their peers in order to earn higher pay?
Jeff Bernstein

Education Radio: Audit Culture, Teacher Evaluation and the Pillaging of Public Education - 0 views

  •  
    In this week's program we look at the attempt by education reformers to impose value-added measures on teacher evaluation as an example of how neoliberal forces have used the economic crisis to blackmail schools into practices that do not serve teaching and learning, but do serve the corporate profiteers as they work to privatize public education and limit the goals of education to vocational training for corporate hegemony. These processes constrict possibilities for educational experiences that are critical, relational and transformative. We see that in naming these processes and taking risks, both individually and collectively, we can begin to speak back to and overcome these forces. In this program we speak with Sean Feeney, a principal from Long Island, New York, about the stance he and other principals have taken against the imposition of value-added measures in the new Annual Professional Performance Review in New York State. We also speak with Celia Oyler, professor of education at Teachers College, Columbia University, and Karen Lewis, president of the Chicago Teachers Union, about the impact of value-added measures on teacher education and the corporate powers behind these measures.