Group items tagged: evaluation

Jeff Bernstein

Christopher Cerf: N.J. set to pilot new teacher evaluation systems | NJ.com - 1 views

  •  
    This week, we are taking an important step toward developing a fair, consistent and learning-centered evaluation system by providing 10 districts across the state with $1.1 million to collaboratively design and implement state-of-the-art educator evaluation systems. This pilot will be a critical step toward launching a statewide initiative in 2012.
Jeff Bernstein

Education Week: Getting Serious About Teacher Evaluation - 1 views

  •  
    You can hardly open a newspaper or major magazine today without finding a story about another incarnation or overhaul of teacher evaluation. But underlying nearly all these detailed descriptions of state and local programs is a near-unanimous and long-standing assumption: Whoever is in charge of improving teachers shouldn't also be in charge of evaluating them.
Jeff Bernstein

Subjective and Objective Evaluations of Teacher Effectiveness: Evidence from New York City - 1 views

  •  
    A substantial literature documents large variation in teacher effectiveness at raising student achievement, providing motivation to identify highly effective and ineffective teachers early in their careers. Using data from New York City public schools, we estimate whether subjective evaluations of teacher effectiveness have predictive power for the achievement gains made by teachers' future students. We find that these subjective evaluations have substantial power, comparable with and complementary to objective measures of teacher effectiveness taken from a teacher's first year in the classroom.
Jeff Bernstein

How Not To Improve New Teacher Evaluation Systems | Shanker Institute - 0 views

  •  
    "Granted, whether and how to alter new evaluations are difficult decisions, and there is no tried and true playbook. That said, New York Governor Andrew Cuomo's proposals provide a stunning example of how not to approach these changes. To see why, let's look at some sound general principles for improving teacher evaluation systems based on the first rounds of results, and how they compare with the New York approach."
Jeff Bernstein

The fundamental flaws of 'value added' teacher evaluation - 0 views

  •  
    "Evaluating teachers by the test scores of their students has been perhaps the most controversial education reform of the year: while it has been pushed in a majority of states with the support of the Obama administration, assessment experts have warned against the practice for a variety of reasons. Here Jack Jennings, founder and former president of the nonprofit Center on Education Policy, explains the problem."
Jeff Bernstein

Real Reform versus Fake Reformy Distractions: More Implications from NJ & MA ... - 0 views

  •  
    Recently, I responded to an absurd and downright disturbing Op-Ed by a Connecticut education reform organization that claimed that Connecticut needed to move quickly to adopt teacher evaluation/tenure reforms and expand charter schooling because a) Connecticut has a larger achievement gap and lower outcomes for low income students than Massachusetts or New Jersey and b) New Jersey and Massachusetts were somehow outpacing Connecticut in adopting new reformy policies regarding teacher evaluation. Now, the latter assertion is questionable enough to begin with, but the most questionable assertion was that any recent policy changes that may have occurred in New Jersey or Massachusetts explain why low income children in those states do better, and have improved at a faster rate, than low income kids in Connecticut. Put simply, bills presently on the table, or legislation and regulations adopted but not yet phased in, do not explain the gains in student outcomes of the past 20 years. Note that I stick to comparisons among these states because income-related achievement gaps are most comparable among them (that is, the characteristics of the populations that fall above and below the income thresholds for free/reduced lunch are relatively comparable among these states, but not so much to states in other regions of the country). I'm not really providing much new information in this post, but I am elaborating on my previous point about the potential relevance of funding equity (school finance) reforms and providing additional illustrations.
Jeff Bernstein

Shanker Blog » Value-Added Versus Observations, Part One: Reliability - 0 views

  •  
    Although most new teacher evaluations are still in various phases of pre-implementation, it's safe to say that classroom observations and/or value-added (VA) scores will be the most heavily weighted components of teachers' final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many, perhaps most, teachers strongly prefer the former (observations, especially peer observations) over the latter (VA). One of the most common arguments against VA is that the scores are error-prone and unstable over time, i.e., that they are unreliable. And it's true that the scores fluctuate between years, with much of this instability due to measurement error rather than "real" performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class. These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe "true" teacher performance, it's tough to say which is "better" or "worse," despite the certainty with which both "sides" often present their respective cases. And the fact that both entail some level of measurement error doesn't by itself speak to whether they should be part of evaluations.
Jeff Bernstein

As testing starts, critics plan post-teacher evaluation deal efforts | GothamSchools - 0 views

  •  
    Carol Burris, the principal of a Long Island high school, isn't done fighting. Even after her statewide principals' petition failed to sway lawmakers from passing a teacher evaluation bill last month, she's hoping her newest effort, a poll, will do the trick. Beginning today, Burris is sending out surveys to principals, teachers, and parents about New York State's high-stakes testing policy "to give voice to the concerns that we are hearing from all three groups," she said. "We have no intention of not continuing our fight."
Jeff Bernstein

Shanker Blog » Value-Added Versus Observations, Part Two: Validity - 0 views

  •  
    In a previous post, I compared value-added (VA) and classroom observations in terms of reliability - the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren't useful unless they are valid - that is, unless they're measuring what we want them to measure. Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional - in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they're being used. Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.
Jeff Bernstein

The Toxic Trifecta, Bad Measurement & Evolving Teacher Evaluation Policies « ... - 0 views

  •  
    This post contains my preliminary thoughts in development for a forthcoming article dealing with the intersection between statistical and measurement issues in teacher evaluation and teachers' constitutional rights where those measures are used for making high stakes decisions.
Jeff Bernstein

When Rater Reliability Is Not Enough - 0 views

  •  
    In recent years, interest has grown in using classroom observation as a means to several ends, including teacher development, teacher evaluation, and impact evaluation of classroom-based interventions. Although education practitioners and researchers have developed numerous observational instruments for these purposes, many developers fail to specify important criteria regarding instrument use. In this article, the authors argue that for classroom observation to succeed in its aims, improved observational systems must be developed. These systems should include not only observational instruments but also scoring designs capable of producing reliable and cost-efficient scores and processes for rater recruitment, training, and certification. To illustrate how such a system might be developed and improved, the authors provide an empirical example that applies generalizability theory to data from a mathematics observational instrument.
Jeff Bernstein

An unintended consequence of value-added teacher evaluation - The Answer Sheet - The Wa... - 0 views

  •  
    A high school teacher in New York sent me the following e-mail, which discusses a most unfortunate unintended consequence of the state's new teacher and principal evaluation system, which depends largely on how well students do on standardized tests.
Jeff Bernstein

BTF stages walkout on education chief - Schools - The Buffalo News - 0 views

  •  
    The state education commissioner found himself thrust before a grouchy convention of statewide teachers in Buffalo, where local union members are locked in an acrid dispute over proposals for teacher evaluations. Not only did New York State United Teachers members from across the state blast Commissioner John King and state plans to implement teacher evaluations on criteria that include attendance; the commissioner could only stand by as approximately 60 Buffalo teachers walked out of the Buffalo Niagara Convention Center in protest.
Jeff Bernstein

Brave Principals « Diane Ravitch's blog - 0 views

  •  
    The principals of New York State are amazing. When the State Education Department began creating its "educator evaluation system," it called together the principals and showed them what it was up to. It showed them a video of guys building a plane while it was flying. This was called, in self-congratulatory parlance, "building a plane in mid-air." A few principals noticed that the guys building the exterior of the plane were wearing parachutes, but the passengers didn't have parachutes. The principals realized that they, their staff, and their students were the passengers. The ones with the parachutes were the overseers at the New York State Education Department. For them, it was a lark, but the evaluation system they created was do-or-die for the hapless passengers. The principals rose up in revolt, led by Carol Burris and Sean Feeney.
Jeff Bernstein

Houston, You Have a Problem! | National Education Policy Center - 0 views

  •  
    Education Policy Analysis Archives recently published an article by Audrey Amrein-Beardsley and Clarin Collins that effectively exposes the Houston Independent School District's use of a value-added teacher evaluation system as a disaster. The Educational Value-Added Assessment System (EVAAS) is alleged by its creators, the North Carolina-based software giant SAS, to be "the most robust and reliable" system of teacher evaluation ever invented. Amrein-Beardsley and Collins demonstrate, to the contrary, that EVAAS is a psychometric bad joke and a nightmare for teachers. EVAAS produces "value-added" measures for the same teachers that jump around willy-nilly from large and negative to large and positive from year to year, when neither the general nature of the students nor the nature of the teaching differs across time. In defense of the EVAAS, one could note that this is common to all such systems of attributing students' test scores to teachers' actions, so that EVAAS might still lay claim to being "most robust and reliable" - since they are all unreliable, and who knows what "robust" means?
Jeff Bernstein

As Deadline Nears, a Compromise on Teacher Evaluations - SchoolBook - 0 views

  •  
    New York State education officials and the state teachers' union reached an agreement on a new teacher evaluation system on Thursday, just hours before a deadline imposed by Gov. Andrew M. Cuomo, who had threatened to break the impasse by imposing his own way to judge the quality of a teacher's work, according to a number of people directly involved. The agreement allows school districts to base up to 40 percent of a teacher's annual review on student performance on state standardized tests, as long as half of that portion is used to analyze the progress of specific groups of students, like those who are not proficient in English or have special needs. The remaining 60 percent is to be based on subjective measures, like classroom observations and professional development projects.
Jeff Bernstein

Rise & Shine: Cheers, jeers, and explainers on evaluations deal | GothamSchools - 0 views

  •  
    Links to articles on the evaluations deal
Jeff Bernstein

Observers Get Key Role in Teacher Evaluation Process - NYTimes.com - 0 views

  •  
    The New York City teachers' union has long called the process used by the city's Education Department for reviewing and dismissing struggling teachers partisan and unfair. But now, as part of an agreement reached Thursday, the Education Department and the United Federation of Teachers will put into effect an evaluation system that will bring independent observers into the city's classrooms to monitor the weakest teachers.
Jeff Bernstein

City, state reach deals on teacher evaluations - NY Daily News - 0 views

  •  
    The sticking point had been the appeals process for teachers who receive negative performance ratings. Under the new agreement, teachers who are rated ineffective by their principal will be monitored during the next year by an independent educator, according to a source. If the principal still finds the teacher ineffective after a second year and the independent monitor agrees, then the burden of proof will be on the teacher to fight the firing, the source said. If the monitor disagrees, the city will be responsible for proving that the teacher should be canned. This new system is modeled after the much-hyped teacher evaluation plan in New Haven, CT.