Education Links: group items matching "values" in title, tags, annotations or url

I dare you to measure the "value" I add « No Sleep 'til Summer:: - 0 views

  •  
    (When I wrote this, I had no idea just how deeply this would speak to people and how widely it would spread. So, I think a better title is I Dare You to Measure the Value WE Add, and I invite you to share below your value as you see it.)

Teachers Matter. Now What? | The Nation - 0 views

  •  
    Given the widespread, non-ideological worries about the reliability of standardized test scores when they are used in high-stakes ways, it makes good sense for reform-minded teachers' unions to embrace value-added as one measure of teacher effectiveness, while simultaneously pushing for teachers' rights to a fair-minded appeals process. What's more, just because we know that teachers with high value-added ratings are better for children, it doesn't necessarily follow that we should pay such teachers more for good evaluation scores alone. Why not use value-added to help identify the most effective teachers, but then require these professionals to mentor their peers in order to earn higher pay?

Education Radio: Audit Culture, Teacher Evaluation and the Pillaging of Public Education - 0 views

  •  
    In this week's program we look at the attempt by education reformers to impose value-added measures on teacher evaluation as an example of how neoliberal forces have used the economic crisis to blackmail schools into practices that do not serve teaching and learning, but do serve the corporate profiteers as they work to privatize public education and limit the goals of education to vocational training for corporate hegemony. These processes constrict possibilities for educational experiences that are critical, relational and transformative. We see that in naming these processes and taking risks both individually and collectively we can begin to speak back to and overcome these forces. In this program we speak with Sean Feeney, principal from Long Island, New York, about the stance he and other principals have taken against the imposition of value-added measures in the new Annual Professional Performance Review in New York State. We also speak with Celia Oyler, professor of education at Teachers College, Columbia University, and Karen Lewis, president of the Chicago Teachers Union, about the impact of value-added measures on teacher education and the corporate powers behind these measures.

What To Think About That Big New Teacher Value-Added Study - 1 views

  •  
    Nicholas Kristof wrote yesterday about the "landmark" new teacher value-added study from Chetty, Friedman and Rockoff. It's worth being clear about why the study has garnered so much attention. It's not because it shows that teachers matter. Everyone knew or believed that already. It's because it shows that teachers vary in how much they matter. And, for the first time, it takes the value-added debate out of the standardized testing box.

Leading mathematician debunks 'value-added' - The Answer Sheet - The Washington Post - 0 views

  •  
    But the most common misuse of mathematics is simpler, more pervasive, and (alas) more insidious: mathematics employed as a rhetorical weapon - an intellectual credential to convince the public that an idea or a process is "objective" and hence better than other competing ideas or processes. This is mathematical intimidation. It is especially persuasive because so many people are awed by mathematics and yet do not understand it - a dangerous combination. The latest instance of the phenomenon is value-added modeling (VAM), used to interpret test data. Value-added modeling pops up everywhere today, from newspapers to television to political campaigns. VAM is heavily promoted with unbridled and uncritical enthusiasm by the press, by politicians, and even by (some) educational experts, and it is touted as the modern, "scientific" way to measure educational success in everything from charter schools to individual teachers.

Daniel Willingham: What science can - and can't - do for education - The Answer Sheet -... - 0 views

  •  
    "Education is not a scientific enterprise. The purpose is not to describe the world, but to change it, to make it more similar to some ideal that we envision. (I wrote about this distinction at some length in my new book. I also discussed it in this brief video.) Thus science is ideally value-neutral. Yes, scientists seldom live up to that ideal; they have a point of view that shapes how they interpret data, generate theories, etc., but neutrality is an agreed-upon goal, and lack of neutrality is a valid criticism of how someone does science. Education, in contrast, must entail values, because it entails selecting goals. We want to change the world - we want kids to learn things - facts, skills, values. Well, which ones? There's no better or worse answer to this question from a scientific point of view."

Hechinger Report | How New York City's value-added model compares to what other distric... - 0 views

  •  
    The Hechinger Report has spent the past 14 months reporting on teacher-effectiveness reforms around the country, and has examined value-added models in several states. New York City's formula, which was designed by researchers at the University of Wisconsin-Madison, has elements that make it more accurate than other models in some respects, but it also has elements that experts say may increase errors - a major concern for teachers whose job security is tied to their value-added ratings.

Shanker Blog » Revisiting The "5-10 Percent Solution" - 0 views

  •  
    In a post over a year ago, I discussed the common argument that dismissing the "bottom 5-10 percent" of teachers would increase U.S. test scores to the level of high-performing nations. This argument is based on a calculation by economist Eric Hanushek, which suggests that dismissing the lowest-scoring teachers based on their math value-added scores would, over a period of around ten years (when the first cohort of students would have gone through the schooling system without the "bottom" teachers), increase U.S. math scores dramatically - perhaps to the level of high-performing nations such as Canada or Finland.* This argument is, to say the least, controversial, and it invokes the full spectrum of reactions. In my opinion, it's best seen as a policy-relevant illustration of the wide variation in test-based teacher effects, one that might suggest the potential of a course of action but can't really tell us how it will turn out in practice. To highlight this point, I want to take a look at one issue mentioned in that previous post - that is, how the instability of value-added scores over time (which Hanushek's simulation doesn't address directly) might affect the projected benefits of this type of intervention, and how this in turn might modulate one's view of the huge projected benefits. One (admittedly crude) way to do this is to use the newly-released New York City value-added data, and look at 2010 outcomes for the "bottom 10 percent" of math teachers in 2009.
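    The instability question the excerpt raises can be illustrated with a toy simulation - entirely hypothetical numbers, not the post's actual NYC analysis. If measured value-added is a stable "true" effect plus yearly noise of comparable size, a large share of one year's "bottom 10 percent" escapes that group the following year:

```python
# Toy model (hypothetical): measured VA = stable "true" effect + yearly noise.
import random

random.seed(1)
n = 1000
true_effect = [random.gauss(0, 1) for _ in range(n)]
year1 = [t + random.gauss(0, 1) for t in true_effect]   # noisy measurement, year 1
year2 = [t + random.gauss(0, 1) for t in true_effect]   # independent noise, year 2

# Identify the bottom decile of teachers in each year.
cutoff1 = sorted(year1)[n // 10]
cutoff2 = sorted(year2)[n // 10]
bottom1 = {i for i in range(n) if year1[i] < cutoff1}
bottom2 = {i for i in range(n) if year2[i] < cutoff2}

# Fraction of year-1 "bottom 10%" teachers still in the bottom 10% in year 2;
# with noise this large it comes out well under half.
print(len(bottom1 & bottom2) / len(bottom1))
```

    The point of the sketch is simply that a one-year bottom-decile cut removes many teachers whose low score was noise, which is the mechanism the post goes on to examine in the real data.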

Hechinger Report | Should value-added teacher ratings be adjusted for poverty? - 0 views

  •  
    In Washington, D.C., one of the first places in the country to use value-added teacher ratings to fire teachers, teacher-union president Nathan Saunders likes to point to the following statistic as proof that the ratings are flawed: Ward 8, one of the poorest areas of the city, has only 5 percent of the teachers defined as effective under the new evaluation system known as IMPACT, but more than a quarter of the ineffective ones. Ward 3, encompassing some of the city's more affluent neighborhoods, has nearly a quarter of the best teachers, but only 8 percent of the worst. The discrepancy highlights an ongoing debate about the value-added test scores that an increasing number of states - soon to include Florida - are using to evaluate teachers. Are the best, most experienced D.C. teachers concentrated in the wealthiest schools, while the worst are concentrated in the poorest schools? Or does the statistical model ignore the possibility that it's more difficult to teach a room full of impoverished children?

A Legal Argument Against The Use of VAMs in Teacher Evaluation - 0 views

  •  
    "Value Added Models (VAMs) are irresistible. Purportedly they can ascertain a teacher's effectiveness by predicting the impact of a teacher on a student's test scores. Because test scores are the sine qua non of our education system, VAMs are alluring. They link a teacher directly to the most emphasized output in education today. What more can we want from an evaluative tool, especially in our pursuit of improving schools in the name of social justice? Taking this a step further, many see VAMs as the panacea for improving teacher quality. The theory seems straightforward. VAMs provide statistical predictions regarding a teacher's impact that can be compared to actual results. If a teacher cannot improve a student's test scores in relatively positive ways, then they are ineffective. If they are ineffective, they can (and should) be dismissed (see, for instance, Hanushek, 2010). Consequently, state legislatures have rushed to codify VAMs into their statutes and regulations governing teacher evaluation (see, for example, Florida General Laws, 2014). That has been a mistake. This paper argues for a complete reversal in policy course. To wit, state regulations that connect a teacher's continued employment to VAMs should be overhauled to eliminate the connection between evaluation and student test scores. The reasoning is largely legal, rather than educational. In sum, the legal costs of any use of VAMs in a performance-based termination far outweigh any value they may add. These risks are directly a function of the well-documented statistical flaws associated with VAMs (see, for example, Rothstein, 2010). The "value added" of VAMs in supporting a termination is limited, if it exists at all."

Gov. Andrew Cuomo and baloney - The Washington Post - 0 views

  •  
    "New York Gov. Andrew Cuomo's school reform proposals have infuriated educators across the state. Award-winning Principal Carol Burris of South Side High School is one of them and in this post, she explains why. Burris, who has written frequently for this blog, was named New York's 2013 High School Principal of the Year by the School Administrators Association of New York and the National Association of Secondary School Principals, and in 2010 was tapped as the New York State Outstanding Educator by the School Administrators Association of New York State. Burris has been exposing the botched school reform program in New York for years on this blog. Her most recent post was "Principal: 'There comes a time when rules must be broken…That time is now.'" (In this post, Burris refers to "value-added" scores, which refer to value-added measurement (VAM), which purports to be able to determine the "value" a teacher brings to student learning by plopping test scores into complicated formulas that can supposedly strip out all other factors, including the conditions in which a student lives.)"

Linda Darling-Hammond and Edward Haertel: 'Value-added' teacher evaluations not reliabl... - 0 views

  •  
    "It's becoming a familiar story: Great teachers get low scores from "value-added" teacher evaluation models. Newspapers across the country have published accounts of extraordinary teachers whose evaluations, based on their students' state test scores, seem completely out of sync with the reality of their practice. Los Angeles teachers have figured prominently in these reports. Researchers are not surprised by these stories, because dozens of studies have documented the serious flaws in these ratings, which are increasingly used to evaluate teachers' effectiveness. The ratings are based on value-added models such as the L.A. school district's Academic Growth over Time system, which uses complex statistical metrics to try to sort out the effects of student characteristics (such as socioeconomic status) from the effects of teachers on test scores. A study we conducted at Stanford University showed what these teachers are experiencing."

Shanker Blog » Value-Added Versus Observations, Part One: Reliability - 0 views

  •  
    Although most new teacher evaluations are still in various phases of pre-implementation, it's safe to say that classroom observations and/or value-added (VA) scores will be the most heavily-weighted components toward teachers' final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many - perhaps most - teachers strongly prefer the former (observations, especially peer observations) over the latter (VA). One of the most common arguments against VA is that the scores are error-prone and unstable over time - i.e., that they are unreliable. And it's true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than "real" performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class. These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe "true" teacher performance, it's tough to say which is "better" or "worse," despite the certainty with which both "sides" often present their respective cases. And, the fact that both entail some level of measurement error doesn't by itself speak to whether they should be part of evaluations.*

Shanker Blog » Value-Added Versus Observations, Part Two: Validity - 0 views

  •  
    In a previous post, I compared value-added (VA) and classroom observations in terms of reliability - the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren't useful unless they are valid - that is, unless they're measuring what we want them to measure. Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional - in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they're being used. Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.

The SAS Education Value-Added Assessment System (SAS® EVAAS®) in the Houston ... - 0 views

  •  
    The SAS Educational Value-Added Assessment System (SAS® EVAAS®) is the most widely used value-added system in the country. It is also self-proclaimed as "the most robust and reliable" system available, with its greatest benefit being to help educators improve their teaching practices. This study critically examined the effects of SAS® EVAAS® as experienced by teachers in one of the largest, high-needs urban school districts in the nation - the Houston Independent School District (HISD). Using a multiple-methods approach, this study analyzed retrospective quantitative and qualitative data to better understand the evidence collected from four teachers whose contracts were not renewed in the summer of 2011, in part given their low SAS® EVAAS® scores. This study also suggests some intended and unintended effects that seem to be occurring as a result of SAS® EVAAS® implementation in HISD. In addition to issues with reliability, bias, teacher attribution, and validity, high-stakes use of SAS® EVAAS® in this district seems to be exacerbating unintended effects.

Shanker Blog » The Weighting Game - 0 views

  •  
    A while back, I noted that states and districts should exercise caution in assigning weights (importance) to the components of their teacher evaluation systems before they know what the other components will be. For example, most states that have mandated new evaluation systems have specified that growth model estimates count for a certain proportion (usually 40-50 percent) of teachers' final scores (at least those in tested grades/subjects), but it's critical to note that the actual importance of these components will depend in no small part on what else is included in the total evaluation, and how it's incorporated into the system. In slightly technical terms, this distinction is between nominal weights (the percentage assigned) and effective weights (the percentage that actually ends up being the case). Consider an extreme hypothetical example - let's say a district implements an evaluation system in which half the final score is value-added and half is observations. But let's also say that every teacher gets the same observation score. In this case, even though the assigned (nominal) weight for value-added is 50 percent, the actual importance (effective weight) will be 100 percent, since every teacher receives the same observation score, and so all the variation between teachers' final scores will be determined by the value-added component.
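    The excerpt's hypothetical (50/50 nominal weights, but every teacher gets the same observation score) can be checked numerically. The numbers below are invented for illustration; the point is that a component with zero variance carries zero effective weight:

```python
# Hypothetical final score = 0.5 * value-added + 0.5 * observation.
# If every teacher gets the same observation score, all variation in
# final scores comes from the value-added component alone.
from math import isclose
from statistics import pvariance

value_added = [3.1, 1.4, 4.8, 2.2, 3.9]   # invented VA scores
observation = [3.0, 3.0, 3.0, 3.0, 3.0]   # identical observation ratings

final = [0.5 * va + 0.5 * ob for va, ob in zip(value_added, observation)]

# Observations contribute no between-teacher variation...
print(pvariance(observation))  # 0.0
# ...so the variance of final scores is driven entirely by value-added
# (scaled by the squared nominal weight): Var(final) = 0.25 * Var(VA).
print(isclose(pvariance(final), 0.25 * pvariance(value_added)))  # True
```

    In other words, the nominal weight is 50 percent, but the effective weight of value-added in this scenario is 100 percent, exactly as the excerpt argues.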

Shanker Blog » If Newspapers Are Going To Publish Teachers' Value-Added Score... - 0 views

  •  
    I don't think there's any way to avoid publication, given that about a dozen newspapers will receive the data, and it's unlikely that every one of them will decline to do so. So, in addition to expressing my firm opposition, I would offer what I consider to be an absolutely necessary suggestion: If newspapers are going to publish the estimates, they need to publish the error margins too. Value-added and other growth model scores are statistical estimates, and must be interpreted as such. Imagine that a political poll found that a politician's approval rate was 40 percent, but, due to an unusually small sample of respondents, the error margin on this estimate was plus or minus 20 percentage points. Based on these results, the approval rate might actually be abysmal (20 percent), or it might be pretty good (60 percent). Should a newspaper publish the 40 percent result without mentioning that level of imprecision? Of course not. In fact, they should refuse to publish the result at all. Value-added estimates are no different.
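    The poll arithmetic in the excerpt is worth making explicit (hypothetical numbers matching the example):

```python
# Hypothetical poll: 40% approval with a +/- 20-point margin of error.
# The interval, not the point estimate, is the meaningful result.
estimate = 0.40
margin = 0.20

low, high = estimate - margin, estimate + margin
print(f"approval somewhere between {low:.0%} and {high:.0%}")
# -> approval somewhere between 20% and 60%
```

    A value-added estimate published without its error margin invites the same misreading: a mediocre-looking point estimate whose interval spans "abysmal" to "pretty good".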

Value-Added Measures in Education: The Best of the Alternatives is Simply Not Good Enough - 0 views

  •  
    On September 8, 2011, Teachers College Record published a book review of Douglas N. Harris's recent book Value-Added Measures in Education. In this commentary the author takes issue not with the book's What Every Educator Needs to Know content per se, but with Harris's overall endorsement of value-added and his and others' imprudent adoption of some highly complex assumptions.

Methods for Accounting for Co-Teaching in Value-Added Models - 0 views

  •  
    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching. The Partial Credit Method apportions responsibility between teachers according to the fraction of the year a student spent with each. This method, however, has practical problems limiting its usefulness. As alternatives, we propose two methods that can be more stably estimated based on the premise that co-teachers share joint responsibility for the achievement gains of their shared students. The Teacher Team Method uses a single record for each student and a set of variables for each teacher or group of teachers with shared students, whereas the Full Roster Method contains a single variable for each teacher, but multiple records for shared students. We explore the properties of these two alternative methods and then compare the estimates generated using student achievement and teacher roster data from a large urban school district. We find that both methods produce very similar point estimates of teacher value added. However, the Full Roster Method better maintains the links between teachers and students and can be more robustly implemented in practice. 
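    A minimal sketch of how the two preferred data layouts differ for a student taught jointly by teachers A and B - the records and field names here are hypothetical illustrations, not the paper's actual data format:

```python
# Teacher Team Method: one record per student; the co-teaching pair
# is treated as its own "team" of jointly responsible teachers.
team_method = [
    {"student": "s1", "gain": 12.0, "team": ("A", "B")},  # shared student
    {"student": "s2", "gain": 8.0,  "team": ("A",)},      # taught by A alone
]

# Full Roster Method: one indicator per teacher, so a shared student
# appears once on each co-teacher's roster (duplicate records).
full_roster = [
    {"student": "s1", "gain": 12.0, "teacher": "A"},
    {"student": "s1", "gain": 12.0, "teacher": "B"},  # duplicated for B
    {"student": "s2", "gain": 8.0,  "teacher": "A"},
]

# The duplication preserves every teacher-student link: teacher A's
# roster contains both students.
print(sum(1 for r in full_roster if r["teacher"] == "A"))  # 2
```

    The trade-off the abstract describes follows from the layouts: the Full Roster Method keeps each teacher linked to every shared student at the cost of duplicated student records, while the Teacher Team Method keeps one record per student but attributes shared gains to a team rather than to individual teachers.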

Evaluating Our Values - Teacher in a Strange Land - Education Week Teacher - 0 views

  •  
    Articles are written every year bemoaning the fact that young Americans are woefully ignorant about civics. Here's a radical theory to consider: Young people don't know civics because we don't teach them civics! We made a decision in that moment, with those twelve boys, that practice with writing a brief constructed response was of higher value than becoming competent, prepared, participatory citizens. Does that decision mesh with your own values?