
Education Links: group items matching "scoring" in title, tags, annotations, or URL


Jeff Bernstein

Measure For Measure: The Relationship Between Measures Of Instructional Practice In Middle School English Language Arts And Teachers' Value-added Scores - 0 views

  •  
    Even as research has begun to document that teachers matter, there is less certainty about what attributes of teachers make the most difference in raising student achievement. Numerous studies have estimated the relationship between teachers' characteristics, such as work experience and academic performance, and their value-added to student achievement; but, few have explored whether instructional practices predict student test score gains. In this study, we ask what classroom practices, if any, differentiate teachers with high impact on student achievement in middle school English Language Arts from those with lower impact. In so doing, the study also explores to what extent value-added measures signal differences in instructional quality.  Even with the small sample used in our analysis, we find consistent evidence that high value-added teachers have a different profile of instructional practices than do low value-added teachers. Teachers in the fourth (top) quartile according to value-added scores score higher than second-quartile teachers on all 16 elements of instruction that we measured, and the differences are statistically significant for a subset of practices including explicit strategy instruction.
Jeff Bernstein

Addition through Subtraction: Are Rising Test Scores in Connecticut School Districts Related to the Exclusion of Students with Disabilities? (January 2012) - 0 views

  •  
    This report finds that the exclusion of thousands of students with disabilities from reported Connecticut Mastery Test results has distorted reported trends in test scores. Following test scores from year to year in the same grade, the study finds that statewide improvements in standard Connecticut Mastery Test (CMT) scores reported by the Connecticut State Department of Education (SDE) between 2008 and 2009 -- the period of the largest reported gains -- were largely the result of the exclusion of students with disabilities from these standard test results, rather than overall improvements in performance. For example, 84% of the reported improvement in 4th grade math proficiency between 2008 and 2009 and 69% of the improvement in 8th grade reading proficiency could be attributed to the exclusion of these students. Much of the reported improvements in later years could also be attributed to this exclusion, though there were some modest overall gains as well.
Jeff Bernstein

Don't Worry About Your Test Scores - Finding Common Ground - Education Week - 0 views

  •  
    "We were cautioned our test scores would go down. We were told not to worry. We were told not to take these lower scores on our 2013 state assessments...personally. You'll have to excuse me, but I do. It's a bit personal when I work with students who cry because they're worried about their score on state assessments or teachers...good teachers...who worry day and night that the state assessments will make them look as if they are poor teachers."
Jeff Bernstein

Linda Darling-Hammond and Edward Haertel: 'Value-added' teacher evaluations not reliable - latimes.com - 0 views

  •  
    "It's becoming a familiar story: Great teachers get low scores from "value-added" teacher evaluation models. Newspapers across the country have published accounts of extraordinary teachers whose evaluations, based on their students' state test scores, seem completely out of sync with the reality of their practice. Los Angeles teachers have figured prominently in these reports. Researchers are not surprised by these stories, because dozens of studies have documented the serious flaws in these ratings, which are increasingly used to evaluate teachers' effectiveness. The ratings are based on value-added models such as the L.A. school district's Academic Growth over Time system, which uses complex statistical metrics to try to sort out the effects of student characteristics (such as socioeconomic status) from the effects of teachers on test scores. A study we conducted at Stanford University showed what these teachers are experiencing."
Jeff Bernstein

How Testing Is Hurting Teaching - SchoolBook - 0 views

  •  
    The New York State tests, going on now in middle and elementary schools, have always been high stakes for students, particularly in fourth and seventh grades, when their scores determine whether they end up in the very awful school they are zoned for or the very attractive magnet school that draws from a larger and more competitive pool. But the stakes have recently become equally high for teachers, whose ability to teach is being determined by their ability to improve students' test scores. Many people think it's about time. Teachers need to be held accountable for the work they are being paid to do, and many, many teachers need to get better at teaching. But tying teacher performance to student test scores is having an opposite effect: It's producing worse teachers.
Jeff Bernstein

Robo-Readers Used to Grade Test Essays - NYTimes.com - 0 views

  •  
    While his research is limited, because E.T.S. is the only organization that has permitted him to test its product, he says the automated reader can be easily gamed, is vulnerable to test prep, sets a very limited and rigid standard for what good writing is, and will pressure teachers to dumb down writing instruction. The e-Rater's biggest problem, he says, is that it can't identify truth. He tells students not to waste time worrying about whether their facts are accurate, since pretty much any fact will do as long as it is incorporated into a well-structured sentence. "E-Rater doesn't care if you say the War of 1812 started in 1945," he said. Mr. Perelman found that e-Rater prefers long essays. A 716-word essay he wrote that was padded with more than a dozen nonsensical sentences received a top score of 6; a well-argued, well-written essay of 567 words was scored a 5. An automated reader can count, he said, so it can set parameters for the number of words in a good sentence and the number of sentences in a good paragraph. "Once you understand e-Rater's biases," he said, "it's not hard to raise your test score."
Jeff Bernstein

On Report Cards for N.Y.C. Schools, Invisible Line Divides 'A' and 'F' - NYTimes.com - 0 views

  •  
    Public School 30 and Public School 179 are about as alike as two schools can be. They are two blocks apart in the South Bronx. Both are 98 percent black and Latino. At P.S. 30, 97 percent of the children qualify for subsidized lunches; at P.S. 179, 93 percent. During city quality reviews - when Education Department officials make on-site inspections - both scored "proficient." The two have received identical grades for "school environment," a rating that includes attendance and a survey of parents', teachers' and students' opinions of a school. On the state math test, P.S. 30 did better in 2011, with 41 percent of students scoring proficient - a 3 or 4 - versus 29 percent for P.S. 179. But on the state English test, P.S. 179 did better, with 36 percent of its students scoring proficient compared with 32 percent for P.S. 30. And yet, when the department calculated the most recent progress report grades, P.S. 30 received an A. And P.S. 179 received an F.
Jeff Bernstein

Tim R. Sass: Charter Schools and Student Achievement in Florida - 0 views

  •  
    I utilize longitudinal data covering all public school students in Florida to study the performance of charter schools and their competitive impact on traditional public schools. Controlling for student-level fixed effects, I find achievement initially is lower in charters. However, by their fifth year of operation new charter schools are on a par with the average traditional public school in math and produce higher reading achievement scores than their traditional public school counterparts. Among charters, those targeting at-risk and special education students demonstrate lower student achievement, while charter schools managed by for-profit entities perform no differently on average than charters run by nonprofits. Controlling for preexisting traditional public school quality, competition from charter schools is associated with modest increases in math scores and unchanged reading scores in nearby traditional public schools.
Jeff Bernstein

Researchers blast Chicago teacher evaluation reform - The Answer Sheet - The Washington Post - 0 views

  •  
    Scores of professors and researchers from 16 universities throughout the Chicago metropolitan area have signed an open letter to the city's mayor, Rahm Emanuel, and Chicago school officials warning against implementing a teacher evaluation system that is based on standardized test scores. This is the latest protest against "value-added" teacher evaluation models that purport to measure how much "value" a teacher adds to a student's academic progress by using a complicated formula involving a standardized test score. Researchers have repeatedly warned against using these methods, but school reformers have been doing it in state after state anyway. A petition in New York State by principals and others against a test-based evaluation system there has been gaining ground.
Jeff Bernstein

Does the Model Matter? Exploring the Relationship Between Different Student Achievement-Based Teacher Assessments - 0 views

  •  
    "Our findings are consistent with research that finds models including student background and classroom characteristics are highly correlated with simpler specifications that only include a single-subject lagged test score, while value-added models estimated with school or student fixed effects have a lower correlation. Interestingly, teacher effectiveness estimates based on median student growth percentiles are highly correlated with estimates from VAMs that include only a lagged test score and those that also include lagged scores and student background characteristics, despite the fact that the two methods for estimating teacher effectiveness are, at least conceptually, quite different. However, even when the correlations between job performance estimates generated by different models are quite high, differences in the composition of students in teachers' classrooms can have sizable effects on the differences in their effectiveness estimates."
Jeff Bernstein

High Performing Charter Schools: Beating The Odds, Or Beating The Test? | OurFuture.org - 0 views

  •  
    ""Odds-beating charter school." Those words are like an impenetrable shield for those who operate such places. They are also the holy grail of the education reform movement, which is constantly seeking shortcuts to radically increase measures of educational achievement, which these days is pretty much defined by increased math and language test scores. One problem with radical test score gains, as many researchers have noted, is that miraculous improvements in test scores over short periods of time are more often the result of cheating, student skimming, or other test manipulation. We've seen this pattern repeated all over the nation, starting with the so-called Texas Miracle under former US education secretary Rod Paige's oversight."
Jeff Bernstein

Test Scores Often Misused In Policy Decisions - 0 views

  •  
    Education policies that affect millions of students have long been tied to test scores, but a new paper suggests those scores are regularly misinterpreted. According to the new research out of Mathematica, a statistical research group, the comparisons sometimes used to judge school performance are more indicative of demographic change than actual learning.
Jeff Bernstein

Using Test Scores to Evaluate Teachers Is Based on the Wrong Values - SchoolBook - 0 views

  •  
    I should be a cheerleader for the New York evaluation system for educators known as the Annual Professional Performance Review system, or A.P.P.R. I am the principal of a very successful high school where students get great test scores. I have a wonderfully supportive superintendent. My personal "score," in all probability, will be high. The right question to ask, however, is not whether this evaluation system is good or bad for adults, but rather whether it is good or bad for students.
Jeff Bernstein

Does President Obama Know What Race to the Top Is? - Bridging Differences - Education Week - 0 views

  •  
    I don't know about you, but I am growing convinced that President Barack Obama doesn't know what Race to the Top is. I don't think he really understands what his own administration is doing to education. In his State of the Union address last week, he said that he wanted teachers to "stop teaching to the test." He also said that teachers should teach with "creativity and passion." And he said that schools should reward the best teachers and replace those who weren't doing a good job. To "reward the best" and "fire the worst," states and districts are relying on test scores. The Race to the Top says they must. Deconstruct this. Teachers would love to "stop teaching to the test," but Race to the Top makes test scores the measure of every teacher. If teachers take the President's advice (and they would love to!), their students might not get higher test scores every year, and teachers might be fired, and their schools might be closed. Why does President Obama think that teachers can "stop teaching to the test" when their livelihood, their reputation, and the survival of their school depend on the outcome of those all-important standardized tests?
Jeff Bernstein

Teachers Tell Parents to See Test Scores as 'Snapshots' - SchoolBook - 0 views

  •  
    On Monday, parents and guardians will be able to access individual student scores from this year's state tests. SchoolBook asked some teachers to help put the scores in the context of classroom learning. Their overall response: consider the test results as a snapshot and take them with a proverbial grain of salt, or two.
Jeff Bernstein

New Study Shows Irrelevance of Gains on State Tests. « Diane Ravitch's blog - 0 views

  •  
    An important new study by Professors Adam Maltese of Indiana University and Craig Hochbein of the University of Louisville sheds new light on the validity of state scores. This study found that rising scores on the state tests did not correlate with improved performance on the ACT. In fact, students at "declining" schools did just as well as, and sometimes better than, students at schools where the scores were going up. The study was published in the Journal of Research in Science Teaching. Its title is "The Consequences of 'School Improvement': Examining the Association Between Two Standardized Assessments Measuring School Improvement and Student Science Achievement."
Jeff Bernstein

Shanker Blog » Guessing About NAEP Results - 1 views

  •  
    Every two years, the release of data from the National Assessment of Educational Progress (NAEP) generates a wave of research and commentary trying to explain short- and long-term trends. For instance, there have been a bunch of recent attempts to "explain" an increase in aggregate NAEP scores during the late 1990s and 2000s. Some analyses postulate that the accountability provisions of NCLB were responsible, while more recent arguments have focused on the "effect" (or lack thereof) of newer market-based reforms - for example, looking to NAEP data to "prove" or "disprove" the idea that changes in teacher personnel and other policies have (or have not) generated "gains" in student test scores. The basic idea here is that, for every increase or decrease in cross-sectional NAEP scores over a given period of time (both for all students and especially for subgroups such as minority and low-income students), there must be "something" in our education system that explains it. In many (but not all) cases, these discussions consist of little more than speculation.
Jeff Bernstein

IMPACTed Wisdom Truth? | Gary Rubinstein's Blog - 0 views

  •  
    Today, the day of the release of the New York City data, I received an email that I did not expect to come for at least a year. In D.C. the evaluation process is called IMPACT. About 500 teachers in D.C. belong to something called "group one," which means that they teach something that can be measured with the value-added formula. 50% of their evaluation is based on their IVA (individual value-added), 35% on their principal evaluation, called their TLF (teaching and learning framework), 5% on their SVA (school value-added), and the remaining 10% on their CSC (commitment to school and community). I wanted to test my theory that the value-added scores would not correlate with the principal evaluations, so I filed a Freedom of Information Act (FOIA) request with D.C. schools for the principal evaluation scores and the value-added scores for all group one teachers (without their names). I fully expected to wait about a year or two and then be denied. To my surprise, it only took a few months and they did provide a 500-row spreadsheet.
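The IMPACT composite described in this post is a straightforward weighted sum of four components. A quick sketch of that arithmetic (the component scale and the example teacher's scores are hypothetical; only the weights 50% IVA, 35% TLF, 5% SVA, and 10% CSC come from the post):

```python
# Weights for D.C.'s IMPACT composite for "group one" teachers, as
# described in the post. Example component scores below are invented.
WEIGHTS = {"IVA": 0.50, "TLF": 0.35, "SVA": 0.05, "CSC": 0.10}

def impact_composite(components):
    """Weighted sum of the four IMPACT components (same scale assumed)."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# A hypothetical teacher scored on a 1-4 scale for each component:
teacher = {"IVA": 3.2, "TLF": 3.8, "SVA": 2.9, "CSC": 3.5}
print(round(impact_composite(teacher), 3))  # 0.5*3.2 + 0.35*3.8 + 0.05*2.9 + 0.1*3.5
```

Note how heavily the composite leans on IVA: half the final score rides on the value-added estimate, which is exactly why the author wanted to see whether IVA tracked the principal-assigned TLF scores.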
Jeff Bernstein

Linda Darling-Hammond: Value-Added Evaluation Hurts Teaching - 0 views

  •  
    As student learning is the primary goal of teaching, it seems like common sense to evaluate teachers based on how much their students gain on state standardized tests. Indeed, many states have adopted this idea in response to federal incentives tied to much-needed funding. However, previous experience is not promising. Recently evaluated experiments in Tennessee and New York did not improve achievement when teachers were evaluated and rewarded based on student test scores. In the District of Columbia, contrary to expectations, reading scores on national tests dropped and achievement gaps grew after a new test-based teacher-evaluation system was installed. In Portugal, a study of test-based merit pay attributed score declines to the negative effects of teacher competition, leading to less collaboration and sharing of knowledge. I was once bullish on the idea of using "value-added methods" for assessing teacher effectiveness. I have since realized that these measures, while valuable for large-scale studies, are seriously flawed for evaluating individual teachers, and that rigorous, ongoing assessment by teaching experts serves everyone better. Indeed, reviews by the National Research Council, the RAND Corp., and the Educational Testing Service have all concluded that value-added estimates of teacher effectiveness should not be used to make high-stakes decisions about teachers. Why?
Jeff Bernstein

LAUSD won't release teacher names with 'value-added' scores - latimes.com - 0 views

  •  
    The Los Angeles Unified School District has declined to release to The Times the names of teachers and their scores indicating their effectiveness in raising student performance. The nation's second-largest school district calculated confidential "academic growth over time" ratings for about 12,000 math and English teachers last year. This fall, the district issued new ones to about 14,000 instructors that can also be viewed by their principals. The scores are based on an analysis of a student's performance on several years of standardized tests and estimate a teacher's role in raising or lowering student achievement.
Items 21 - 40 of 562