Home / Middle School Matters / Group items tagged test

Troy Patterson

The Test of the Common Core | E. D. Hirsch, Jr. - 0 views

  • Here's the follow-up post to "Why I'm For the Common Core." It explains why we should be leery of the forthcoming "core-aligned" tests -- especially those in English Language Arts that people are rightly anxious about.
  • These tests could endanger the promise of the Common Core.
  • The first thing I'd want to do if I were younger would be to launch an effective court challenge to value-added teacher evaluations on the basis of test scores in reading comprehension. The value-added approach to teacher evaluation in reading is unsound both technically and in its curriculum-narrowing effects. The connection between job ratings and tests in ELA has been a disaster for education.
  • ...6 more annotations...
  • My analysis of them showed what anyone immersed in reading research would have predicted: The value-added data are modestly stable for math, but are fuzzy and unreliable for reading.
  • Math tests are based on the school curriculum. What a teacher does in the math classroom affects student test scores. But reading-comprehension tests are not based on the school curriculum. (How could they be if there's no set curriculum?) Rather, they are based on the general knowledge that students have gained over their life span from all sources -- most of them outside the school.
  • The whole project is unfair to teachers, ill-conceived, and educationally disastrous. The teacher-rating scheme has usurped huge amounts of teaching time in anxious test-prep. Paradoxically, the evidence shows that test-prep ceases to be effective after about six lessons.
  • the inadequate theories of reading-comprehension that have dominated the schools -- mainly the unfounded theory that, when students reach a certain level of "reading skill," they can read anything at that level.
  • The Common Core-aligned tests of reading comprehension will naturally contain text passages and questions about those passages. To the extent such tests claim to assess "critical thinking" and "general" reading-comprehension skill, we should hold on to our wallets. They will be only rough indexes of reading ability -- probably no better than the perfectly adequate and well-validated reading tests they mean to replace.
  • The solution to the test-prep conundrum is this: First, institute in every participating state the specific and coherent curriculum that the Common Core Standards explicitly call for. (It's passing odd to introduce "Common Core" tests before there's an actual core to be tested.)
Troy Patterson

CURMUDGUCATION: Norms vs. Standards - 1 views

  • A standards-referenced test compares every student to the standard set by the test giver. A norm-referenced test compares every student to every other student. The lines between different levels of achievement will be set after the test has been taken and corrected. Then the results are laid out, and the lines between levels (cut scores) are set.
  • When I give my twenty word spelling test, I can't set the grade levels until I correct it. Depending on the results, I may "discover" that an A is anything over a fifteen, twelve is Doing Okay, and anything under nine is failing. Or I may find that twenty is an A, nineteen is okay, and eighteen or less is failing. If you have ever been in a class where grades are curved, you were in a class that used norm referencing.
  • With standards referencing, we can set a solid immovable line between different levels of achievement, and we can do it before the test is even given. This week I'm giving a spelling test consisting of twenty words. Before I even give the test, I can tell my class that if they get eighteen or more correct, they get an A, if they get sixteen correct, they did okay, and if they get thirteen or fewer correct, they fail.
  • ...4 more annotations...
  • Norm referencing is why, even in this day and age, you can't just take the SAT on a computer and have your score the instant you click on the final answer-- the SAT folks can't figure out your score until they have collected and crunched all the results. And in the case of the IQ test, 100 is always set to be "normal."
  • There are several important implications and limitations for norm-referencing. One is that they are lousy for showing growth, or lack thereof.
  • Norm referencing also gets us into the Lake Wobegon Effect.
  • On a standards-referenced test, it is possible for everyone to get an A. On a norm-referenced test, it is not possible for everyone to get an A. Nobody has to flunk a standards-referenced test. Somebody has to flunk a norm-referenced test.
  •  
    "Ed History 101"
Troy Patterson

The Sabermetrics of Effort - Jonah Lehrer - 0 views

  • The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players that possess them tend to toil in obscurity - they can be identified using sophisticated statistical techniques, aka sabermetrics. Home runs are fun. On-base percentage is crucial.
  • The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll,
  • However, the triumph of moneyball creates a paradox, since its success depends on the very market inefficiencies it exposes. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.
  • ...14 more annotations...
  •  One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free-agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.
  • If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.
  • despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimer and Wicker set out to fix this oversight. Using data gathered from three seasons and 1514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance,
  • So did these differences in levels of effort matter? The answer is an emphatic yes: teams with players that run longer distances are more likely to win the game,
  • As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
  • There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports.
  • Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.
  • Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested.
  • As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed according to the measure of typical performance, and vice versa.
  • What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort.
  • In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.
  • With any luck, these sabermetric innovations will trickle down to education, which is still mired in maximum high-stakes tests that fail to directly measure or improve the levels of effort put forth by students.
  • After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.
  • Old-fashioned effort just might be the next on-base percentage.
Troy Patterson

This Week In Education: Thompson: How Houston's Test and Punish Policies Fail - 0 views

  • I often recall Houston's Apollo 20 experiment, designed to bring "No Excuses" charter school methods to neighborhood schools. Its output-driven, reward-and-punish policies failed. It was incredibly expensive, costing $52 million, and it didn't increase reading scores. Intensive math tutoring produced test score gains in that subject. The only real success was due to the old-fashioned, win-win, input-driven method of hiring more counselors.
  • Michels finds no evidence that Grier's test-driven accountability has benefitted students, but he describes the great success of constructive programs that build on kids' strengths and provide them more opportunities.
  • With the help of local philanthropies, however, Houston has introduced a wide range of humane, holistic, and effective programs. Michels starts with Las Americas Newcomer School, which is "on paper a failing school." It offers group therapy and social workers who help immigrants "navigate bureaucratic barriers—like proof of residency or vaccination records." He then describes outstanding early education programs that are ready to be scaled up, such as  the Gabriela Mistral Center for Early Childhood, and Project Grad which has provided counseling and helped more than 7,600 students go to college.
  • ...5 more annotations...
  • Children who attended the Neighborhood Centers' Head Start program produce higher test scores - as high as 94% proficient in 3rd grade reading.
  • It agreed with the program's chief advocate, Roland Fryer, that the math tutoring showed results but doubted that the score increases were sustainable.
  • but who says, “At the end of the day, you need to show up on time, you need to have the right mindset for work and you probably need to read, write and understand science." In other words, test scores might be important, but it is the immeasurable social and emotional factors that really matter.
  • What if we shifted the focus from the weaknesses of students and teachers to a commitment to building on the positive?
  • Grier's test and punish policies have already failed and been downsized. Of course, I would like to hear an open acknowledgement that test-driven reform was a dead end. But, most likely, systems will just let data-driven accountability quietly shrivel and die. Then, we can commit to the types of win-win policies that have a real chance of helping poor children of color.
Troy Patterson

BBC - Future - Psychology: A simple trick to improve your memory - 0 views

  • One of the interesting things about the mind is that even though we all have one, we don't have perfect insight into how to get the best from it.
  • Karpicke and Roediger asked students to prepare for a test in various ways, and compared their success
  • On the final exam differences between the groups were dramatic. While dropping items from study didn’t have much of an effect, the people who dropped items from testing performed relatively poorly: they could only remember about 35% of the word pairs, compared to 80% for people who kept testing items after they had learnt them.
  • ...3 more annotations...
  • dropping items entirely from your revision, which is the advice given by many study guides, is wrong. You can stop studying them if you've learnt them, but you should keep testing what you've learnt if you want to remember them at the time of the final exam.
  • the researchers had the neat idea of asking their participants how well they would remember what they had learnt. All groups guessed at about 50%. This was a large overestimate for those who dropped items from testing (and an underestimate for those who kept testing learnt items).
  • But the evidence has a moral for teachers as well: there's more to testing than finding out what students know – tests can also help us remember.
Troy Patterson

Principal: Why our new educator evaluation system is unethical - 0 views

  • A few years ago, a student at my high school was having a terrible time passing one of the exams needed to earn a Regents Diploma.
  • Mary has a learning disability that truly impacts her retention and analytical thinking.
  • Because she was a special education student, at the time there was an easier exam available, the RCT, which she could take and then use to earn a local high school diploma instead of the Regents Diploma.
  • ...16 more annotations...
  • Regents Diploma serves as a motivator for our students while providing an objective (though imperfect) measure of accomplishment.
  • If they do not pass a test the first time, it is not awful if they take it again—we use it as a diagnostic, help them fill the learning gaps, and only the passing score goes on the transcript
  • in Mary’s case, to ask her to take that test yet once again would have been tantamount to child abuse.
  • Mary’s story, therefore, points to a key reason why evaluating teachers and principals by test scores is wrong.
  • It illustrates how the problems with value-added measures of performance go well beyond the technicalities of validity and reliability.
  • The basic rule is this: No measure of performance used for high-stakes purposes should put the best interests of students in conflict with the best interests of the adults who serve them.
  • I will just point out that under that system I may be penalized if future students like Mary do not achieve a 65 on the Regents exam.
  • Mary and I can still make the choice to say “enough”, but it may cost me a “point”, if a majority of students who had the same middle school scores on math and English tests that she did years before, pass the test.
  • But I can also be less concerned about the VAM-based evaluation system because it’s very likely to be biased in favor of those like me who lead schools that have only one or two students like Mary every year.
  • When we have an ELL (English language learner) student with interrupted education arrive at our school, we often consider a plan that includes an extra year of high school.
  • last few years “four year graduation rates” are of high importance
  • four-year graduation rate as a high-stakes measure has resulted in the proliferation of “credit recovery” programs of dubious quality, along with teacher complaints of being pressured to pass students with poor attendance and grades, especially in schools under threat of closure.
  • On the one hand, they had a clear incentive to “test prep” for the recent Common Core exams, but they also knew that test prep was not the instruction that their students needed and deserved.
  • in New York and in many other Race to the Top states, continue to favor “form over substance” and allow the unintended consequences of rushed models to be put in place.
  • Creating bell curves of relative educator performance may look like progress and science, but these are measures without meaning, and they do not help schools improve.
  • We can raise every bar and continue to add high-stakes measures. Or we can acknowledge and respond to the reality that school improvement takes time, capacity building, professional development, and financial support at the district, state and national levels.
Ron King

Ethical & Effective Ways to Prepare Students for Testing (MiddleWeb) - 0 views

  •  
    "So much rides on the results of standardized tests these days. They're even talking about making student scores worth 50 percent of my own evaluation and using them to determine my pay! I don't want to spend weeks "drilling and killing" my students with test-prep work sheets. What am I supposed to do?" - A teacher's question
Troy Patterson

A surprising new argument against using kids' test scores to grade their teachers - The... - 1 views

  • This dispute is just one example of the mathematical acrobatics required to isolate the effect of one teacher on their students' test scores, when so many other factors inside and outside the school's walls affect how students perform.
  • When a teacher whose students do well on tests moves to a school where test scores were improving the previous year, and average scores continue improving after that teacher arrives, it is hard to know how much of that continued improvement is due to the new teacher and how much to other factors.
Ron King

Connecting test scores to teacher evaluations: Why not? | Dangerously Irrelevant - 0 views

  •  
    Mike Wiser at The Quad-City Times reported today on the controversy here in Iowa around connecting student test scores to teacher evaluations (aka 'value-added modeling' or 'VAM'). Last week I shared the research and prevailing opinion of scholars supporting why this should not be done.
Troy Patterson

Updating Data-Driven Instruction and the Practice of Teaching | Larry Cuban on School R... - 0 views

  • I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific.
  • Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.
  • Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences.
  • ...10 more annotations...
  • Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.
  • Teachers’ informal assessments of students gathered information directly and  would lead to altered lessons.
  • In the 1990s and, especially after No Child Left Behind became law in 2002, the electronic gathering of data, disaggregating information by groups and individuals, and then applying lessons learned from analysis of tests and classroom practices became a top priority.
  • Now, principals and teachers are awash in data.
  • How do teachers use the massive data available to them on student performance?
  • studied four elementary school grade-level teams in how they used data to improve lessons. She found that supportive principals and superintendents and habits of collaboration increased use of data to alter lessons in two of the cases but not in the other two.
  • Julie Marsh and her colleagues found 15 where teachers used annual tests, for example, in basic ways to target weaknesses in professional development or to schedule double periods of language arts for English language learners.
  • These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions  in these two districts.
  • Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption not a fact.
  • Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Troy Patterson

Message to My Freshman Students | Keith M. Parsons - 1 views

  • Your teachers were not allowed to teach, but were required to focus on preparing you for those all-important standardized tests.
  • Your teachers were held responsible if you failed, and expected to show that they had tried hard to avoid that dreaded result.
  • First, I am your professor, not your teacher. There is a difference.
  • ...7 more annotations...
  • Teachers are evaluated on the basis of learning outcomes, generally as measured by standardized tests. If you don't learn, then your teacher is blamed.
  • We should not foolishly expect them to listen to us, but instead cater to their conditioned craving for constant stimulation.
  • Hogwash. You need to learn to listen.
  • Critical listening means that you are not just hearing but thinking about what you are hearing. Critical listening questions and evaluates what is being said and seeks key concepts and unifying themes. Your high school curriculum would have served you better had it focused more on developing your listening skills than on drilling you on test-taking.
  • For an academic, there is something sacred about a citation. The proper citation of a source is a small tribute to the hard work, diligence, intelligence and integrity of someone dedicated enough to make a contribution to knowledge.
  • For you, citations and bibliographies are pointless hoops to jump through and you often treat these requirements carelessly.
  • Your professor still harbors the traditional view that universities are about education. If your aim is to get a credential, then for you courses will be obstacles in your path. For your professor, a course is an opportunity for you to make your world richer and yourself stronger.
Troy Patterson

Minnesota schools hit glitches with online testing - TwinCities.com - 0 views

  •  
    "just turn your security off"
Troy Patterson

Hybrid Classes Outlearn Traditional Classes -- THE Journal - 0 views

  • Students in hybrid classrooms outperformed their peers in traditional classes in all grades and subjects, according to the newest study from two organizations that work with schools in establishing hybrid instruction.
  • The results come out of those classes where students either took the Pennsylvania System of School Assessment (PSSA) tests or Keystone Exams to measure academic achievement.
  • In one example, hybrid learning eighth grade math students at Hatboro-Horsham School District (PA) passed the PSSA tests and Keystone Exams at a rate 10 percent higher than their non-hybrid peers in five schools.
  • ...4 more annotations...
  • In another example, third grade math students in the hybrid learning program at Pennsylvania's Indiana Area School District outperformed students in traditional classes by 10 percentage points on the PSSA exams.
  • scored proficient or advanced on PSSA tests at a rate 23 percent higher than the previous year with gains in all subjects: reading (up 20 percent), math (up 24 percent) and science (up 27 percent).
  • "We use a rigorous accountability system that helps us measure and report on hybrid classroom outcomes," said Dellicker President and CEO Kevin Dellicker.
  • The cost of implementing hybrid learning through the Institute's model could be considered modest. During the 2013-2014 school year, according to the report, the schools spent an average of $220 per student (not including computing devices) to transform their learning models.