
Middle School Matters – Group items tagged “performance”


Troy Patterson

The Sabermetrics of Effort - Jonah Lehrer - 0 views

  • The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players that possess them tend to toil in obscurity - they can be identified using sophisticated statistical techniques, aka sabermetrics. Home runs are fun. On-base percentage is crucial.
  • The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll.
  • However, the triumph of moneyball creates a paradox, since its success depends on the very market inefficiencies it exposes. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.
  •  One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free-agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.
  • If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.
  • Despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimer and Wicker set out to fix this oversight. Using data gathered from three seasons and 1,514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance.
  • So did these differences in levels of effort matter? The answer is an emphatic yes: teams with players that run longer distances are more likely to win the game.
  • As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
  • There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports.
  • Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.
  • Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested.
  • As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed according to the measure of typical performance, and vice versa.
  • What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort.
  • In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.
  • With any luck, these sabermetric innovations will trickle down to education, which is still mired in maximum high-stakes tests that fail to directly measure or improve the levels of effort put forth by students.
  • After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.
  • Old-fashioned effort just might be the next on-base percentage.
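The Bundesliga analysis described above can be sketched in miniature. The snippet below uses made-up per-game distances (not the Weimer and Wicker dataset, which covered 1,514 games and controlled for many other variables) to show the basic check: does the harder-running side win more often?

```python
# Toy check of effort vs. winning, in the spirit of the Bundesliga study.
# All numbers are invented for illustration only.

games = [
    # (home_km_run, away_km_run, home_won)
    (115.2, 112.8, True),
    (110.1, 113.4, False),
    (118.0, 111.5, True),
    (114.9, 109.7, False),   # the harder-running side lost this one
    (116.3, 110.2, True),
]

# Count games in which the side that ran farther also won.
harder_runner_wins = sum(
    1 for home_km, away_km, home_won in games
    if (home_km > away_km) == home_won
)
win_rate = harder_runner_wins / len(games)
print(f"Harder-running side won {win_rate:.0%} of these games")  # 80% here
```

The economists also looked at the spread of effort within a team (“some players run a lot while others are relatively lazy”); a fuller sketch would regress the win outcome on both a team’s total distance and its dispersion across players.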
Ron King

Can't We Do Better? - NYTimes.com - 0 views

  •  
    THE latest results in the Program for International Student Assessment, or PISA, which compare how well 15-year-olds in 65 cities and countries can apply math, science and reading skills to solve real-world problems, were released last week, and it wasn't pretty for the home team. Andreas Schleicher, who manages PISA, told the Department of Education: "Three years ago, I came here with a special report benchmarking the U.S. against some of the best performing and rapidly improving education systems. Most of them have pulled further ahead, whether it is Brazil that advanced from the bottom, Germany and Poland that moved from adequate to good, or Shanghai and Singapore that moved from good to great. The math results of top-performer Shanghai are now two-and-a-half school years ahead even of those in Massachusetts - itself a leader within the U.S."
Troy Patterson

Principal: Why our new educator evaluation system is unethical - 0 views

  • A few years ago, a student at my high school was having a terrible time passing one of the exams needed to earn a Regents Diploma.
  • Mary has a learning disability that truly impacts her retention and analytical thinking.
  • Because she was a special education student, at the time there was an easier exam available, the RCT, which she could take and then use to earn a local high school diploma instead of the Regents Diploma.
  • Regents Diploma serves as a motivator for our students while providing an objective (though imperfect) measure of accomplishment.
  • If they do not pass a test the first time, it is not awful if they take it again—we use it as a diagnostic, help them fill the learning gaps, and only the passing score goes on the transcript
  • In Mary’s case, to ask her to take that test yet again would have been tantamount to child abuse.
  • Mary’s story, therefore, points to a key reason why evaluating teachers and principals by test scores is wrong.
  • It illustrates how the problems with value-added measures of performance go well beyond the technicalities of validity and reliability.
  • The basic rule is this: No measure of performance used for high-stakes purposes should put the best interests of students in conflict with the best interests of the adults who serve them.
  • I will just point out that under that system I may be penalized if future students like Mary do not achieve a 65 on the Regents exam.
  • Mary and I can still make the choice to say “enough,” but it may cost me a “point” if a majority of students who had the same middle school scores on math and English tests that she did years before pass the test.
  • But I can also be less concerned about the VAM-based evaluation system because it’s very likely to be biased in favor of those like me who lead schools that have only one or two students like Mary every year.
  • When we have an ELL (English language learner) student with interrupted education arrive at our school, we often consider a plan that includes an extra year of high school.
  • In the last few years, “four-year graduation rates” have been of high importance.
  • four-year graduation rate as a high-stakes measure has resulted in the proliferation of “credit recovery” programs of dubious quality, along with teacher complaints of being pressured to pass students with poor attendance and grades, especially in schools under threat of closure.
  • On the one hand, they had a clear incentive to “test prep” for the recent Common Core exams, but they also knew that test prep was not the instruction that their students needed and deserved.
  • New York and many other Race to the Top states continue to favor “form over substance” and allow the unintended consequences of rushed models to be put in place.
  • Creating bell curves of relative educator performance may look like progress and science, but these are measures without meaning, and they do not help schools improve.
  • We can raise every bar and continue to add high-stakes measures. Or we can acknowledge and respond to the reality that school improvement takes time, capacity building, professional development, and financial support at the district, state and national levels.
Shawn McGirr

Arcademic Skill Builders: Online Educational Games - 4 views

  •  
    Academic games that track performance.
Ron King

Policy Analysis for California Education (PACE) - 0 views

  •  
    Policy Analysis for California Education (PACE) is an independent, non-partisan research center based at Stanford University, the University of California - Berkeley, and the University of Southern California. PACE seeks to define and sustain a long-term strategy for comprehensive policy reform and continuous improvement in performance at all levels of California's education system, from early childhood to post-secondary education and training. PACE bridges the gap between research and policy, working with scholars from California's leading universities and with state and local policymakers to increase the impact of academic research on educational policy in California.
Troy Patterson

Experts Say Measuring Non-Cognitive Skills Won't Work, But Districts Still Try | MindSh... - 0 views

  • Federal education law now requires one non-academic measure of school progress, which has led some districts to consider including students’ social and emotional growth as a performance measure.
  • She writes that even the researchers who popularized terms like “grit” think using it to measure school effectiveness is a bad idea:
Troy Patterson

Trouble with Rubrics - 0 views

  • She realized that her students, presumably grown accustomed to rubrics in other classrooms, now seemed “unable to function unless every required item is spelled out for them in a grid and assigned a point value.  Worse than that,” she added, “they do not have confidence in their thinking or writing skills and seem unwilling to really take risks.”[5]
  • This is the sort of outcome that may not be noticed by an assessment specialist who is essentially a technician, in search of practices that yield data in ever-greater quantities.
  • The fatal flaw in this logic is revealed by a line of research in educational psychology showing that students whose attention is relentlessly focused on how well they’re doing often become less engaged with what they're doing.
  • it’s shortsighted to assume that an assessment technique is valuable in direct proportion to how much information it provides.
  • Studies have shown that too much attention to the quality of one’s performance is associated with more superficial thinking, less interest in whatever one is doing, less perseverance in the face of failure, and a tendency to attribute the outcome to innate ability and other factors thought to be beyond one’s control.
  • As one sixth grader put it, “The whole time I’m writing, I’m not thinking about what I’m saying or how I’m saying it.  I’m worried about what grade the teacher will give me, even if she’s handed out a rubric.  I’m more focused on being correct than on being honest in my writing.”[8]
  • she argues, assessment is “stripped of the complexity that breathes life into good writing.”
  • High scores on a list of criteria for excellence in essay writing do not mean that the essay is any good because quality is more than the sum of its rubricized parts.
  • Wilson also makes the devastating observation that a relatively recent “shift in writing pedagogy has not translated into a shift in writing assessment.”
  • Teachers are given much more sophisticated and progressive guidance nowadays about how to teach writing but are still told to pigeonhole the results, to quantify what can’t really be quantified.
  • Consistent and uniform standards are admirable, and maybe even workable, when we’re talking about, say, the manufacture of DVD players.  The process of trying to gauge children’s understanding of ideas is a very different matter, however.
  • Rubrics are, above all, a tool to promote standardization, to turn teachers into grading machines or at least allow them to pretend that what they’re doing is exact and objective. 
  • The appeal of rubrics is supposed to be their high interrater reliability, finally delivered to language arts.
  • Just as it’s possible to raise standardized test scores as long as you’re willing to gut the curriculum and turn the school into a test-preparation factory, so it’s possible to get a bunch of people to agree on what rating to give an assignment as long as they’re willing to accept and apply someone else’s narrow criteria for what merits that rating. 
  • Once we check our judgment at the door, we can all learn to give a 4 to exactly the same things.
Troy Patterson

BBC - Future - Psychology: A simple trick to improve your memory - 0 views

  • One of the interesting things about the mind is that even though we all have one, we don't have perfect insight into how to get the best from it.
  • Karpicke and Roediger asked students to prepare for a test in various ways, and compared their success
  • On the final exam differences between the groups were dramatic. While dropping items from study didn’t have much of an effect, the people who dropped items from testing performed relatively poorly: they could only remember about 35% of the word pairs, compared to 80% for people who kept testing items after they had learnt them.
  • Dropping items entirely from your revision, which is the advice given by many study guides, is wrong. You can stop studying items once you’ve learnt them, but you should keep testing what you’ve learnt if you want to remember it at the time of the final exam.
  • The researchers had the neat idea of asking their participants how well they would remember what they had learnt. All groups guessed at about 50%. This was a large overestimate for those who dropped items from testing (and an underestimate for those who kept testing learnt items).
  • But the evidence has a moral for teachers as well: there's more to testing than finding out what students know – tests can also help us remember.
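The study’s advice (stop studying learnt items if you like, but never stop testing them) can be sketched as a revision loop. Everything here is illustrative: made-up word pairs, and a recall attempt that is assumed to always succeed.

```python
def revise(pairs, rounds=3):
    """Study-then-test loop that drops learnt items from studying only."""
    learnt = set()
    study_pool = set(pairs)   # shrinks as items are learnt
    test_pool = set(pairs)    # never shrinks: learnt items stay testable
    for _ in range(rounds):
        for cue in study_pool:
            _ = pairs[cue]            # (re)read the pair
        for cue in test_pool:
            learnt.add(cue)           # stand-in for a successful recall
        study_pool -= learnt          # stop *studying* learnt items...
        # ...but keep *testing* them: test_pool is left intact
    return learnt

word_pairs = {"casa": "house", "perro": "dog", "libro": "book"}
print(sorted(revise(word_pairs)))
```

The only point the sketch encodes is the asymmetry between the two pools; the 35% vs. 80% recall gap in the study came from real recall attempts, not the always-succeed stand-in used here.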
Troy Patterson

Updating Data-Driven Instruction and the Practice of Teaching | Larry Cuban on School R... - 0 views

  • I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific.
  • Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.
  • Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences.
  • Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.
  • Teachers’ informal assessments of students gathered information directly and  would lead to altered lessons.
  • In the 1990s and, especially after No Child Left Behind became law in 2002, the electronic gathering of data, disaggregating information by groups and individuals, and then applying lessons learned from analysis of tests and classroom practices became a top priority.
  • Now, principals and teachers are awash in data.
  • How do teachers use the massive data available to them on student performance?
  • studied four elementary school grade-level teams in how they used data to improve lessons. She found that supportive principals and superintendents and habits of collaboration increased use of data to alter lessons in two of the cases but not in the other two.
  • Julie Marsh and her colleagues found 15 instances where teachers used annual tests, for example, in basic ways to target weaknesses in professional development or to schedule double periods of language arts for English language learners.
  • These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions in these two districts.
  • Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption not a fact.
  • Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Troy Patterson

A surprising new argument against using kids' test scores to grade their teachers - The... - 1 views

  • This dispute is just one example of the mathematical acrobatics required to isolate the effect of one teacher on their students' test scores, when so many other factors inside and outside the school's walls affect how students perform.
  • When a teacher whose students do well on tests moves to a school where test scores were improving the previous year, and average scores continue improving after that teacher arrives, it is hard to know how much of that continued improvement is due to the new teacher and how much to other factors.
Troy Patterson

10 Things I Wish I Knew My First Year Of Teaching - 1 views

  • 1. Prioritize—and then prioritize again.
  • 2. It’s not your classroom.
  • 3. Students won’t always remember the content, but many will never forget how you made them feel.
  • 4. Get cozy with the school custodians, secretary, librarian.
  • 5. Longer hours aren’t sustainable.
  • 6. Student behavior is a product.
  • 7. Don’t get sucked into doing too much outside of your class.
  • 8. Help other teachers.
  • 9. Reaching students emotionally matters. A lot.
  • 10. Literacy is everything for academic performance.
Troy Patterson

Curiosity Is a Unique Marker of Academic Success - The Atlantic - 0 views

  • Yet in actual schools, curiosity is drastically underappreciated.
  • The power of curiosity to contribute not only to high achievement, but also to a fulfilling existence, cannot be emphasized enough.
  • When Orville Wright, of the Wright brothers fame, was told by a friend that he and his brother would always be an example of how far someone can go in life with no special advantages, he emphatically responded, “to say we had no special advantages … the greatest thing in our favor was growing up in a family where there was always much encouragement to intellectual curiosity.”
  • They initiated their study in 1979, and have been assessing the participants based on a wide range of variables (e.g., school performance, IQ, leadership, happiness) across multiple contexts (laboratory and home) since.
  • Cognitive giftedness matters.
  • While intellectually gifted children did not differ from the comparison group in temperament or in behavioral, social, or emotional functioning, they did differ in their advanced sensory and motor functioning starting at age 1.5, their ability to understand the meaning of words starting at age 1, and their ability to both understand and communicate information thereafter.
  • Parents of intellectually gifted children reported similar observations and were more likely than those of average children to say that their kids actively elicited stimulation by, for example, requesting intellectual extracurricular activities.
  • The researchers also measured what they described as academic intrinsic motivation and identified the top 19 percent of the 111 adolescent participants as “motivationally gifted,” displaying extreme enjoyment of school and of learning of challenging, difficult, and novel tasks and an orientation toward mastery, curiosity, and persistence.
  • Interestingly, they found very little correspondence between intellectual giftedness and motivational giftedness.
  • Students with gifted curiosity outperformed their peers on a wide range of educational outcomes, including math and reading, SAT scores, and college attainment. According to ratings from teachers, the motivationally gifted students worked harder and learned more.
  • The findings suggest that gifted curiosity is a distinct characteristic that contributes uniquely to academic success.
  • “Motivation should not be considered simply a catalyst for the development of other forms of giftedness, but should be nurtured in its own right,”
  • All in all, the Fullerton study is proof that giftedness is not something an individual is either born with or without—giftedness is clearly a developmental process.
  • “giftedness is not a chance event … giftedness will blossom when children’s cognitive ability, motivation and enriched environments coexist and meld together to foster its growth.”