Home/ Middle School Matters/ Group items tagged professional


Troy Patterson

Free Technology for Teachers: Teachers Interviewing Teachers - Reflective Practice - 0 views

  • Great idea for Professional Development.
Troy Patterson

Updating Data-Driven Instruction and the Practice of Teaching | Larry Cuban on School R... - 0 views

  • I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific.
  • Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.
  • Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences.
  • Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.
  • Teachers’ informal assessments gathered information directly and led to altered lessons.
  • In the 1990s and, especially after No Child Left Behind became law in 2002, the electronic gathering of data, disaggregating information by groups and individuals, and then applying lessons learned from analysis of tests and classroom practices became a top priority.
  • Now, principals and teachers are awash in data.
  • How do teachers use the massive data available to them on student performance?
  • One researcher studied four elementary school grade-level teams to see how they used data to improve lessons. She found that supportive principals and superintendents, and habits of collaboration, increased the use of data to alter lessons in two of the cases but not in the other two.
  • Julie Marsh and her colleagues found 15 instances where teachers used annual tests, for example, in basic ways to target weaknesses in professional development or to schedule double periods of language arts for English language learners.
  • These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions in these two districts.
  • Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption, not a fact.
  • Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Troy Patterson

The Sabermetrics of Effort - Jonah Lehrer - 0 views

  • The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players that possess them tend to toil in obscurity – they can be identified using sophisticated statistical techniques, aka sabermetrics. Home runs are fun. On-base percentage is crucial.
  • The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll…
  • However, the triumph of moneyball creates a paradox, since its success depends on the very market inefficiencies it exposes. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.
  • One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free-agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.
  • If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.
  • Despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimer and Wicker set out to fix this oversight. Using data gathered from three seasons and 1,514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance…
  • So did these differences in levels of effort matter? The answer is an emphatic yes: teams whose players run longer distances are more likely to win the game…
  • As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
  • There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports.
  • Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.
  • Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested.
  • As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed according to the measure of typical performance, and vice versa.
  • What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort.
  • In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.
  • With any luck, these sabermetric innovations will trickle down to education, which is still mired in maximum high-stakes tests that fail to directly measure or improve the levels of effort put forth by students.
  • After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.
  • Old-fashioned effort just might be the next on-base percentage.