
Middle School Matters: Group items tagged "giving"


Troy Patterson

CURMUDGUCATION: Norms vs. Standards - 1 views

  • A standards-referenced test compares every student to the standard set by the test giver. A norm-referenced test compares every student to every other student. The lines between different levels of achievement will be set after the test has been taken and corrected. Then the results are laid out, and the lines between levels (cut scores) are set.
  • When I give my twenty word spelling test, I can't set the grade levels until I correct it. Depending on the results, I may "discover" that an A is anything over a fifteen, twelve is Doing Okay, and anything under nine is failing. Or I may find that twenty is an A, nineteen is okay, and eighteen or less is failing. If you have ever been in a class where grades are curved, you were in a class that used norm referencing.
  • With standards referencing, we can set a solid, immovable line between different levels of achievement, and we can do it before the test is even given. This week I'm giving a spelling test consisting of twenty words. Before I even give the test, I can tell my class that if they get eighteen or more correct, they get an A, if they get sixteen correct, they did okay, and if they get thirteen or fewer correct, they fail.
  • Norm referencing is why, even in this day and age, you can't just take the SAT on a computer and have your score the instant you click on the final answer-- the SAT folks can't figure out your score until they have collected and crunched all the results. And in the case of the IQ test, 100 is always set to be "normal."
  • There are several important implications and limitations of norm-referencing. One is that norm-referenced tests are lousy for showing growth, or the lack thereof.
  • Norm referencing also gets us into the Lake Wobegon Effect.
  • On a standards-referenced test, it is possible for everyone to get an A. On a norm-referenced test, it is not possible for everyone to get an A. Nobody has to flunk a standards-referenced test. Somebody has to flunk a norm-referenced test.
  •  
    "Ed History 101"
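The contrast the post draws can be sketched in code. A minimal Python illustration: the standards-referenced cut scores (18/16/13) come from the post itself, while the norm-referenced percentiles and the class's score list are invented for illustration.

```python
# Grading the same twenty-word spelling test two ways.

def standards_referenced(score):
    """Cut scores fixed before the test is given (18+ = A, 16+ = OK, <=13 = F)."""
    if score >= 18:
        return "A"
    if score >= 16:
        return "OK"
    if score <= 13:
        return "F"
    return "C"  # 14-15 falls between "okay" and failing

def norm_referenced(scores):
    """Cut scores set only after all results are in, from the distribution."""
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    a_cut = ranked[max(0, n // 10 - 1)]      # roughly the top 10% earn an A
    f_cut = ranked[min(n - 1, n - n // 10)]  # roughly the bottom 10% fail
    return {s: ("A" if s >= a_cut else "F" if s <= f_cut else "C")
            for s in scores}

class_scores = [20, 19, 18, 17, 17, 16, 15, 14, 12, 9]
print([standards_referenced(s) for s in class_scores])
print(norm_referenced(class_scores))
```

Under the standards-referenced function every student can earn an A; under the norm-referenced one, whoever lands at the bottom of the distribution fails regardless of raw score, which is exactly the post's point.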
Troy Patterson

Trouble with Rubrics - 0 views

  • She realized that her students, presumably grown accustomed to rubrics in other classrooms, now seemed “unable to function unless every required item is spelled out for them in a grid and assigned a point value.  Worse than that,” she added, “they do not have confidence in their thinking or writing skills and seem unwilling to really take risks.”[5]
  • This is the sort of outcome that may not be noticed by an assessment specialist who is essentially a technician, in search of practices that yield data in ever-greater quantities.
  • The fatal flaw in this logic is revealed by a line of research in educational psychology showing that students whose attention is relentlessly focused on how well they’re doing often become less engaged with what they're doing.
  • it’s shortsighted to assume that an assessment technique is valuable in direct proportion to how much information it provides.
  • Studies have shown that too much attention to the quality of one’s performance is associated with more superficial thinking, less interest in whatever one is doing, less perseverance in the face of failure, and a tendency to attribute the outcome to innate ability and other factors thought to be beyond one’s control.
  • As one sixth grader put it, “The whole time I’m writing, I’m not thinking about what I’m saying or how I’m saying it.  I’m worried about what grade the teacher will give me, even if she’s handed out a rubric.  I’m more focused on being correct than on being honest in my writing.”[8]
  • she argues, assessment is “stripped of the complexity that breathes life into good writing.”
  • High scores on a list of criteria for excellence in essay writing do not mean that the essay is any good because quality is more than the sum of its rubricized parts.
  • Wilson also makes the devastating observation that a relatively recent “shift in writing pedagogy has not translated into a shift in writing assessment.”
  • Teachers are given much more sophisticated and progressive guidance nowadays about how to teach writing but are still told to pigeonhole the results, to quantify what can’t really be quantified.
  • Consistent and uniform standards are admirable, and maybe even workable, when we’re talking about, say, the manufacture of DVD players.  The process of trying to gauge children’s understanding of ideas is a very different matter, however.
  • Rubrics are, above all, a tool to promote standardization, to turn teachers into grading machines or at least allow them to pretend that what they’re doing is exact and objective. 
  • The appeal of rubrics is supposed to be their high interrater reliability, finally delivered to language arts.
  • Just as it’s possible to raise standardized test scores as long as you’re willing to gut the curriculum and turn the school into a test-preparation factory, so it’s possible to get a bunch of people to agree on what rating to give an assignment as long as they’re willing to accept and apply someone else’s narrow criteria for what merits that rating. 
  • Once we check our judgment at the door, we can all learn to give a 4 to exactly the same things.
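The "interrater reliability" being promised is measurable. A minimal sketch of the simplest such metric, percent agreement between two raters scoring the same essays on a 1-4 rubric; all ratings are invented for illustration.

```python
# Percent agreement: the fraction of items on which two raters gave the
# same rubric score.
def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two hypothetical raters scoring the same ten essays on a 1-4 rubric.
rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]
print(percent_agreement(rater_a, rater_b))  # 0.8
```

High agreement here only shows that the raters applied the same criteria the same way; as the passage argues, it says nothing about whether those criteria capture good writing.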
Ron King

Metacognition: The Gift That Keeps Giving | Edutopia - 0 views

  •  
    By teaching students to "drive their own brain" through metacognition, we provide a concrete way to guide them to think about how they can best learn.
Troy Patterson

Activity: Feedback Action Planning Template | - 0 views

  •  
    "I've been doing a ton of tinkering this year with the way that I give students feedback in my classroom.  My goal is to steal Dylan Wiliam's idea that our goal should be to turn feedback into detective work.  That just feels right to me."
Troy Patterson

The Sabermetrics of Effort - Jonah Lehrer - 0 views

  • The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players that possess them tend to toil in obscurity - they can be identified using sophisticated statistical techniques, aka sabermetrics. Home runs are fun. On-base percentage is crucial.
  • The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll,
  • However, the triumph of moneyball creates a paradox, since its success depends on the very market inefficiencies it exposes. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.
  •  One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free-agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.
  • If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.
  • despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimer and Wicker set out to fix this oversight. Using data gathered from three seasons and 1514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance,
  • So did these differences in levels of effort matter? The answer is an emphatic yes: teams with players that run longer distances are more likely to win the game,
  • As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
  • There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports.
  • Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.
  • Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested.
  • As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed according to the measure of typical performance, and vice versa.
  • What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort.
  • In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.
  • With any luck, these sabermetric innovations will trickle down to education, which is still mired in maximum high-stakes tests that fail to directly measure or improve the levels of effort put forth by students.
  • After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.
  • Old-fashioned effort just might be the next on-base percentage.
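Sackett's comparison boils down to correlating two sets of scores. A hypothetical sketch using the standard Pearson correlation coefficient; the people and numbers are invented for illustration only.

```python
# Correlating one-shot "maximum" test scores with averaged day-to-day
# ("typical") performance for the same group of people.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Six hypothetical people measured both ways.
maximum_test = [95, 88, 84, 75, 70, 62]   # peak performance, high stakes
typical_perf = [70, 85, 60, 80, 72, 78]   # averaged everyday performance

print(round(pearson(maximum_test, typical_perf), 2))
```

A coefficient near zero, as in this toy data, is the mismatch the article describes: the maximum test and typical performance rank the same people quite differently, because typical performance is talent plus effort.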
Troy Patterson

What Doesn't Work: Literacy Practices We Should Abandon | Edutopia - 0 views

  • 1. "Look Up the List" Vocabulary Instruction
  • 2. Giving Students Prizes for Reading
  • 3. Weekly Spelling Tests
  • 4. Unsupported Independent Reading
  • 5. Taking Away Recess as Punishment
  • 5 Less-Than-Optimal Practices: To help us analyze and maximize use of instructional time, here are five common literacy practices in U.S. schools that research suggests are not an optimal use of instructional time:
Troy Patterson

10 Realities About Bullying at School and Online | MindShift | KQED News - 0 views

  • “most educators aren’t aware of the function bullying serves in school,”
  • The majority of kids don’t bully other kids and haven’t been victimized
  • Kids pick on others as a way to secure their standing among their peers or to move up a notch.
  • aggression is intrinsic to status and escalates with increases in peer status until the pinnacle of the social hierarchy is attained.”
  • Children from single-parent homes, and those with less educated parents, are no more apt to bully than kids with married and learned parents. African-Americans and other minorities show the same rates of bullying as their white counterparts.
  • The popular notion of bullies as sullen social outcasts who come from broken homes is a myth.
  • What adults call bullying kids call drama.
  • Cyber-bullying is just an extension of what’s happening in the classrooms, halls, and cafeteria
  • online cruelty merely makes visible what kids are doing in person behind the backs of adults.
  • Just another way for kids to express hostility towards targets they’ve already gone after, or to retaliate against those who have attacked them in school.
  • Kids don’t intervene because doing so would jeopardize their own standing, because they lack the tools to assist, and because they don’t think it will help anyway.
  • Adolescents are fixated on their social standing, and anything that jeopardizes their fragile position will be avoided.
  • students receive scant training on how to help in such a way that it won’t backfire.
  • “Asking students to be empowered and responsible bystanders is tantamount to telling them to be good readers or safe drivers without giving them instructions, guidance, and opportunities to practice,”
Troy Patterson

Updating Data-Driven Instruction and the Practice of Teaching | Larry Cuban on School R... - 0 views

  • I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific.
  • Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.
  • Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences.
  • Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.
  • Teachers’ informal assessments of students gathered information directly and would lead to altered lessons.
  • In the 1990s and, especially after No Child Left Behind became law in 2002, the electronic gathering of data, disaggregating information by groups and individuals, and then applying lessons learned from analysis of tests and classroom practices became a top priority.
  • Now, principals and teachers are awash in data.
  • How do teachers use the massive data available to them on student performance?
  • studied four elementary school grade-level teams in how they used data to improve lessons. She found that supportive principals and superintendents and habits of collaboration increased use of data to alter lessons in two of the cases but not in the other two.
  • Julie Marsh and her colleagues found 15 instances where teachers used annual tests, for example, in basic ways to target weaknesses in professional development or to schedule double periods of language arts for English language learners.
  • These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions  in these two districts.
  • Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption, not a fact.
  • Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Troy Patterson

How to Make a Quiz Work Harder for You | Cult of Pedagogy - 0 views

  • Assessments should give us loads of information about what our students understand, what they don’t understand, and how well we’ve taught them.
  • It took me years of teaching before I realized I was using my tests and quizzes to sort out, reward and punish my students, rather than measure and inform my teaching. I needed to make my assessments work harder for me.
  • I could identify specific misconceptions students had about the material and get better at addressing those the next time around. I also became a much better test maker.
  • The best part about this system is you only need a pencil, an answer key, and a few extra minutes.
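The pencil-and-answer-key analysis described above amounts to tallying which wrong answer each student chose per question, so that a recurring distractor points at a shared misconception. A hypothetical sketch; the question labels, choices, and responses are invented.

```python
# Quiz item analysis: count wrong-answer choices per question to surface
# the misconceptions worth reteaching.
from collections import Counter

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}

responses = [
    {"Q1": "B", "Q2": "C", "Q3": "A"},
    {"Q1": "B", "Q2": "C", "Q3": "D"},
    {"Q1": "A", "Q2": "D", "Q3": "A"},
    {"Q1": "B", "Q2": "C", "Q3": "A"},
]

wrong_choices = {q: Counter() for q in answer_key}
for student in responses:
    for q, choice in student.items():
        if choice != answer_key[q]:
            wrong_choices[q][choice] += 1

for q, counts in wrong_choices.items():
    if counts:
        distractor, n = counts.most_common(1)[0]
        print(f"{q}: {n} student(s) chose {distractor} -- probe that misconception")
```

Here three of four students chose the same wrong answer on Q2, which flags a shared misunderstanding rather than a scattering of careless errors.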