
Ron King

Phillips Exeter Academy | Hands On Math - 0 views

  •  
    The Exeter Mathematics Institute courses are tailored and designed for the needs of each school district. These sample course descriptions are examples of courses that have been offered in various school districts over the years. In most of these courses, we use the same Exeter Mathematics Problem Sets that are used during the Exeter school year. In the Geometer's Sketchpad course and in all of the hands-on courses, we use materials that have been specifically developed by Exeter Math Institute instructors. All of these materials are available using the links below
Ron King

The differences between a geek and a nerd - 0 views

  •  
    Curious about how people use "geek" and "nerd" to describe themselves and if there was any difference between the two terms, Burr Settles analyzed words used in tweets that contained the two. Settles used pointwise mutual information (PMI), which essentially provided a measure of the geekness or nerdiness of a term.
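PMI itself is simple to compute: it compares how often a word co-occurs with a category against what chance alone would predict. A minimal sketch of the calculation Settles describes; the counts and the example term below are hypothetical, not from his dataset:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information: log2( p(x, y) / (p(x) * p(y)) ).

    Positive means x and y co-occur more often than chance;
    zero means they are statistically independent."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts: in a sample of 10,000 tweets, "collection" appears
# in 200, "nerd" appears in 1,000, and the two co-occur in 50.
score = pmi(count_xy=50, count_x=200, count_y=1000, total=10000)  # ≈ 1.32
```

A term scoring high against "nerd" tweets but low against "geek" tweets would count as nerdier, and vice versa.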
Troy Patterson

Updating Data-Driven Instruction and the Practice of Teaching | Larry Cuban on School R... - 0 views

  • I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific.
  • Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.
  • Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences.
  • Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.
  • Teachers’ informal assessments of students gathered information directly and would lead to altered lessons.
  • In the 1990s and, especially after No Child Left Behind became law in 2002, the electronic gathering of data, disaggregating information by groups and individuals, and then applying lessons learned from analysis of tests and classroom practices became a top priority.
  • Now, principals and teachers are awash in data.
  • How do teachers use the massive data available to them on student performance?
  • studied four elementary school grade-level teams in how they used data to improve lessons. She found that supportive principals and superintendents and habits of collaboration increased use of data to alter lessons in two of the cases but not in the other two.
  • Julie Marsh and her colleagues found 15 instances where teachers used annual tests, for example, in basic ways to target weaknesses in professional development or to schedule double periods of language arts for English language learners.
  • These researchers admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions  in these two districts.
  • Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption not a fact.
  • Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
Ron King

Response: Using -- Not Misusing -- Ability Groups In The Classroom - 0 views

  •  
    This week's "question of the week" is: "What does research say about use of ability groups/tracking, and how have you seen it used or misused? What are workable alternatives?"
Ron King

Assessment Training Institute - Tools & Resources: Rubric Evaluations - 0 views

  •  
    In Chapter 2 of the book Creating & Recognizing Quality Rubrics, we describe a "rubric for rubrics" designed to assist educators to be thoughtful consumers and developers of rubrics for instructional use in the classroom. The various quality levels described by the Rubric for Rubrics are illustrated with many sample classroom rubrics. All classroom rubrics discussed in the book have been evaluated using the Rubric for Rubrics; these evaluations are included on a CD that accompanies the book.
Ron King

20 Education Technology Tools Everybody Should Know About - Edudemic - Edudemic - 0 views

  •  
    Although educators tend to feel like they are left all on their own to deal with students that are getting crazier by the day, there are plenty of technology resources that can make their teaching job more effective. Educators should definitely start using some of the online solutions that are meant to promote modern education and take the classroom organization to the next level. In this article, we will cover 20 education technology tools that educators should start using as soon as possible.
Ron King

This Video Uses Jelly Beans To Show You How Much You're Wasting Your Life (Video) | Eli... - 1 views

  •  
    This Video Uses Jelly Beans To Show You How Much You're Wasting Your Life (Video). VERY COOL VIDEO!!!
Troy Patterson

Note Taking Skills for 21st Century Students @coolcatteacher - 0 views

  •  
    "We want them DRAWING. Why? So they can use all parts of their brain. Using symbols and notes and such can help connect ideas in powerful ways. So, at this point, I take my students on a visual notetaking journey."
Troy Patterson

Free Technology for Teachers: Pros & Cons of Using Blog Posts for School Announcements - 0 views

  •  
    "Pros & Cons of Using Blog Posts for School Announcements"
Troy Patterson

Official Google Enterprise Blog: A bridge to the cloud: Google Cloud Connect for Micros... - 0 views

  •  
    "For those of you who have not made the full move to Google Docs and are still using Microsoft Office, Google has something great to offer. With Cloud Connect, people can continue to use the familiar Office interface, while reaping many of the benefits of web-based collaboration that Google Docs users already enjoy. Users of Office 2003, 2007 and 2010 can sync their Office documents to the Google cloud, without ever leaving Office. Once synced, documents are backed-up, given a unique URL, and can be accessed from anywhere (including mobile devices) at any time through Google Docs. And because the files are stored in the cloud, people always have access to the current version."
Troy Patterson

Copyright Cops | edte.ch - 0 views

  •  
    According to the most recent EU Kids Online research over one third of 9-12 year olds and three quarters of 13-16 year olds who use the internet in Europe have their own profile on a social networking site. I discovered this wonderful film from Julio Secchin which not only depicts the way that our youngsters use the web but also some of the wider implications.
Troy Patterson

Hybrid Classes Outlearn Traditional Classes -- THE Journal - 0 views

  • Students in hybrid classrooms outperformed their peers in traditional classes in all grades and subjects, according to the newest study from two organizations that work with schools in establishing hybrid instruction.
  • The results come out of those classes where students either took the Pennsylvania System of School Assessment (PSSA) tests or Keystone Exams to measure academic achievement.
  • In one example, hybrid learning eighth grade math students at Hatboro-Horsham School District (PA) passed the PSSA tests and Keystone Exams at a rate 10 percent higher than their non-hybrid peers in five schools.
  • In another example, third grade math students in the hybrid learning program at Pennsylvania's Indiana Area School District outperformed students in traditional classes by 10 percentage points on the PSSA exams.
  • scored proficient or advanced on PSSA tests at a rate 23 percent higher than the previous year with gains in all subjects: reading (up 20 percent), math (up 24 percent) and science (up 27 percent).
  • "We use a rigorous accountability system that helps us measure and report on hybrid classroom outcomes," said Dellicker President and CEO Kevin Dellicker.
  • The cost of implementing hybrid learning through the Institute's model could be considered modest. During the 2013-2014 school year, according to the report, the schools spent an average of $220 per student (not including computing devices) to transform their learning models.
Troy Patterson

Principal: Why our new educator evaluation system is unethical - 0 views

  • A few years ago, a student at my high school was having a terrible time passing one of the exams needed to earn a Regents Diploma.
  • Mary has a learning disability that truly impacts her retention and analytical thinking.
  • Because she was a special education student, at the time there was an easier exam available, the RCT, which she could take and then use to earn a local high school diploma instead of the Regents Diploma.
  • Regents Diploma serves as a motivator for our students while providing an objective (though imperfect) measure of accomplishment.
  • If they do not pass a test the first time, it is not awful if they take it again—we use it as a diagnostic, help them fill the learning gaps, and only the passing score goes on the transcript
  • in Mary’s case, to ask her to take that test yet once again would have been tantamount to child abuse.
  • Mary’s story, therefore, points to a key reason why evaluating teachers and principals by test scores is wrong.
  • It illustrates how the problems with value-added measures of performance go well beyond the technicalities of validity and reliability.
  • The basic rule is this: No measure of performance used for high-stakes purposes should put the best interests of students in conflict with the best interests of the adults who serve them.
  • I will just point out that under that system I may be penalized if future students like Mary do not achieve a 65 on the Regents exam.
  • Mary and I can still make the choice to say “enough,” but it may cost me a “point” if a majority of students who, years before, had the same middle school math and English test scores that she did go on to pass the test.
  • But I can also be less concerned about the VAM-based evaluation system because it’s very likely to be biased in favor of those like me who lead schools that have only one or two students like Mary every year.
  • When we have an ELL (English language learner) student with interrupted education arrive at our school, we often consider a plan that includes an extra year of high school.
  • last few years “four year graduation rates” are of high importance
  • four-year graduation rate as a high-stakes measure has resulted in the proliferation of “credit recovery” programs of dubious quality, along with teacher complaints of being pressured to pass students with poor attendance and grades, especially in schools under threat of closure.
  • On the one hand, they had a clear incentive to “test prep” for the recent Common Core exams, but they also knew that test prep was not the instruction that their students needed and deserved.
  • in New York and in many other Race to the Top states, continue to favor “form over substance” and allow the unintended consequences of rushed models to be put in place.
  • Creating bell curves of relative educator performance may look like progress and science, but these are measures without meaning, and they do not help schools improve.
  • We can raise every bar and continue to add high-stakes measures. Or we can acknowledge and respond to the reality that school improvement takes time, capacity building, professional development, and financial support at the district, state and national levels.
Ron King

Examples of Formative Assessment (West Virginia DOE) - 0 views

  •  
    When incorporated into classroom practice, the formative assessment process provides information needed to adjust teaching and learning while they are still happening. The process serves as practice for the student and a check for understanding during the learning process. The formative assessment process guides teachers in making decisions about future instruction. Here are a few examples that may be used in the classroom during the formative assessment process to collect evidence of student learning.
Troy Patterson

What Doesn't Work: Literacy Practices We Should Abandon | Edutopia - 0 views

  • 1. "Look Up the List" Vocabulary Instruction
  • 2. Giving Students Prizes for Reading
  • 3. Weekly Spelling Tests
  • 4. Unsupported Independent Reading
  • 5. Taking Away Recess as Punishment
  • 5 Less-Than-Optimal Practices: To help us analyze and maximize use of instructional time, here are five common literacy practices in U.S. schools that research suggests are not an optimal use of instructional time:
Troy Patterson

The Sabermetrics of Effort - Jonah Lehrer - 0 views

  • The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players that possess them tend to toil in obscurity – they can be identified using sophisticated statistical techniques, aka sabermetrics. Home runs are fun. On-base percentage is crucial.
  • The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll,
  • However, the triumph of moneyball creates a paradox, since its success depends on the very market inefficiencies it exposes. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.
  •  One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free-agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.
  • If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.
  • despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimer and Wicker set out to fix this oversight. Using data gathered from three seasons and 1514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance,
  • So did these differences in levels of effort matter? The answer is an emphatic yes: teams with players that run longer distances are more likely to win the game,
  • As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
  • There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports.
  • Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.
  • Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested.
  • As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed according to the measure of typical performance, and vice versa.
  • What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort.
  • In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.
  • With any luck, these sabermetric innovations will trickle down to education, which is still mired in maximum high-stakes tests that fail to directly measure or improve the levels of effort put forth by students.
  • After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.
  • Old-fashioned effort just might be the next on-base percentage.
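The core of the Weimer and Wicker result — that distance covered by a team correlates with winning — boils down to a plain correlation between an effort proxy and a match outcome. A minimal sketch using hypothetical per-game numbers (not the actual Bundesliga data):

```python
# Hypothetical per-game figures: total kilometers run by a team,
# and whether that team won (1) or did not win (0).
distance_km = [108.2, 112.5, 105.9, 115.1, 110.4, 117.3]
won         = [0,     1,     0,     1,     0,     1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A clearly positive r in data like this is what "teams that run
# farther win more often" looks like numerically.
r = pearson(distance_km, won)
```

The study itself used far richer controls than a raw correlation, but the sketch shows the shape of the claim: effort, once measured, is just another column in the table.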