The Quest for Quality - Educational Leadership
-
In the past, few educators, policymakers, or parents would have considered questioning the accuracy of these tests.
-
Denise Krefting on 05 Jun 11: I was a teacher who didn't question cut scores. In fact, they made life easier for me, but there was no real learning beyond the assessment. This transition to continual learning makes so much more sense!
-
denise carlson on 05 Jun 11: This sentence is so true. I remember bringing home ITBS scores to my parents. As long as the scores were in the 90th percentile or better they were pleased. I don't remember them ever digging deeper to ask the teacher what I actually knew or did not know. To them it was an important test and whatever the results said must have been the truth. I'm glad we're not there anymore.
-
Cindy Blinkinsop on 07 Jun 11: Very true. We never questioned ITBS or ITED scores; we believed they were the one and only true assessment of a student's abilities. My, how things are changing! There are so many factors to consider (region, vocabulary, did the student eat breakfast, did the student get enough rest, etc.).
-
Natalie Smithhart on 01 Oct 11: I can remember as a child being very worried about my ITBS score. I was never a good test taker, and I knew how "important" these tests were. I am glad that these days we also use more authentic types of assessments.
-
Lora Lehmkuhl on 04 Oct 11: I just reviewed ITED scores with our son. I recently read that ITED scores are closely related to scores one might expect for ACTs. This really worries me as a parent since we have a special needs child whose vision problems have greatly affected his performance in school. He plans to take the ACT test this spring and I know he's not prepared to accept a low score. Convincing him that he needs to take practice tests and study has been really difficult.
-
-
The assessor must begin with a clear picture of why he or she is conducting the assessment.
-
Pairing this with the concept of backward design shows us how all the options fit together.
-
I too value the "Begin with the end in mind" method. I find it easier, after establishing learning goals, to determine how I'll assess them and then let that direct my method of instruction.
-
The "end product" might have a different meaning to the student. For example, I teach a cooking class, and the end product is often the food prepared. It can be difficult to convince a student that a standard muffin has specific characteristics. We review the characteristics before beginning the lab. In the eyes of the student, if it is edible, it's just fine! You wouldn't believe how many times students have mixed up baking soda and baking powder and been completely satisfied with a pancake that tastes like soap.
-
Strangely, after all the staff development, I think some teachers still don't know why they are giving certain assessments. Part of this may be that they are philosophically opposed to so much testing, but I think there is also a lingering lack of understanding about the concepts being taught: the minutiae are clearer to them than the purpose.
-
-
four categories of learning targets are
-
These targets could define four different assessments given quarterly. Don't we give informal assessments that cover some of these targets?
-
I like checklist type information because it helps me to evaluate and plan my own instruction. I can use these criteria to make sure I plan for all these targets in my instruction.
-
-
-
Do the results provide clear direction for what to do next?
-
I would love to see our inservices allow time for such reflection on our assessments and let us redirect our planning. How much more student growth would we see if we not only reflected and redirected but also shared our observations with colleagues who also have the students (cross-curricular and at the next level)? Growth could then be specific and continual rather than a nine-month experiment that restarts from ground zero the following year!
-
Yes! There's so much research that values reflection, and yet it's something that one almost feels "guilty" doing on contract time.
-
Selecting an assessment method that is incapable of reflecting the intended learning will compromise the accuracy of the results.
-
This shows how important it is to set your learning targets and then make sure your assessment gives you the information you are seeking with regard to those targets.
-
Without proper training, I'm sure this happens all too often. Teachers often teach and test based on their own experiences and not based on best-practices.
-
If you can't determine an assessment to match your learning target, could it be that your learning target needs revision?
-
I couldn't agree with you more! Some teachers refuse to open up to the latest in best practice, assuming that 36 years of teaching, for example, has given them enough info to have 'all' the answers. And if the assessment is too difficult to create to match the target, why yes, revise the target. It seems we need to think outside the box and to remind ourselves to keep updated and in touch with the world.
-
I think many times, the catch here is the gradebook. Many stakeholders (parents, students, administrators, etc.) have very rigid expectations for grading and equate assessment and grading. Teachers don't know how to manage both effectively, and tend to default to the needs of the gradebook for survival.
-
-
-
I have found it useful to have another person look at the assessment, especially someone from a different curriculum area.
-
That sounds like a good idea. Why, specifically, do you use someone from a different curriculum area? I can think of some ideas, but I don't know if they are the ones you are considering.
-
-
After defining inference as "a conclusion drawn from the information available,"
-
a student might assess how strong his or her thesis statement is by using phrases from a rubric,
-
If we don't begin with clear statements of the intended learning—clear and understandable to everyone, including students—we won't end up with sound assessments.
-
I remember once writing a test item that had a term in it that my sophomore biology students didn't understand. Some asked me what the word meant, but what about those who were too embarrassed to ask?
-
This helps solidify the Iowa Core characteristic of effective instruction--assessment for learning and why it is part of the Iowa Core.
-
I can relate this to my children and the way that my husband and I differ on how we give directions. For example, he may say, "Your job is to be good." To a three and a five year old, "be good" is a very vague term. I might say something along the lines of, "Your job is to listen without interrupting me, use good manners like saying 'please' and 'thank you,' and to sit down while we're eating dinner."
-
-
Figure 2 (page 18) clarifies which assessment methods are most likely to produce accurate results for different learning targets.
-
I have seen this chart from Stiggins's work before and have found it to be quite useful. This reminds me of why we need to take the written portion to get an Iowa Driver's License, as well as taking Driver's Ed. or the driving portion of the test. We need to know both the factual "stuff" (like what a STOP sign means) as well as the skill of being able to actually drive a vehicle.
-
-
new levels of testing that include benchmark, interim, and common assessments.
-
And I wonder how much Professional Development teachers (new and old) have been given to support them as they face the new assessment expectations. I think too much is taken for granted...teachers need training if all of this testing and data is to make a real difference for our students.
-
Totally agree!! Teachers need to know not only how and why they are collecting data but also how to use the data to make instructional decisions.
-
-
the use of multiple measures does not, by itself, translate into high-quality evidence.
-
and the students themselves
-
I think that we often forget about this part of the equation! I remember all too often getting a computer generated page back with test results that I couldn't understand and I'm sure that this is still happening nationwide. We must not forget that our jargon must be translated to the student and the parent so that all stakeholders are on the same page.
-
-
test plan.
-
noise distractions
-
I once had to ask that they stop mowing the grass just outside my classroom window while my students took the FCAT Reading test in Florida...minor details like this can make a HUGE difference for the kids testing! I couldn't believe that my administrators hadn't considered all of the details.
-
This can be major for some students. I took a professional knowledge test years ago in an auditorium, and the monitors were talking softly at the front, but their voices really carried. They had no idea, and I didn't say anything. Noise doesn't normally bother me, so I know it bothered others.
-
-
assessment literate
-
Clear Learning Targets
-
-
aim for the lowest possible reading level
-
Use a reading score from a state accountability test as a diagnostic instrument for reading group placement.
-
hmmm... we do this for Instructional Decision Making groups in Carroll. It's only one piece of the puzzle, but at the beginning of the year, we rely on the ITBS Reading Comp score to place students into groups.
-
I have done this myself at the high school level. No other data exists for students I don't know, and time constraints leave me little choice.
-
-
Seven strategies of assessment for learning.
-
cultural insensitivity
-
I witnessed this firsthand when the demographics in one district changed dramatically over the course of about two years. For younger students, pictures were used in an assessment. Several of the students had never seen a rose, but they knew it was a flower; however, flower wasn't a choice.
-
This is so true! One night my husband and I were watching COPS and they were in NYC. A little boy pointed to the very small grassy area in between four apartment buildings that made a square and said, "He just ran through that meadow." I looked at my husband and said, "That kid would flunk the ITBS because he doesn't know the true definition of a meadow...for him, the small grassy area is a meadow. But for our region, a meadow is described totally differently and looks totally different." Test writers do not consider regional vocabulary enough when putting together an assessment. It is still 'one size fits all.'
-
There are some obvious things when it comes to cultural sensitivity. There are also some things a person preparing a test just might not know since their culture is different.
-
-
access to the data they want when they need it,
-
learning continuum
-
The classroom is also a practical location to give students multiple opportunities to demonstrate what they know and can do
-
the reason for assessing is to document individual or group achievement or mastery of standards and measure achievement status at a point in time.
-
Choosing the Right Assessment
-
Specific, descriptive feedback linked to the targets of instruction and arising from the assessment items or rubrics communicates to students in ways that enable them to immediately take action, thereby promoting further learning.
-
Whenever I read the word "specific," I can't help but remember my third year of teaching, when the English 9 teachers would share an old reel-to-reel converted to VHS instructional video with the class. Several minutes into the video, the narrator would tell the students: "Specific is terrific." This type of feedback is really the exception rather than the rule, isn't it?
-
This is really good practice, but extremely time consuming. We need to include as much as possible, but it may not always be feasible or possible to do it all the time.
-
-
build balanced systems, with assessment-literate users
-
Creating a plan like this for each assessment helps assessors sync what they taught with what they're assessing.
-
In the case of summative tests, the reason for assessing is to document individual or group achievement or mastery of standards and measure achievement status at a point in time.
-
inform instructional improvement and identify struggling students and the areas in which they struggle
-
Students learn best when they monitor and take responsibility for their own learning.
-
When we begin a project in desktop publishing, the students and I brainstorm the different skills and techniques they can demonstrate and use in the project, which in turn becomes their checklist or rubric. They feel more ownership and may need to revisit skills; they often require more of themselves as well.
-
I think this is very true and I also believe that the learning is at a higher level.
-
-
For each assessment, regardless of purpose, the assessor should organize the learning targets represented in the assessment into a written test plan that matches the learning targets represented in the curriculum
-
Or... we need to be sure that students are learning what is going to be assessed, and that what is going to be assessed is aligned with the intended learning target. I think too often in classrooms the teaching comes first, then the learning, then the alignment with the assessments or defined learning targets.
-
So we need to decide what is going to be assessed first before we create the curriculum. I think often as teachers we do this the other way around. Seems like it should be simple, but sometimes I find myself creating my curriculum before I have decided what I might need to assess.
-
Teachers should design the assessment so students can use the results to self-assess and set goals.
-
Annual state and local district standardized tests serve annual accountability purposes, provide comparable data, and serve functions related to student placement and selection, guidance, progress monitoring, and program evaluation.
-
As a "big picture" beginning point in planning for the use of multiple measures, assessors need to consider each assessment level in light of four key questions, along with their formative and summative applications1
-
Summative applications refer to grades students receive (classroom level)
-
At the level of annual state/district standardized assessments, they involve where and how teachers can improve instruction—next year.
-
It is great when this data is used to improve instruction. I was teaching in Texas when Gov. Perry took over from George Bush (late 90s). The annual testing there was used to determine which schools received the most funds for the next year. High-scoring schools received more money; low-scoring schools received less money. Sadly, the low-scoring schools generally needed the funds so much more than the high-scoring schools. I had friends teaching in downtown Houston who told me how many of their students came to school with just a plain tortilla for lunch. They needed more funds, but since they received low scores, they received less funding. The students from the suburbs (such as Sugarland, where at that time the mean income was $100,000/year) attended private tutoring, paid for by parents, several afternoons a week so their test scores would be higher. I literally saw students and teachers who had nervous breakdowns due to the pressure of the testing results. I agree we need assessments; I'm just concerned about how some of those assessments are used.
-
-
Although it may seem as though having more assessments will mean we are more accurately estimating student achievement
-
The assessor
-
-
Devil's Advocate at work here....in a perfect world, our assessments would inspire students to WANT to improve, but in reality, can a rubric really do that in and of itself?
-
I have yet to see a student use a rubric to improve a project. I think the idea of it is good, but the self-motivation is not there, or I don't know how to motivate them myself.
-
-
Given the rise in testing, especially in light of a heightened focus on using multiple measures, it's increasingly important to address two essential components of reliable assessments: quality and balance.
-
-
I believe that this article, "The Quest for Quality," really gets at the heart of the importance of having daily "focus lessons" and more long-term learning targets for both teachers and students. Being specific and purposeful about what and how we want students to learn (skills and academic vocabulary) is essential to genuine learning and performance.
-
-
It also helps them assign the appropriate balance of points in relation to the importance of each target as well as the number of items for each assessed target.
-
This key ensures that the assessor has translated the learning targets into assessments that will yield accurate results. It calls attention to the proper assessment method and to the importance of minimizing any bias that might distort estimates of student learning.
-
A mechanism should be in place for students to track their own progress on learning targets and communicate their status to others
-
My comment here concerns this whole paragraph. I think we need to provide time for students as well as teachers to analyze the results of assessments and to use the results to make their projects better. As it is, no one has time to revisit the object of the assessment. Time constraints have all educational participants roaring along at breakneck speed.
-
-
Who is the decision maker?
-
I think this question is crucial. If the decision maker and the purpose of the test are punitive rather than informative, no wonder people don't want to be assessed! Of course, we need to consider this as decision makers ourselves and quit using test scores to punish students; we don't like being punished for results, and neither do they.
-
-
Assessment literacy
-
A detailed chart listing key issues and their formative and summative applications at each of the three assessment levels is available at www.ascd.org/ASCD/pdf/journals/ed_lead/el200911_chappius_table.pdf
-
to know what constitutes appropriate and inappropriate uses of assessment results—thereby reducing the risk of applying data to decisions for which they aren't suited.
-
The point where my assessment breaks down is that my formative data is rarely shared with others. We don't look for trends or patterns or discuss needed changes in content or instructional delivery.
-
Founded in 1943, ASCD (formerly the Association for Supervision and Curriculum Development) is an educational leadership organization dedicated to advancing best practices and policies for the success of each learner. Our 175,000 members in 119 countries are professional educators from all levels and subject areas--superintendents, supervisors, principals, teachers, professors of education, and school board members.