Thursday, February 18, 2010

What Was She Thinking??

Please tell me that I am wrong in the way I think.

Let me give some background...

I am taking classes to be an administrator.

This week, the discussion in one of my classes centered on accountability, mainly whether test scores should be used in evaluating teachers. Timely, considering the #edchat topic this week.

In North Carolina, teachers get a report that tells them how effective they are, based solely on test scores. The system looks at the previous standardized tests a student has taken and then attempts to predict how that student will do. In some cases it is pretty accurate. But often, it is not.

What teachers get at the beginning of the year is an effectiveness rating that compares how their kids did last year to where the program thought they should be. Teachers can be in green, which means they are highly effective; yellow, which means they are neither effective nor ineffective; or red, which means they are ineffective.
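The mechanism described above can be sketched as a toy value-added calculation. To be clear, this is NOT the actual EVAAS methodology (which is far more sophisticated); the prediction rule, thresholds, and function names below are all invented purely for illustration:

```python
# Toy sketch of a value-added-style rating. The prediction rule (a simple
# average of prior scores) and the bucket thresholds are made up; the real
# EVAAS model works very differently.

def predict_score(prior_scores):
    """Naive prediction: the mean of a student's previous test scores."""
    return sum(prior_scores) / len(prior_scores)

def rate_teacher(students):
    """students: list of (prior_scores, actual_score) pairs.

    Returns 'green', 'yellow', or 'red' based on the average gap
    between actual and predicted scores (thresholds are invented).
    """
    gaps = [actual - predict_score(priors) for priors, actual in students]
    avg_gap = sum(gaps) / len(gaps)
    if avg_gap > 2:    # class clearly beat its predictions -> "highly effective"
        return "green"
    if avg_gap < -2:   # class clearly fell short -> "ineffective"
        return "red"
    return "yellow"    # neither effective nor ineffective

# Example: a class of three students (prior scores, then this year's score)
klass = [([70, 75], 80), ([60, 62], 59), ([85, 88], 90)]
print(rate_teacher(klass))  # prints: green
```

Even this toy version shows the core objection in the post: the rating hinges entirely on how good the prediction is, and a student whose prediction is off gets mislabeled from day one.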

Now that NC has opted to play the money game with Washington and is applying for Race to the Top, the state has agreed that in the future these effectiveness ratings will be used as part of teacher evaluations. There is even talk that the state is considering a pay structure that would include these scores. (Mind you, that is just "water cooler" talk and nothing formal has been announced.)

Armed with that background, here is what happened today.

We were discussing the use of this system in the schools. I personally believe it is flawed, and anyone who spends much time examining or investigating these numbers is really wasting their time...but I digress...

We started talking about the way the system predicts how a student is going to do on the end-of-year standardized test. I used to get these numbers as a classroom teacher, so I have a little background. These numbers are far from predictive of how students will actually do. The system may get it right some of the time, but more often than not it is way, way off.

There was a teacher in my class, an experienced teacher, 17 years in, who said that those predictors were what she felt was the best way to know whom to tutor in her class. Those predictors were the best thing she had for deciding who should get extra attention.

I just about fell out of my chair. I could not decide whether to laugh because I thought she was being serious or cry because I felt so bad for her students.

You have got to be kidding me!

What happened to actually getting to know our students? Understanding them. Learning about their backgrounds. We have to see what is going on with them.

Are we really going to let some scores dictate to us who needs help and who does not? Seriously!!! What happened to formative assessment? What happened to ongoing assessments? Nope, this teacher is going to get her little paper and let that decide, right or wrong, who gets extra help and who does not.

Am I wrong to just dismiss this system, which you can read more about here if you want? (My personal favorite line in the text on that page is: "EVAAS tools provide a precise measurement of student progress over time and a reliable diagnosis of opportunities for growth that help to identify which students are at risk for under-achievement.") Should tools like EVAAS be used, maybe not as the sole source of information, but to aid in decision-making?

Maybe I am wrong...

Image from Creative Commons Image Search


  1. You are not wrong. In my opinion, using a system like EVAAS may be easier... but really getting to know our students is so much more valuable. As a classroom teacher, I feel like my opinions are less valued than the "real data" so many administrators in our district are so enamored with.

  2. It seems to me that a lot of the changes, such as EVAAS and using test scores as predictors of teaching ability, are not a way to improve education but a way to justify the way students are being taught now. They should work on ways to improve the classroom and teaching instead of continuously evaluating teachers with an ever more messed-up system.

  3. Don't worry Steve, you're not wrong! I always joke during our benchmark testing that I'm in the wrong business. I joke that I should work for a testing company or a data management company (like SchoolNet) because that's where the real money is.

    What's crazy is our teachers are REQUIRED to have a data wall in their rooms of benchmark and predictive test scores, so even the students use these tests to track their progress!

    Now THAT is wrong!

  4. First - how great to find you, and right here in Winston-Salem!

    And - we can only manage what we measure, but often we forget that we can measure a lot by observation and subjective assessment.

  5. *shaking head* How sad! You are not wrong at all. When we decide whom to teach and whom to give extra attention based on which kids are underperforming on tests, there is a major problem. These kids are not machines. They are living, breathing human beings! For crying out loud, what is wrong with these people? If this is the way that schools are going to go, I need to start encouraging kids to fail so that they will get "extra" attention in the classroom.
    This is a terrible idea; there are so many other factors that weigh into how a student performs from year to year: classroom dynamics, how they feel they fit in socially, what is happening at home, whether they are sick, tired, or stressed by situations outside of school. Anyone who has spent any amount of time around kids knows this is a bad idea; unfortunately, the people making the decisions don't seem to fall into this category.
    Bad...just bad!

  6. I don't think this is as rare as you might think. At our school, we are given state test scores and asked which "bubble" kids we are going to focus on for the year. "Bubble" means the ones who are closest to becoming proficient or in danger of falling below the proficient line. It is a shameful practice, but I don't believe for a minute that this doesn't happen all over.
    Sad commentary for the high-stakes testing mentality we all fight against daily.

  7. Steve, why must either you or the experienced colleague you reference be in the wrong here, per your opening question? It might be worth considering, before the laughing and/or crying, the brilliance of the "and". Students deserve teachers who spend time getting to know them as people and learners, use formative assessments effectively to make instructional decisions as they journey together, "and" use the historical/predictive data available from research-based data systems. Teaching, and making those minute-by-minute, lesson-by-lesson, day-by-day, week-by-week decisions, is very complex and challenging work; to purposely limit and disregard any part of the information available limits the ability to make thoughtful and purposeful decisions.

  8. I think the error would be in only using the data to determine who may need extra help. I have had fakers and students who fly under the radar pretty well. Looking at benchmark tests helped me to see who fell into these categories.
    I have also had a daughter who was identified as needing "extra help" based on fluency scores and benchmark testing in 5th grade. I refused the "extra help", citing the year end test scores of advanced proficient in reading from the year before and asked that she be enriched instead of in a clinic for fluency only. She attended school in the district where I was also teaching 5th grade. Interesting to argue your own child's case with teachers, reading specialists and administrators that you work with! (BTW she was also advanced proficient in 5th grade)
    Not all data is "bad", and conversely not all observations are accurate, even with an educational professional with experience. To isolate either would be poor practice.

  9. Any "formula for success" requires an ingredients list greater than one! The consensus of the comments so far seems to be that assessment is just part of the solution, not the answer. Why does educational reform seem to occur in a cyclical pattern in which standardized testing is either vilified or treated as a panacea? Why must we throw the baby out with the bath water when the au courant trend is no longer popular? I agree with Amy, who wrote that to rely on testing data, student performance, or teacher observations alone is poor practice!

  10. While this sounds good in principle, I can't help but wonder how the effectiveness rating system was tested. Did they run actual trials using effective, neutral, and ineffective teachers to see how well their model worked, and with what degree of accuracy and precision? I would guess not. And if not, it raises so many ethical and philosophical questions. For example: Just what exactly IS an "effective" teacher?

  11. The part that worries me the most is that the teachers who are commenting here do not seem to get a real voice when it comes to policy. What is the union stance on this policy? What power do we have as the experienced and trained educators to influence the ways our children will be raised and nurtured? I've been teaching for 5 years and I'm concerned that the politicians just want numbers because that's what earns money. And yet it's fascinating to me to see that private schools where rich people (including our own president) send their kids, often opt out of high-stakes testing and steer kids to more thematic and content-based inquiry-style learning. The hypocrisy is heart-breaking.

  12. I had something very similar happen to me. When I was commenting on one of my students' reading levels, another first grade teacher asked me how I knew what level my student was reading on. I was shocked that she had been teaching reading for so long and could not assess a student's reading level. I think it must all go back to teacher training. Teachers need to know how to use assessment in the classroom to meet the needs of learners.

  13. The reason we depend on these test scores is that education has such poor measures in the first place. I wrote about these growth models in a previous blog post,
    Incentives and Growth Models: Why Educators Should Care,
    and the bottom line is that it is equivalent to grading on a curve. The statistics may be solid, but in my opinion the philosophy behind the model is flawed.

  14. Haha, too funny--- I think you have to actually DEVOTE some time to get to know your students :) That's terrible- glad she wasn't my teacher.