Meet the new boss…

August 16

We want to provide better information. … We then want to provide support. You can’t be held accountable for things that you can’t do or that you can’t reach without any prospects of help or support to get you there. And then we want to provide pressure. Use the tools, use the information, use the supports. Or we’re going to make some changes. That system has changed the paradigm. It’s a culture here of getting better.

What makes our accountability model kind of special is that it’s not like the federal or state “all or nothing” measure of you’ve either made it or you haven’t. … How do you tell a teacher who has just worked with kids who are two or three grade levels behind and moved them two or three grade levels that they’re not proficient? That’s not a failing teacher in Charlotte. That’s a teacher that’s going to get recognized. That’s a teacher that has really moved the bar for kids. Our accountability model reflects that.

I know the bit with the numbers on how many grade levels a student moves is a little messed up, but I'll take the statement at face value: if you move a kid who is multiple grade levels behind significantly forward, even if they aren't yet proficient, you've done something. He doesn't seem to want to pull out a can of whoop-a$$ on the teachers (and their union), no hidden brooms, so he passes the "not scaring me" test.

There are some specific areas of the district that could do with a "house-cleaning," and that could make a significant difference in getting more technology into learning, and into my whole world. Those changes could be sidestepped by a "love of paper" approach, or derailed if the focus for technology is just as a test-prep tool. I have NO idea what to expect from this new leader based on these articles.

I worry, because all I'm seeing is that success is based on testing, and it's all about the numbers. To paraphrase Chris Lehmann, I'd like to create citizens, not just "test monkeys." I don't see a vision of that at all here. Like Chris and Tom, I have to ask how much hope you can have for a system where progressive reformers are described as the status quo.

Testing, testing…

December 9

[Photo: "DONE FOREVER" from mr. nightshade's photostream]

Testing: it's the only objective criterion for assessment, isn't it? How can you be sure that students have "gotten" it unless they can spit it back up on a test three months later? How can a test be fair if kids use aids like a calculator? How do we know they "really" know math if they need "help"? I could go down the line shooting these and other "myths" down, but what takes their place, and why is that better? Okay, enough questions, let's try for some answers, NOW!
What is the goal of education? Most of us want our students to be some blend of good and productive citizens, which translates crudely into being able to hold down a job. Others may have a more beneficent vision of education, but this is the most functional goal, and testing still fails it. Let's pull out a case study. I was discussing this recent post with my BIL (brother-in-law), who is a manager at a software firm. I told him how students in the study who showed mastery based on the formative data did not always show mastery on either of the summative assessments (a standardized bubble test, and a report they had to write summarizing their findings). We discussed whether mastery lay in completing a task (the formative data) or in passing a test. BIL thought that the written report was probably the assessment instrument most like work projects.

Now this is where the conversation got interesting. He and his co-workers have gone to some of the best schools in the country. Many, like BIL, graduated with honors. The biggest problem he has is that while they can do the task part of their job very well (they'd pass a formative assessment), and they do well on tests (he guessed many had 800s on their Math SATs), they can't explain what they are doing when requests for information come from management.
My conclusion, from discussions with BIL and others, and from my 7+ years in analytical work: you will rarely be given a bubble test as part of your professional life; you will be asked to solve problems and to communicate with others. Bubble tests do not accurately assess whether someone can perform a task in context. They also are not a great way to assess how clearly folks can communicate about those tasks and projects. I hope this story illustrates why bubble tests are not useful, but I hope it also points the way toward how we should be assessing, and why those types of assessments are called authentic. What can that look like in education? Here is an interesting article on what Nebraska tried to do. Interestingly, part of why it died was that scores on tests (NAEP) did not match how students performed on their portfolios, and it was assumed that the tests were "accurate" and that grade inflation was occurring in the portfolios. It never seemed to occur to folks that it might be the bubble tests that were missing something; instead, the students and teachers got the blame.
I've blogged before about the romance of the quantifiable and, paired with that, a distrust of measures seen as subjective. Here is my suggestion: let's let Nebraska do their portfolios. Since it's the first time we'll be doing this under NCLB, do some audits on them. Have outsiders look at them, auditors without a stake in the outcome. They couldn't do any worse than KPMG has lately.

License

All of Ms. Mercer's work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

