Student Data Systems are the new black…
Everyone is looking at the numbers, but what do they mean? There is a huge push for data, with reforms proposed by the current administration in Race to the Top (RttT), and states are scrambling to keep up. It’s caused my own state to “pervert” the intent of the last round of EETT under ARRA. There are a number of programs trying to help states improve their data warehouse systems (ISO from IES: State Applications for Data-System Grants – Inside School Research).
These data systems do not always perform as desired. My state’s system (CALPADS) has a universally lousy reputation among school IT professionals and has never lived up to its promise. We’re not the only state with this problem (Resource record: States Learn From Failures in Arizona Student Data System). Even if you get the programs “working” from a technical point of view, problems can remain.
Taming the data firehose…
I worry that we don’t always know what to do with this data we’re collecting. The priority for the administration seems to be more about collecting and less about analyzing and understanding, as we can see from how long it took the administration to appoint a statistics chief for education (Obama Names His Pick for Statistics Chief … Finally – Inside School Research – Education Week). This disconnect between a love of numbers and doing something meaningful with them is shown in the example of Tennessee, which has one of the best student data systems but little support for schools and really poor performance; they now have a state-of-the-art system to show them how they’ve failed their children (Why do states with the “best” data systems have the worst schools? « Schoolfinance101′s Blog).
Garbage in…garbage out…
This sums up my basic philosophy of student data systems. There is a lot of talk about using “student data” to improve instruction these days. What I hear does not sound like “best practices”. My background includes work in public opinion polling (election polls) and seven years doing management reports at a large national bank. My prior careers give me a good background in “measuring” humans, and some appreciation for the limits of assigning metrics to what students do.
Here are some basic points to consider when you are looking at measuring assessment:
1. Are you really measuring the knowledge you would like students to gain?
Tuttle SVC: The High School Reform Disconnect
Duncan said he was impressed by students and teachers at Aviation High School and would like to see a hundred more schools like it across the country.
“This is a model for the country, absolutely,” he said, adding that the administration is interested in both charter schools and other innovative approaches.
But Duncan’s agenda in general and Race to the Top in particular are not geared toward this kind of school at all. Does he know this? What a phony snow job. What a bunch of liars.
Maybe not liars, but definitely using the wrong ruler for the job.
2. Are you, or is your state, “playing the numbers”? What percentage constitutes “proficiency”, and have you been playing with your “cut rates”? What about the Lexile level and degree of difficulty of the questions? Do you have five proficiency bands, or just three? There is a whole slew of posts and articles on this subject, but I’ll just share a few of my favorites:
The Proficiency Illusion — Fordham Foundation
This gives an overview of the testing systems from the states, and the different ways that proficiency is “played” with.
3D2Know ::: Data-Driven Decision Making – Best Practices
This article was from the links reading; it lauds the practices in Chicago, but there is critique of what was done with the data (Too Fast, Too Furious? | LFA: Join The Conversation – Public School Insights) and in The Bracey Report 2009, on page 26.
3. Understanding basic statistics, and the limitations and trade-offs of behavioral statistics (which is what test scores are), can help too.
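To make point 2 concrete, here is a minimal sketch (the scores and cut points are invented for illustration, not taken from any real state data) of how the same set of test scores can yield very different “proficiency” rates depending on where the cut score is set:

```python
# Hypothetical example: 20 invented scale scores for a class.
scores = [312, 325, 333, 341, 348, 352, 359, 365, 371, 380,
          388, 395, 402, 410, 418, 425, 433, 441, 450, 462]

def proficiency_rate(scores, cut_score):
    """Percent of students scoring at or above the cut score."""
    proficient = sum(1 for s in scores if s >= cut_score)
    return 100.0 * proficient / len(scores)

# Nudging the cut score down "improves" results overnight,
# with no change in what students actually know.
for cut in (400, 370, 350):
    print(f"cut score {cut}: {proficiency_rate(scores, cut):.0f}% proficient")
```

The same students go from 40% to 75% “proficient” just by moving the line, which is why cut-score changes matter as much as the scores themselves.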
Being Data-Driven vs. Data Informed
The term “data informed” comes from a fellow local teacher, Larry Ferlazzo, and I think it’s a good way to treat the data (“Data-Driven” Versus “Data-Informed” | Larry Ferlazzo’s Websites of the Day…). You shouldn’t ignore the data, but you also need to pay attention to the kid(s) in front of you, and use all your senses. One of the most influential pieces for my thinking about teaching is Ideology Trumps Evidence — Richard L. Allington (pdf), which puts forward the idea that good teachers are constantly monitoring students and adjusting their teaching to address changing needs. It’s for this reason that I think clicker systems, which allow for quick, formative assessment, may be much more useful in teaching than a whole slew of benchmark summative assessments.
What’s my role?
If a district doesn’t want its technology coordinator talking about these things, or doesn’t view it as their role, that says a lot about the job. It means you are expected to be more interested in “tech widgets”, and you are not part of the discussion about student achievement in any meaningful way. Some folks will be happy with that role. I would not be. It’s probably why the job of technology coordinator is not in the cards for my future.