So after my last post, someone on Twitter sent this response:
— Dea Conrad-Curry (@doctordea) July 16, 2013
Basically, the teachers who are reporting on the New York State test may have seen so many questions on the structural choices of the author because these are likely just questions placed as part of “field testing” and will not count in scoring. That may well be what the state of New York and Pearson intended. But, even if Dea is right, and Grant Wiggins is right, I can tell you right now, teachers gonna teach what they think is gonna be tested. I’m betting teachers in New York are going to spend a bunch of time on the structural choices of authors next year.
I’ve been in two Common Core trainings already where teachers have stated they were going to skip some standards because they would not be tested (oral language) or would not have many test questions (one of the math standards), even though we have fewer math standards to cover with Common Core. This testing has so corrupted us that we have forgotten that we are preparing our students for the next grade (which might need some knowledge from the standard they are skipping) and for life (where they will need to talk to other people about ideas). I leave you with two thoughts:
When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But more experienced and qualified professors’ students did best in follow-on courses (i.e., their intro students did best in advanced classes).
The authors speculate that the more experienced professors tend to “broaden the curriculum and produce students with a deeper understanding of the material.” (p. 430) That is, because they don’t teach directly to the test, they do worse in the short run but better in the long run.
To summarize the findings: because they didn’t teach to the test, the professors who instilled the deepest learning in their students came out looking the worst in terms of student evaluations and initial exam performance. To me, these results were staggering, and I don’t say that lightly. — from “Do the Best Professors Get the Worst Ratings?” Psychology Today
And after reading this gem from Tom Hoffman, try to imagine what would happen if AAA included questions about toothpaste on its survey, but didn’t count them in the ratings. How would that change what the hotels put out, hmm?