Romancing the quantifiable

November 22

“Not everything that can be measured is important. Not everything that is important can be measured.” — Einstein

Brian Crosby at In Practice recently discussed the “safety” that educators can find in using set instructional programs. It got me thinking about how we love our numbers. Recent financial events show where an uncritical romance with numbers can lead. Much of what was done looked great on paper, but took on a less rosy hue when it met time and reality. Numbers can be nice, and there is a natural human tendency to trust them because they look solid and unambiguous, but numbers are like all information: they have to be approached critically. What is being measured? How were the numbers assigned? Do the numbers you are measuring match the goal or standard you are trying to achieve?

“Inferential methods:

On average, students in the River City treatment scored b.2 points higher on the post self-efficacy in general science inquiry section of the affective measure (t=2.22, p<.05).

On average, students in this sample who saw higher gains in self-efficacy in general science inquiry scored higher on the post-test. These gains were higher for students in the River City project. (n=358)

Yet these results tell us nothing about patterns, behaviors, and processes that lead to inquiry.

We are also limited by # of variables we can build into our inferential models.”

from Dr. Chris Dede, Harvard University, in “New Strategies for Educational Assessment” at the ILC 2008 Conference.
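As an aside, for anyone curious what that kind of comparison looks like under the hood, here is a minimal sketch of a two-sample t-test in Python. The scores and group sizes below are invented purely for illustration; they are not the River City data.

```python
# Minimal sketch of the kind of inferential comparison described above:
# a two-sample t-test on post self-efficacy scores. All numbers here
# are made up for illustration, not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
river_city = rng.normal(loc=76, scale=10, size=180)  # hypothetical treatment group
comparison = rng.normal(loc=72, scale=10, size=178)  # hypothetical control group

t, p = stats.ttest_ind(river_city, comparison)
print(f"t = {t:.2f}, p = {p:.3f}")  # "significant" by convention if p < .05
```

Note what the test does and does not tell you: it says the group means differ more than chance would explain, and nothing about the patterns, behaviors, or processes behind that difference, which is exactly the limitation the slide points out.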

Dr. Dede’s was one of the more interesting presentations for me at ILC. He has worked on assessment theory on a federal panel. His conclusion: if you do formative assessment really well, summative assessment becomes superfluous. His presentation was a case study of a project that uses an immersive environment, called River City, to teach students life sciences and problem solving. The project had students gather facts and work through the interaction to try to discover the cause of a mysterious illness in a turn-of-the-century town. This was an unusual learning situation because, rather than too little information, the researchers almost had too much. What they found was that traditional testing measures missed mastery that was shown through the observational data.

Had most of us world enough and time, we could gather tons of observational information about our students. When you’re at Harvard, you get grad students to do it, so they combed the log files from River City and coded the students’ activities. In addition to the activities students did while trying to “solve the mystery,” they also had to do a “final project,” writing up their findings in a letter to the mayor, and they were given a standardized test. This is where things got interesting: students in the study who showed mastery based on the logs did not always show mastery on the final project or the standardized test.
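To make “combing the log files” a little more concrete, here is a toy sketch of what coding observational data might look like. The file format, column names, and action labels are all invented for the example; the actual River City logs and coding scheme were far richer than this.

```python
# Toy sketch of coding activity logs, assuming a simple CSV with
# student_id and action columns. The action labels are hypothetical;
# this is not the River City format or coding scheme.
import csv
from collections import Counter

# Hypothetical mapping from raw log actions to "inquiry" codes.
INQUIRY_ACTIONS = {"take_water_sample", "view_microscope", "interview_resident"}

def code_log(path):
    """Count inquiry-related actions per student from a log file."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] in INQUIRY_ACTIONS:
                counts[row["student_id"]] += 1
    return counts
```

Even a crude tally like this captures something a post-test score cannot: what the student actually did while working on the problem.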

So the question is, what are we preparing students to do: take a test, or solve problems with others? You could make the argument that they should show mastery on the report to the mayor, because being able to communicate what you’ve done is an important part of being a scientist, but you’ll never fill out a multiple-choice grid to show what you’ve discovered.
