Not-so-shocking: “Students’ writing scores lower on computer tests”

I was on Twitter yesterday when I noticed the following news article getting sent around, its headline screaming “Students’ writing scores lower on computer tests.”

Here is an excerpt from the opening paragraph:

Bend-La Pine principals and district officials have puzzled over recent preliminary writing test scores showing students who took the online version of a state writing test scoring lower than students who completed a paper version.

How can this be? How can a generation that has grown up with computers, interacted with them, taken keyboarding and computing classes, how can THIS generation score lower on a computer-based writing assessment when this is their domain, their kingdom? Shouldn’t they have scored higher? The prompts were the same, the time constraints were the same; only the medium was different. The experts in the district were puzzled and surprised.

What is surprising is not the results but the explanations: the kids are used to spelling and grammar check (which was not available to them even in the computer-based assessment), they were slower when they typed, they revised more often on the computer. The possibility no one raised was that the explanation lay in the way the essays were scored.

I know of at least two studies (see below) which have pointed out that typed essays are consistently scored lower than handwritten essays. Take the very same essay, give it to a trained rater first in its handwritten version and then typed, and the rater will score the typed version lower.

It seems to me that when we are considering different types of assessments for our students, we should be sure of what we are assessing.  In this case, the assessment told us that the raters prefer handwritten essays.

For more:


    • Lawrence Bruce
    • June 22nd, 2010 7:56pm

    Is that all…? In light of our recent work today, I’d like to offer a few comments.

    My favorite part of the article is when the author, attempting to explain why this occurred, wrote:

    “Students gave three reasons.

    One, they said they struggled with proofreading their work on-screen instead of printing their work out and editing it by hand.

    “When we’re working on a paper in class and we have access to computer labs, all my kids do multiple hard-copy drafts and edit on those, and they’re quite successful at that,” he said.

    Two, Molner said students are accustomed to using word-processing programs that feature spelling and grammar tools.

    “They’ve never used (a computer) that didn’t have that,” he said. “So they’re not as rigorous in their proofreading.”

    And three, students told Molner they seem to slow down and be more thoughtful when they handwrite a piece.”

    Where is the students’ response? All we have, at the end, is “students told Molner…” (the writing teacher). The author left out the critical voice that we wanted to hear.

    This “study” offers the argument that many educators and administrators with a stake in educational technology have been dreading: that technology in the hands of students is leading us in a backward direction. It seems easy to criticize the “scorer” when the results do not match what we want to see. It was stated that the scorers were trained to read both handwritten and typed essays (whatever that means; I hope it didn’t cost too much).

    Other reasons listed in the AP article suggested that over-reliance on writing tools that were unavailable during the test caused the poorer performance. A tool is a tool, until it becomes a crutch. The teachers clearly stated how much they rely on these tools in their writing instruction. Maybe the question is, “who is being left behind?” This article leads me to believe that it is the teachers who are being left behind, or at least performing below expectations.

      • Andrea Z
      • June 22nd, 2010 9:08pm

      Thanks for the nudge, Larry! I should’ve known better than to think I could get away with such a thin elucidation of my argument.

      My biggest issue with the original article was that not one explanation looked to the raters. The prompts were the same, the student population was the same, the teaching was the same: the only difference between the two groups was that one set of essays was composed on a word processor. Historically, more than one study has shown that even with trained raters, handwritten papers are scored higher. While some of the explanations offered may have contributed to some students scoring lower, one would still expect variation within the population. When the entire population of students scores lower on the assessment, it is logical to conclude that something other than individual factors is at work. Taking that into consideration along with the previous studies, I am led to question whether a rater bias for handwritten essays is at work.

      I completely concur that I would have liked to hear from the students, but I think it is an over-generalization to say that grammar and spell check have become a crutch and that the students’ lack of access to those tools on a computerized assessment led to their lower scores.

        • Lawrence Bruce
        • June 22nd, 2010 10:28pm

        Good times. I find your statement regarding the bias toward handwritten work interesting. I had not seen/heard that before.
