Thursday, February 18, 2016

Consistency in Assessment: Is It Possible?


In Chapter Two of A Guide to College Writing Assessment, O’Neill, Moore, and Huot outline the concerns related to consistency in assessment:

“[T]he problem for writing assessment (Huot and Neal 2006, 1) was ‘framed’ (Schon 1982, 40) as what could be done to make independent readers agree on the same scores for the same papers. This is not an easy or inconsequential task. Without consistency in scoring, students’ scores on their writing would depend upon who read the papers rather than who wrote them. Without consistency in scoring, it would be impossible to argue for the validity of decisions based upon such scores.” (19)

I think this issue of getting readers to score writing exams in a similar way deserves its own blog post. I remember having a conversation about “calibrating” grades during my TA training at another institution. We spent a couple of hours working in small groups to discuss how we could all score essays similarly. Each group member graded the same essays using the same rubric, and then we compared the grades we had given. While some of the essays were clearly very well developed, many of them fell in the B or C range. But there’s a big difference between a B and a C, and as graders we struggled to agree on what constituted each grade. That’s where the rubric was supposed to help, but it proved to be less useful than intended. By the end of the session, I remember being frustrated because we had not “calibrated”; we continued to take very different approaches to grading and scoring. Other programs also try to implement calibration sessions (here’s one example of a calibration guide: http://www.ride.ri.gov/Portals/0/Uploads/Documents/Teachers-and-Administrators-Excellent-Educators/Educator-Evaluation/Online-Modules/Calibration_Protocol_for_Scoring_Student_Work.pdf).


Is calibration possible? Certainly we can get together and talk about what kinds of things the program wants us to emphasize in our assessment, and we should all be using the Writing Program’s grading criteria: http://cms.bsu.edu/academics/collegesanddepartments/english/forcurrentstudents/writingprogram/evaluation-criteria

But instructors still emphasize different concepts or skills. Some of us use rubrics and some do not, and I think that works to our advantage because we are not forced to evaluate in a way that doesn’t fit our own teaching styles. Still, I wonder if we are adequately preparing students for the writing proficiency exam (*sigh). While I was reading this chapter of the book, I kept thinking: I wonder how the writing proficiency exam is graded. I wonder what the score sheets look like and how readers are trained to use them. I wonder if we could find that information and then build those specific skills into some of the assignments in our classes. But I also wondered whether that should be part of our role as writing instructors. It reminds me of high school teachers preparing students for the ACT/SAT, and I certainly don’t want to be teaching to a writing proficiency exam. I hope I am already teaching students enough that they are adequately prepared for it, but I honestly have no idea what the exam looks like or how it’s graded. Perhaps it should be more transparent for instructors in the writing program, even though the exam isn’t administered through the writing program (right?).


As the authors of A Guide to College Writing Assessment remind us, “Without consistency in scoring, it would be impossible to argue for the validity of decisions based upon such scores” (19). What does this suggest about the writing proficiency exam at Ball State, and what are the implications for our writing program?






Also, here's the link to the exam's grading criteria (which I think are vague and still don't show us the score sheets): http://cms.bsu.edu/academics/collegesanddepartments/universitycollege/writingproficiency/exam/howitworks/grading/criteria


And here's the link to information about how the assessment works: http://cms.bsu.edu/academics/collegesanddepartments/universitycollege/writingproficiency/exam/howitworks/grading/examassessment


3 comments:

  1. This comment has been removed by the author.

  2. I think trying to teach to the writing proficiency exam specifically could add an extra burden to our courses. We have certain pedagogical goals that are not necessarily aligned with the exam requirement. Personally, I do not think the exam is needed; facilitating the writing program requirements (which takes quite a significant commitment on the part of instructors when you consider how much time we spend reading, writing, and planning the class) should be considered sufficient when it comes to assessing students' proficiency. I feel that instructors, having worked with the students for one or even two semesters, are in a better position to do this than an exam reader measuring the results of a one-time effort.

  3. I just exited out of this tab in the middle of my reply to Alyssa's post and I lost it (angry face), so I'm going to keep this re-do short and simple.

    I was putting Alyssa's concerns about the Ball State proficiency exam into conversation with the section on the "assessment of proficiency" in the CCCC Position Statement on Writing Assessment. Basically, my argument was that our proficiency exam does not follow those guidelines in several ways. For example, the guidelines suggest that students' writing proficiency should be determined from multiple pieces of writing that respond to a variety of writing situations, yet ours is a one-essay kind of deal.

    I won't belabor the point, though, because I know I'm preaching to the choir. I do wonder who the stakeholders are that refuse to consider our expert opinion on the matter.
