Yesterday was my first day of exam grading at home during the holidays, and I tried to make it delightful. A quota of only ten exams would have left me short of a full period's worth of grading, so I grabbed 13 exams with a mind to finishing off all the exams from my largest group.
As it happens, many students had a lot to say about every question, and that verbosity, combined with ignoring the instruction, "THIS SECTION: ANSWER THREE QUESTIONS," meant many fell short of finishing. Thus, not much kindness was bestowed upon me, except for the below, which is the biological equivalent of standing up during a math exam and shouting, "I am an integral." That man is Drake, by the way. (smh, Carla!)
Thus, my mind turned to data, and to the relationship between skipping constructed response questions and the constructed response grade outcome. I anticipated a correlation of 100%, but this is what I found (n=25):
There is a weak correlation, to be sure, and not answering questions lowers the grade outcome, which is only logical. However, I would like to state for the record that answering all questions briefly is a better strategy than going into the minutiae on the first half of the exam at the expense of having any time at all for the second half. The average number of points missed on this exam was five, and indeed, we had already lowered our constructed response denominator by four in anticipation of this likelihood. Once I add the other section of students to the mix, I will be able to make a data-driven decision as to whether those four points of forgiveness stand or whether they need adjustment. This will occur in consultation with my teaching partner and her findings for her groups.
I have repeatedly observed that impressive semester exam Scantron scores precede good semester exam constructed response scores (when I finally get around to grading them). Just how predictive were the Scantron scores in my first group of students? As it happens, there’s a formula for that (n=25):
What will be interesting to see is whether my additional group (n=21) strengthens this predictive relationship or breaks it down. It would also be fun to apply this formula ahead of the Semester 2 exam constructed response grading to see whether it is useful beyond this one occasion. I am still trying to decide what, if any, data-driven decision I could make using this data. I do contemplate the accuracy and fairness of continuing to use historical assessment practices when so much on the front end of teaching and learning has changed, but that's a story for another day!