Month: November 2014


I had a great day of assessing students’ small group discussions (SGDs). My sophomores are getting better and better at deep, careful analysis. Without a form/sheet/worksheet telling them what to look for in a text, they are digging deeper, noticing ways of seeing literature that veer away from the blatant, in-your-face elements or strategies floating on the surface.

And better still, they are starting to see the importance of seeing the non-obvious.

But one comment stands out from a discussion I had after a group finished their analytical presentation. One student asked why I didn’t have a formal rubric for these SGDs. Before I could answer, another student responded. She was glad there wasn’t a rubric because if there had been, she would have just done what was required. Not to mention, she added, the list would have broken up her thought process as she tried to make sure everything was covered instead of allowing her to move beyond my expectations.

What a thoughtful response.

Before I continue, let me say that I use rubrics. I use them on a regular basis. I went through college education courses when rubrics were the “it girl” of teaching; it’s in my pedagogical blood.

But this student hit the nail on the head. Teachers design rubrics as a means to provide specific feedback; students, however, use them as checklists. “Tell me exactly what to do,” they say, “and I’ll do it.” No more, no less. “What does it take to get an ‘A’?”

The students know that I expect them to provide insightful analysis during SGDs, but I don’t tell them they need to comment X number of times and provide Y pieces of text evidence and ask Z questions. That would turn a great learning opportunity into a simple mechanical exercise.

Who knows how much text evidence is needed to support a point, how often they will have to speak up to defend their analysis, or whether they even need to ask a question? Every educational situation is different. All I can do is assess the depth and delivery of their analysis and provide feedback so they can grow. Checklists stunt that growth.

As I said earlier, I use rubrics, especially with writing. But I constantly find myself fighting with them. No matter how much I tweak my rubrics, I always feel they’re incomplete. This may very well be a fault of my own; however, anything set in stone, or paper for that matter, seems limited and leaves my assessment fractured.

Let’s say that I’m going to assess a student’s depth of analysis, text evidence, organization, and conventions. I attempt to create a rubric that breaks down different levels of mastery of each element into enough sub-categories to provide meaningful, individual feedback.

And then I get started. Student A falls between levels 2 and 3 in two of the four categories. Great. How does my rubric explain that in a meaningful way, except by circling between two boxes?

I then find that student B’s sentences all start the same way. A rubric can never cover every writing element. So how do I tell student B to work on sentence structure when it isn’t on the rubric? I can make a note, but if it isn’t on the rubric, it may be dismissed as not important enough to affect the grade.

Both of these issues limit the potential of individual student growth by predetermining what feedback students will need before I ever see their writing.

Which leads to the third and most important issue with a writing rubric, or any analytical rubric: it keeps you from seeing the forest for the trees.

Text evidence and organization both exist to support depth of analysis. How can they be seen as independent of one another? Writing can’t be looked at, in any meaningful way, as unrelated parts with no connection to one another. Those parts work together to create meaning and, therefore, cannot truly be assessed separately.

Personalized commentary can provide meaningful feedback. A conversation with a student can provide meaningful feedback. Detailed notes can provide meaningful feedback. A checklist will always be limited.