Several weeks ago, I had the chance to work with a group of high school teachers as we brainstormed new Inquiry Design Models. Any time I get the chance to spend time with a bunch of other social studies teachers, not much can ruin the day. Seriously . . . a whole day talking, sharing, playing with, and exploring the best social studies tools, resources, and strategies?
And during our time together we messed around with a tool that I had almost forgotten about.
The Pie Chart.
The Pie Chart is a powerful graphic organizer / writing scaffold / assessment tool / Swiss army knife. It does it all and is drop-dead simple. I first learned about the Pie almost a decade ago from social studies superstar Nathan McAlister.
Nate was part of our Teaching American History grant as the summer seminar master teacher and used the Pie Chart as a hook activity to kick start a conversation about the causes of the Civil War.
We’ve all been there. You just finished putting together a great instructional lesson or unit. Kids are gonna love it. They’re working together. Doing research. Creating stuff, not just consuming it. The historical thinking will be off the charts.
Then you realize . . . you haven’t created the rubric yet.
You know that clear expectations and feedback are critically important to the learning process. You know that rubrics can help you in assessing what students know and are able to do. So you sit back down and eventually decide to use four scoring columns instead of five. Six rows of criteria instead of three. Clear descriptors. Nine point font all crammed into your matrix so that it fits on one page. Definitely tons of feedback gonna happen from this beauty.
But it’s worth it, right?
Mmm . . . using a great rubric can speed up the grading and assessment process, but rubrics can also create other issues besides the amount of time it takes to create them. A student shows creativity way beyond what the rubric asks for, in a way you hadn’t anticipated, and your columns and rows aren’t able to reward it. Or a kid spells everything correctly but the grammar and punctuation are terrible. Maybe she nails the document analysis but fails to use evidence in her claims, and your rubric lumps those two things together.
And is there any way – other than individual conferences – to really know whether students actually dig deeper into your scored rubric than the final grade circled in the bottom left-hand corner?
Yes, analytic rubrics are useful. I’m not saying rubrics shouldn’t be part of your assessment toolkit. They can help you develop and create assignments that are aligned to your end in mind. They can provide clear expectations for students and a way to share feedback. But they can also be difficult to design correctly and may seem so overwhelming to students that the expected feedback we want never really sinks in.
And, sure, holistic versions are much quicker to create and use. So that’s nice. But they fail to provide specific and targeted feedback. You get a kid who wants to know why he got a two instead of a three or, worse, won’t ask at all. Missing the whole point of providing feedback in the first place.
Jill Weber is a middle school teacher in Cheney, Kansas and former Gilder Lehrman Kansas History Teacher of the Year.
Today? She talks rubrics.
One thing I love about the teaching profession is that we are constantly learning, growing, trying new things . . . all in the process of becoming better. This is true whether it’s your first year and you’re improving from the first month of school to the second. And it’s true if you’re a veteran teacher who decides to try something different to “shake things up.” There is always an opportunity to learn and improve.
One thing I am learning more and more as I keep going is how important it is to have clear expectations. Now, it’s not that I didn’t know that I needed that when I started, but I keep learning that what I think is “clear” doesn’t necessarily translate that way to my 7th and 8th grade students. I find that they ALWAYS do better when I am as simple and specific as possible with my expectations.
Don’t let that fool you. I didn’t say I lower my expectations.
I simplify my explanation of the expectations so that it is as clear as possible.
I am constantly getting better at this.
And one of my favorite examples is with my rubrics.
I am a FIRM believer in having rubrics to score students on. Nothing is more frustrating for students than to receive a score on a project or assignment without a clear picture of why they were given that score. So when I’m making and using rubrics in my classroom, I’m always keeping in mind this #1 major rule . . .
I’m not talking about an actual hat. Not a baseball cap. Or a visor. Or a bowler, beanie, beret, or bucket hat.
I’m talking about SHEG HATS.
As in Stanford History Education Group and History Assessments of Thinking.
I’m sure that you’ve been over to the very useful Stanford History Education Group’s site with its three different tools, right? (If you haven’t, mmm . . . go there now and be amazed at how your life will be changed.)
All of us at the KCSS have been pushing Sam Wineburg’s work for years, so I’m hoping you’re already familiar with the work his SHEG group has been doing around the idea of reading like a historian. They’ve packaged their work into three chunks – instructional lessons that focus on training kids to analyze evidence to solve problems, online civic literacy lessons, and, wait for it . . . History Assessments of Thinking.