Crossing the rubricon: turning a DH lens on higher-ed assessment

October 24th, 2010

I’ve had a few ideas for sessions, but since the first few posters have come from the land of comp and rhetoric, I’ll put the following up as a crazy project idea: use the tools of digital humanities to examine discursive assessment of student work.

Many faculty, students, and staff work in colleges and universities under pressure from accreditors to quantify and standardize assessment of student work. My impression is that those working in humanities and social sciences are probably less pressured to create multiple-choice exams than to “rubricize” writing assignments — create templates that apply grading criteria to student work, and then have faculty and grad students apply those templates to key assignments.

I have clear grading criteria for my own undergraduate classes, and in theory I have no problem with something like this, but it rubs faculty the wrong way, in part because the so-called rubrics tend to be committee products that flatten the evaluation process in two ways — they are usually insensitive to disciplinary/field perspectives, and they squeeze out any issue that doesn’t fit the stated criteria. Forgot to include “doesn’t pretend that the Holocaust never happened”? Oops — can’t downgrade a student for not knowing anything about reality. (There’s also the question of whether every key piece of student work subject to such evaluation should be scored by a single rater with streamlined “reliability checks,” as in standard psychological assessment techniques, or evaluated by multiple readers, as in juried studio work. But that choice is largely constrained by student-teacher ratios.)

Many faculty would like to have the time to comment in detail on student work, because our experience of evaluation is discursive — you write something, you receive comments, you read, maybe respond, and so forth. But that evaluative discussion is itself a textual product, whose analysis is the humanist’s stock in trade, and the analysis of very large bodies of text with varied structure is one of the goals of digital humanities: what can technology do for textual analysis?

So, to the project: what if we designed a way to collect textual commentary on student work and analyzed the commentary alongside the ratings? There are loads of questions one could answer with such data, especially if the collection is cleverly designed. Instead of focusing on specific research questions, this session would focus on a package of tools and a practical how-to guide for designing such studies.
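To make that concrete, here is a minimal sketch in Python (standard library only) of the kind of side-by-side look at commentary and ratings I have in mind. The file name, the CSV columns, and the comment-length/rating correlation are purely illustrative assumptions, not a proposed design.

```python
# Toy sketch: read a hypothetical export of rubric ratings plus free-text
# comments and look at them side by side. Nothing here is a fixed schema.
import csv
import statistics
from collections import Counter

def load_reviews(path):
    """Rows of (assignment_id, rater, rating, comment) from a CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        return [{"assignment": row["assignment_id"],
                 "rater": row["rater"],
                 "rating": float(row["rating"]),
                 "comment": row["comment"]}
                for row in csv.DictReader(f)]

def top_terms(reviews, n=20):
    """Crude term frequencies over all comments (no stemming, no stop-word list)."""
    words = Counter()
    for r in reviews:
        words.update(w.strip(".,;:!?\"'()").lower() for w in r["comment"].split())
    return words.most_common(n)

def length_rating_correlation(reviews):
    """Pearson correlation between comment length in words and numeric rating
    (statistics.correlation requires Python 3.10+)."""
    lengths = [len(r["comment"].split()) for r in reviews]
    ratings = [r["rating"] for r in reviews]
    return statistics.correlation(lengths, ratings)

if __name__ == "__main__":
    reviews = load_reviews("reviews.csv")  # hypothetical export file
    print(top_terms(reviews))
    print(length_rating_correlation(reviews))
```

Nothing in that answers a research question by itself; it just shows the shape of the data such a study would need to collect, and how little tooling it takes to start poking at it.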


For those who are curious, I’m a long-time academic blogger whose training is in U.S. social history and demography and who works in a college of education. I’ve been to one THATCamp Prime (2009) and avidly watch the Twitter feeds for the various THATCamps.


One Response to “Crossing the rubricon: turning a DH lens on higher-ed assessment”

  1. johnsonj Says:

    Sherman

    As one of those comp/rhet folks, I’ll admit to sharing interest in this idea, especially in the tool for collecting and assessing commentary. I haven’t played a part in the administrative side of program assessment, but I do have colleagues currently wading through this requirement who might offer their insights and reactions.

    As far as tools are concerned, I’m imagining something along the lines of CommentPress for inline commenting and perhaps a rating system that attempts to gauge the commentary. This rating system could be open to both instructor(s) and peers. Something similar to Drupal’s Voting API module might work.
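    Before committing to either tool, the underlying data model is simple enough to prototype. Purely as a sketch, with field names that are my guesses rather than anything CommentPress or the Voting API actually uses, an inline comment anchored to a span of the student text might look like this:

    ```python
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class InlineComment:
        """One piece of commentary anchored to a span of a student draft."""
        assignment_id: str
        rater: str
        start: int          # character offset where the highlighted span begins
        end: int            # character offset where the span ends
        body: str           # the free-text comment itself
        votes: List[int] = field(default_factory=list)  # instructor/peer ratings of the comment

        def mean_vote(self) -> Optional[float]:
            return sum(self.votes) / len(self.votes) if self.votes else None
    ```

    Anchoring each comment to character offsets keeps it analyzable later: one can ask which parts of a paper drew commentary, not just how much commentary there was.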

    Of course, that leaves the central question unanswered: how to quantify the effectiveness of comments.
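    One crude way in, offered strictly as a strawman: score each comment on cheap surface features and see whether any of them track the votes readers give it. The feature list and word lists below are illustrative guesses, not a validated measure of feedback quality.

    ```python
    def comment_features(text):
        """Surface features that *might* proxy for useful feedback; purely illustrative."""
        words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
        return {
            "length": len(words),
            "questions": text.count("?"),  # does the commenter ask the writer anything?
            "hedges": sum(w in {"maybe", "perhaps", "might", "consider"} for w in words),
            "directives": sum(w in {"revise", "cut", "expand", "cite", "clarify"} for w in words),
        }

    print(comment_features("Consider cutting the second paragraph: what is it doing for your argument?"))
    # {'length': 12, 'questions': 1, 'hedges': 1, 'directives': 0}
    ```

    Whether any of those numbers correlate with students actually revising better is the real research question; the point is only that quantifying the comments need not mean anything fancier than counting.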