I volunteered to lead our annual departmental writing assessment session this year, in which a group of faculty sit together in a room and read sample student essays selected by some magic algorithm known only to the dean in charge of university-wide assessment — or perhaps only to his chief elf. It can be a pretty mind-numbing task as the hours roll by, but I have to say that today’s session was the most pleasant I’ve attended. Perhaps it was because there is a modest stipend for the job, so mostly junior faculty volunteer, and we have a particularly fine group of assistant professors in the department at the moment; perhaps it was because I was nominally in charge of the operation; but the real difference from earlier sessions was the absence of several control-freak senior colleagues who felt compelled to impose their certainty about the nature of college writing on others. Endless argument over meaningless details. Today, we were so efficient we even developed a set of notes for improving the process in the future.
Assessment, of course, is all the rage in education policy circles these days. The result is mostly a dreary proliferation of standardized tests at the K through 12 level and an equally dreary emphasis on “outcomes assessment” in higher education, in which the outcomes must be quantifiable. The problem is that lots of meaningless things can be quantified and stuck in spreadsheets and made to look significant when the truth is that the numbers say little or nothing about the experiences students are actually having with texts and ideas. I think it is perfectly reasonable for students and their families, and even the state and federal government agencies that fund education, to ask colleges to assess the relative success or lack of success they are having in educating students; but my notions about what constitutes success are probably not what they are thinking of in the dean’s office or in the high councils of the education bureaucracy.
I consider myself a successful teacher of literature when I get students to develop a set of humane critical attitudes that they are able to apply to reading and writing about literary texts, with “literary” being very broadly defined to include everything from Shakespeare and Emily Dickinson to Bob Dylan and much of popular culture. We certainly need not confine ourselves to “high” or canonical arts and texts, though this does not, in my view, relegate value judgments to the ash heap of history. If I have a distinctive approach to literary studies, it is to raise the question of literary value outside the stultifying categories of high and popular culture.
But how do you measure the development of a humane and critical attitude, even if you can get more than a handful of literature professors to agree that that is what they ought to be doing — to say nothing of deans and bureaucrats? I am certainly not an assessment guru — most of the research I’ve looked at makes my eyes glaze over and then roll back in my head. In terms of the task my colleagues were engaged in today, I would say that we could certainly design a more effective rubric for writing in the Humanities and probably another for writing in the Social Sciences, but even this presents a serious problem for our interdisciplinary department, which includes faculty who work within both of these discourses. We all agreed — at least we did today — that the basic rubric we were using is almost entirely without merit, designed primarily to generate numbers that can be attached to our first-year Clarkson Seminar course, which is Clarkson’s version of “Freshman English,” despite the fact that a minority of the faculty teaching the course are “English teachers,” though that is how most of our students regard them.
It seems to me that one could possibly define and then start to measure evidence for two dimensions of thought that cross virtually all disciplinary boundaries: Analysis & Synthesis, or Understanding & Imagination. There are probably other names for them, and I’d argue that, at the very bottom, the two dimensions line up pretty neatly with Reading and Writing. Assessment could be conducted as follows: Give students excerpts from three texts with related themes or ideas — or perhaps five excerpts — and ask them to read the excerpts carefully in preparation for a writing assignment. Next, students would be asked to write a short essay of around three pages in which they describe as clearly as possible the central idea they have drawn from the three texts (if using five, they’d choose three). This would be a take-home essay that could then be uploaded to a service like Turnitin.com for evaluation by a group of instructors using a rubric that would measure (1) basic mechanics (2 points), (2) understanding of the texts (4 points), and (3) imagination (6 points). Or something like that. The details are less important than the central idea.