
Uncovering Research Practices in Student Writing

When I was a baby librarian, I thought Information Literacy was about searching and evaluating. The ACRL standards had some other stuff in there, but it seemed too abstract for me to do much about. Keywords, operators, relevance, currency, authority — just learn the formula and my work here is done. No wonder librarians were the only people who cared about information literacy, I thought.

In my defense, I was young. In my defense, this is how it had been presented to me all the way up through library school.

In the past three years, I’ve been part of a project that really expanded my thinking and made me fall in love with what information literacy could be and with the ways in which it really is relevant to people and projects on my campus.

But let me back up.

All of our sophomores are required to submit a portfolio of their writing, and passing this assessment is a graduation requirement. When they submit their portfolios, they’re given the choice of designating that their writing can be used for research, and many of them do. Lately the college has been running three large projects (that I know of) based on these writing portfolios:

  1. Our quantitative initiative (QUiRK) reads a subset of the papers to determine how sophomores use quantitative evidence in their writing.*
  2. The writing program and SERC are pairing up for the Tracer Project, which studies how faculty development (which includes writing portfolio assessment) impacts student learning.
  3. And starting in 2008, we in the library have been reading portfolios to see how information literacy is revealed in academic writing at the sophomore level.

As part of that last one, my department had fascinating hours of discussion about what we could and couldn’t evaluate about information literacy when presented with a finished paper. One of the most interesting and useful of these discussions (for me) was the one that revealed that we could, in fact, assess evaluation of sources even when the paper didn’t use “outside” sources beyond primary sources or sources prescribed by the professor. We could watch students picking primary sources, even from assigned readings, that worked well together and could be used to make a compelling point, or we could see them cramming two such sources together and either treating them entirely separately or otherwise failing to use them instrumentally toward making a point. We also confirmed what we had always suspected: that implementation of attribution was about more than just mechanics, and that failures in attribution could often signal a fundamental misunderstanding of the sources the student was using or of the purpose of reporting evidence in the first place. And we articulated for ourselves some of the ways in which integrating evidence into a paper can help or hinder the student’s rhetorical goals.

We couldn’t assess much (if anything) about the actual steps in the process that resulted in the writing we had in front of us, but we could look for habits of mind associated with using evidence, and we could look for the ways in which conventions of communicating evidence manifest in sophomore level student writing.

In the end, after much testing and revision, we came up with a rubric for assessing information literacy in writing and sat down to score papers. And yesterday, we finally presented our work and some preliminary findings. We handed around a sample of student writing, watched as the faculty and staff attendees pulled interesting and useful insights out of it and then all came up with exactly the same score on the rubric (inter-reader reliability!), and had a fun discussion about how this could be used on campus to build shared expectations for information literacy and to help inform our teaching.

For my part, participating in this project has fundamentally changed one of the major ways I think about my work. It was so liberating for me to realize in concrete fashion that “information literacy” does not equal “the research paper.” All of a sudden I discovered that I do have something to contribute to those parts of the curriculum that interest me but that don’t produce traditional research projects. All of a sudden I realized that I don’t have to help faculty squeeze research projects into courses where those projects don’t fit naturally, and that instead we could talk about context-building skills or source interpretation skills for thought-pieces, class discussions, and other non-research assignments.

For me, this project helped me realize that I actually do like the concept of information literacy and that it actually does have meaningfully deep and cross-cutting applications on a liberal arts college campus — that it’s not simply about making mini-librarians out of our students or about searching for searching’s sake. I’m hoping that as we open it up to include faculty readers this year, that same sense seeps through the campus. I hope this is something we can get behind and dig into and find interesting, and that what we learn from analyzing these portfolios will meaningfully inform our practice as teachers.

I’m just so excited about this project, and so glad to be involved in it. It’s probably been the most eye-opening and practice-changing project I’ve participated in.

* Rutz, Carol, and Nathan D. Grawe. “Pairing WAC and Quantitative Reasoning through Portfolio Assessment and Faculty Development,” Across the Disciplines, December 2009; Grawe, Nathan D., Neil S. Lutsky, and Christopher J. Tassava. “A Rubric for Assessing Quantitative Reasoning in Written Arguments,” Numeracy, January 2010.
