Over the course of the past few months (well, years actually, but more recently it’s become a higher priority project and less of a “watch and see” project), Carleton and St. Olaf have been exploring federated search as a joint option for our two libraries. It’s entailed many meetings and informal discussions, quite a bit of research, and significant time imagining scenarios and functions and services.
The good news is that federated search products have improved, even compared to last winter. MetaLib, for example, has what looks like a slick set-up which would allow me to quickly and easily select databases that might be useful at the course- or assignment-level and to create a search box that will let my students explore just those resources. In fact, this more focused use of the system strikes me as incredibly appealing. I can imagine using this for almost every class I serve, but especially for the interdisciplinary classes. I would also look forward to using a federated search tool to look up a known item. You know the kind of search… you know the author or the title but not where it was published, and you have to go searching through 25 or 30 databases just to get a complete citation. These frustrating sessions could be a thing of the past, just like the similarly frustrating hunts for full text access were squashed by our link resolver.
Encouraging this kind of use of the tool while discouraging its use as a kind of Library Google wouldn’t be so hard, I think. It would mean limiting exposure to the “search every database under the sun” search box, and placing all kinds of subject-specific search boxes in the places where students will be likely to find and use them. I can imagine search boxes on every research guide, and I bet professors would be happy to put course-specific search boxes into their pages in our Course Management System. We’d have to be careful how we labeled and described these pages or we’d end up with the “every search box searches everything” problem all over again. But I think this could be accomplished, and what’s more, I think it could serve our students well.
At the same time that these exciting possibilities exist, though, federated search is still not up to par as a Library Google, or even as a tool for pointing students toward subject-specific databases. Not by a long shot. Do a keyword search for “psychology” and you still won’t get many results from PsycINFO, simply because the word “psychology” has very little descriptive value in a database wholly devoted to that subject, so it isn’t used very often in that database. Because of this, the tool can’t even serve as a pointing device to get students into a subject-appropriate database. All it’d point toward would be Academic Search Premier and ProQuest.
But will the students be satisfied anyway? We hear all the time that they don’t want the “best” sources as long as they find stuff that’s “good enough,” so we should provide access to some system that would supply “good enough” easily. Well, I didn’t necessarily agree with that line of reasoning to begin with, but in the last couple of years my co-workers and I have repeatedly experienced proof that general results from ASP and ProQuest do NOT satisfy our students. We have students who refuse to search ProQuest because of the sheer number of hits, many of which are irrelevant to their needs. If I do an example search in class and come up with 500 hits, a common response is, “But how do you ask it for more specific things? Isn’t that a lot to look through?” Seeing students flock away from ASP and ProQuest makes me think that an even more general search tool would not go over well in the long run.
And, of course, there’s the problem of the frustration and instruction time involved in helping students navigate collections of collections, but I’ve already written about that at length here. All I’ll add is that the one class of freshmen described there ended up requiring multiple sessions of clarification in class, time spent writing up detailed instructions that could be linked from the Course Management System, 7-8 hours of one-on-one time in my office, and about 4 hours of reference desk time, all told. And all that to accomplish a simple exploratory assignment that led them through a collection of collections. And while I don’t begrudge that time at all (I learned a lot by helping them through it, and it gave me excuses to teach them so much more than navigating American Memory), I can only imagine the amount of instruction and desk time we’d sink into a poorly implemented federated search product. Far from being a time-saver, I think it’d be a time-sink.
So at this point in the federated search life-cycle, I think it’s finally become useful if implemented smartly, but it hasn’t yet become useful as a monolithic library search tool. If we end up getting one of these things, I actually look forward to coming up with a careful and creative implementation that will maximize its benefits and minimize its faults. I think we could end up having a positive influence on our students’ search experience and outcomes if we do this well, just as I think both the experience and the outcomes will be disappointing if we do this poorly.
I have only one goal, and that is to serve my students well. I just wish I knew exactly what that would look like at this point. Even though I’m on the committee that’s supposed to recommend a tool to our libraries and should therefore be in a position to know which way we’ll go, I’m waiting with bated breath to see what we decide. The suspense is killing me!
FYI, Sol Lederman at the Federated Search Blog recently responded positively to your post.
I was intrigued by the thought (hinted at in that post) that perhaps some of my concerns about “one box to search everything” would be new to those in the business of creating federated search tools.