After a mostly underwhelming first full day of conference sessions, I found myself inexplicably awake and deciding to attend a dual session on federated search. The first presenters had done a study on several of their campuses (the Utah, Idaho, and Hawaii campuses of Brigham Young University). They wanted to know if federated search actually saves time, if students were satisfied with their results, if they preferred federated search to searching individual databases, and (of course) if the results were as scholarly as those returned by direct database searching. So they set up a study where they had students perform tasks using both a federated search implementation and direct database searching, and they did this on all three campuses.
What I found interesting, and more enlightening than the results themselves, was that the results often differed wildly by campus. The Utah campus saw a lot of time savings whereas Hawaii saw almost none. The Utah campus preferred federated search while Idaho preferred directly searching individual databases. This is fascinating because it shows that a tool, in and of itself, is not better or worse. There’s an element that’s dependent on students’ prior experience and current environment. It’s kind of like IM reference, which takes off like nobody’s business on some campuses and sits there feeling left out on others.
The other part of their study that I find telling is that they tested only biology students searching for biology topics in five biology databases and two general databases. I would love to see a similar study done across more disciplines.
Oh, and in case you’re interested: students were generally satisfied with federated search to the tune of 5.59 on a scale of 1 to 7 (and one comment was that federated search was ok, but the student would just go back to “good old Google”), and the qualitative differences between the journals retrieved were… ambiguous. Librarians thought the results were slightly less scholarly, faculty thought they were slightly more scholarly, and all of this is based on students running their own searches in the different tools.
So what do I think of federated search now? Well, I’m conflicted. Some of my students might benefit from it, but those who would are still more inclined to use Google. On the other hand, most of my students have to struggle through particularly disorganized databases (MLA International comes to mind…), and I’m not confident that the results from these databases will be able to compete with tasty but less useful results from other places. But if I learned one thing, it’s that you have to test YOUR students in YOUR situation before jumping to conclusions. I don’t think federated search has “arrived” yet. But when we come to the point where we’re making this decision, we’ll have to test its implementation on our students.
Just to add to this, Marist College then presented on their experience with a new implementation of a search box on every subject research guide that searches databases and Google (the search box is labeled “Search databases and Google”). They saw increased usage of their databases and full-text holdings.
So federated search is definitely a force to be reckoned with. I’m looking forward to keeping my eye on it as it matures. I still think the ultimate fix to the problem is to get feeds of information from vendors and to have applications that mix and manipulate the information in those feeds (as I mused here)… but that’s a real reach-for-the-moon solution.
technorati tags: acrl2007