Terry Reese, Russell Schelby, and I have met with several groups in the library to gather feedback and expectations for the Discovery Redesign. We received a great deal of valuable information from these sessions, and will continue to meet with people to showcase new features.

But one point has come up repeatedly in these sessions: how we have labeled each set of results. Even before that, usability testing with non-library employees indicated confusion about this issue. What does the label indicate about the kind of results returned? What will our users think each section offers? Does the label mean the same to the ‘typical’ new OSU Libraries user as it does to those of us more accustomed to library terms?

As a starting point, we labeled the result sections based on a few different criteria. The labels can, and likely will, change. But for now, the criteria:

The source of the results

For example, the Library Catalog section has results from, you guessed it, the OSU library catalog. The Special Collections section was labeled that way to be consistent with our searchable Special Collections system. This label is more problematic for a few reasons. Special Collections is much more than the finding aids included in the results. By labeling the section as we do, we need to consider whether we are leading users to believe (incorrectly) that we’re searching against all our special collections content (digital assets, web site, collections information, etc.). And again, will the average user know what a ‘special collection’ is or appreciate the way that we define a ‘special collection’ in the libraries? Will a user understand that we only mean main library ‘special collections’ (and archives) and not special collections at the Health Science Library, the Law Library, or our regional Libraries? Will a user make the same distinctions we do and understand why Area Studies isn’t a ‘special collection’, but Archives is? We expect more usability testing will help clarify our users’ expectations.

Labels commonly used in other bento-box style library search tools

Because we are building a search tool that takes advantage of established search patterns, it follows that we would also survey how other universities with similar tools have labeled their results. Using terms that library users have likely seen before can ease confusion when searching for information. The libraries surveyed are at North Carolina State University, Princeton University, and the University of North Texas.

An overview of OSUL’s and other libraries’ search tool labels with the types of results in each section.

Usability testing based on established principles

An excellent resource that can guide us on this project is the Nielsen Norman Group, which provides evidence-based user experience research. The title of Hoa Loranger’s article, Avoid Category Names That Suck, makes plain what we instinctively know: users don’t like to be confused, and clear language is one of the foundations of good user experience. But it can be challenging to put that into practice.

Another very useful resource for this issue is John Kupersmith’s article, Library Terms That Users Understand. The article, “…is intended to help library web developers decide how to label key resources and services in such a way that most users can understand them well enough to make productive choices.” The key recommendation from an examination of many usability tests was that users were more successful in making the right choices when offered natural-language ‘target words’ such as ‘Find books’ and ‘Find articles.’ It is also best to avoid library terms such as ‘Database’ and ‘E-journals’ unless they can be enhanced with simple graphics, tooltips, glossaries, and the like. We have added a kind of glossary for the Discovery tool, and are also discussing other ways to further clarify search results.

Based on feedback from both library employees and, of course, our users, we will be making continuous improvements. For those of you who have already provided feedback on the tool—thank you! You can continue to do so by clicking the Feedback tab on the right side of the tool.