I have just been reading a recent article by Kathy Enger* published in Library & Information Science Research that examines the potential value of citation analysis as a selection tool in academic library acquisitions. Enger proposes that citation analysis of the journal literature might be used to identify potentially high-impact books for inclusion in a college or university library collection. The reasoning is interesting: building on the observation that humanities and social science scholars rely more heavily on monographs than on journals as a vehicle of scholarly communication, a sampling method is used to identify high-impact journals in the social sciences and then to cull from these the most-cited authors. If these authors have also published books not already represented in the local collection, the titles are acquired on the premise that their content is likely to represent ‘high value’ scholarship. Library circulation figures are later examined to determine whether these titles are used (borrowed) more frequently than titles selected through traditional means.
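The workflow described above can be sketched in a few lines of code. This is purely illustrative: the data structures, the citation-tallying approach, and the top-n cutoff are my assumptions, not the actual method used in the study.

```python
# Hypothetical sketch of the citation-analysis selection workflow described
# above. All data structures and thresholds are illustrative assumptions,
# not the study's actual procedure.
from collections import Counter

def top_cited_authors(journal_articles, n=10):
    """Tally citations per author across a sample of articles drawn from
    high-impact journals, and return the n most-cited authors."""
    counts = Counter()
    for article in journal_articles:
        for author in article["cited_authors"]:
            counts[author] += 1
    return [author for author, _ in counts.most_common(n)]

def candidate_titles(top_authors, book_catalog, local_holdings):
    """Books written by top-cited authors that are not already represented
    in the local collection; these become acquisition candidates."""
    return [
        book for book in book_catalog
        if book["author"] in top_authors and book["title"] not in local_holdings
    ]
```

Even at this level of abstraction, the sketch makes the conflation discussed below visible: `top_cited_authors` ranks authors by how often they are cited in journal articles, while `candidate_titles` quietly assumes those same people are the relevant book authors.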
This seems like a proposition worth testing. One might reasonably assume that since book literature is more often cited by humanists, the authors cited in that literature are also likely to be authors of books. Yet by selecting the most-cited journal authors, rather than the authors most frequently cited in the journal articles, it seems to me that there is some misalignment between intent and outcome. If the most-cited journal authors exhibit the same citation behaviors as the profession as a whole (i.e. they reference more book literature than journal articles) and also produce books themselves, then perhaps the conflation of ‘most-cited journal author’ and ‘book author cited by high-impact journal articles’ is immaterial. I do not know whether the most highly cited authors of articles in sociology, say, are also the authors of the most-cited books in the field; we do not have commonly accepted measures of book citation activity, as we do for scholarly articles. But it is striking that a number of the authors referenced in the article either did not produce books chosen for inclusion in the study, or did not produce books at all.
Possibly I have misunderstood the methodology behind this study; the article deserves a closer reading. In any case, the results of the research showed no significant difference in circulation (an imperfect but reasonable proxy for scholarly use) between book titles selected by traditional means — bibliographer recommendation, approval plans, faculty request — and those selected on the basis of citation analysis of the related journal literature. The conclusion drawn from this is that citation-based book selection methods are at least as good as traditional methods and may provide a useful means of verifying that locally held collections are representative of the core scholarship in a given discipline.
It strikes me that a particular challenge in implementing this format-hopping (perhaps one could say ‘zoonotic’) selection method is that the abbreviated and uncontrolled form of author names most often used in the journal literature makes it very difficult to identify related book titles. Is the D.A. Waldman who publishes in the Academy of Management Review the same person as David Andrew Waldman, author of The power of 360 feedback : how to leverage performance evaluations for top productivity (1998)? One would guess so, since the disciplinary affiliation is the same. But he might instead be the D.A. Waldman who writes on applied optics and published a technical report on photo-sensitivity in Azotobacter agile in 1954. If so, his research interests have drifted considerably over the course of his career.
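The Waldman example can be made concrete with a small matching heuristic. The sketch below is an assumption of mine, not anything proposed in the article: it checks only whether a journal byline's surname and initials are consistent with a fuller book-author heading, which is a necessary but far-from-sufficient condition for identity.

```python
# Illustrative heuristic for comparing an abbreviated journal byline
# (e.g. "Waldman, D.A.") with a fuller book-author heading
# (e.g. "Waldman, David Andrew"). A sketch only: real disambiguation
# would also need affiliation, subject, and date evidence.
def initials_compatible(journal_name, book_name):
    """True if surnames match and the journal initials are consistent
    with the book author's forenames."""
    j_surname, _, j_rest = journal_name.partition(",")
    b_surname, _, b_rest = book_name.partition(",")
    if j_surname.strip().lower() != b_surname.strip().lower():
        return False
    j_initials = [c.lower() for c in j_rest if c.isalpha()]
    b_initials = [word[0].lower() for word in b_rest.split()]
    return j_initials == b_initials[:len(j_initials)]
```

Note that "Waldman, D.A." is compatible with David Andrew Waldman, but equally with a hypothetical Donald Albert Waldman — which is exactly the ambiguity at issue: initials alone cannot tell the management scholar from the applied-optics researcher.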
As noted in the recently issued Networking Names report (mentioned here), disambiguating journal author names, and associating them with the more tightly controlled identities of book authors, is a tricky business. Until that changes, the prospects for a “universal and standardized method” (Enger 2009: 110) of academic book selection based on journal citation analysis seem slim at best. Still, this article has got me thinking: perhaps use is, in the end, the best and most important measure of scholarly impact.
*Enger, K. B. (2009). Using citation analysis to develop core book collections in academic libraries. Library & Information Science Research, 31(2), 107-112.
Constance Malpas was Research Scientist at OCLC. Her work at OCLC focused on data-driven analysis of library collections and services, with a special emphasis on strategic planning and managing institutional change. She has a particular interest in the organization of knowledge and research practices in the sciences.