Recently in the London Review of Books Marina Warner explained why she quit her post at the University of Essex. I found it a shocking essay. Warner was pushed out because she is chairing the Booker Prize committee this year, in addition to delivering guest lectures at Oxford. (If those lectures are anything like Managing Monsters (1994), they will probably change the world.) Warner’s work – as a creative writer, scholar, public intellectual – does not count in the mechanics of assessment, which include both publishing and teaching.
Warner opens her LRB essay with the library at Essex as the emblem of the university: “New brutalism! Rarely seen any so pure.” I don’t want to make light of the beautifully written article, which traces changes over time in the illustrious and radical reputation of the University of Essex since it was founded in the 1960s. Originally Warner had enthusiastic support, which later waned when a new vice-chancellor muttered, “These REF stars – they don’t earn their keep.”
Warner’s is just the latest high-profile critique of interference in research by funders and university administrators. The funniest I’ve read is a “modest proposal” memo mandating university-wide use of research assessment tools with acronyms such as Stupid, Crap, Mess, Waste, Pablum, and Screwed.
I have been following researchers’ opinions about the management of research information ever since John MacColl synthesized research assessment regimes in five countries. This past spring John sent me an opinion piece from the Times Higher in which the author, himself a REF coordinator, despairs about the damage years of assessment have done to women’s academic careers, to morale, to creativity, and to education and research. During my visits to the worlds of digital scholarship, I invariably hear about the failure of assessment regimes for the humanities, the digital humanities, digital scholarship, and e-research.
I figure it is high time I post another excerpt from my synthesis of user studies about managing research information. I prepared most of this post a year ago, when I was pondering the fraught politics (and ethics) of libraries’ contributions to research information management writ large.
So here goes:
Alignment with the mission of one’s institution is not a black-and-white exercise. I believe that research libraries must think carefully about how they choose to ally themselves with their own researchers, academic administrations, and national funding agencies. If we are calibrating our library services – for new knowledge and higher education – to rankings and league tables, I certainly hope that we are reading the journals that publish those rankings, especially articles written by the same academics we want to support.
An editorial blog post for the Chronicle of Higher Education is titled, provocatively, “A Machiavellian Guide to Destroying Public Universities in 12 Easy Steps.” The fifth step is assessment regimes:
(5) Put into place various “oversight instruments,” such as quality-assessment exercises, “outcome matrices,” or auditing mechanisms, to assure “transparency” and “accountability” to “stakeholders.” You might try using research-assessment exercises such as those in Britain or Australia, or cheaper and cruder measures like Texas A&M’s, by simply publishing a cost/benefit analysis of faculty members.
This reminded me of a similar cri de coeur a few years ago in the New York Review of Books. In “The Grim Threat to British Universities,” Simon Head warned about applying a (US) business-style “bureaucratic control” – performance indicators, metrics, measurement of outputs, and so on – to scholarship, especially science. Researchers often feel that administrators have no idea what research entails, and often with good reason. For example, Warner’s executive dean for the humanities is a “young lawyer specialising in housing.”
A consistent theme in user studies with researchers is the sizeable gulf between the support researchers actually use and want and the kinds of services that libraries and universities offer.[1] A typical case study in the life sciences, for example, concludes that there is a “significant gap” between researchers’ use of information and the strategies of funders and policy-makers.[2] In particular, researchers consider libraries unlikely to play a desirable role in supporting research.[3]
Our own RIN and OCLC Research studies interviewing researchers reveal that libraries’ offers to manage research information seem “orthogonal, and at worst irrelevant,” to the needs of researchers.[4] One trend that stands out is oversight: researchers require autonomy, so procedures mandated from the top down are mostly perceived as intrusive and unnecessary.
Librarians and administrators need to respect networks of trust among researchers. In particular, researchers may resist advice from the Research Office or any other internal agency removed from the colleagues they work with.[5]
Researchers feel that their job is to do research. They begrudge any time spent on activities that serve administrative purposes.[6] A heavy-handed approach to participation in research information management is unpopular and can backfire.[7] In some cases, mandates and requirements – such as national assessment regimes – become disincentives for researchers to improve methodologies or share their research.[8]
On occasion researchers have pushed back against such regimes. In 2011, for example, Australian scholars successfully quashed a journal-ranking system used for assessment. The academics objected that such a flawed “blunt instrument” ranked journals by crude criteria rather than by professional esteem, and was then used to evaluate individuals.[9]
Warner – like many humanists I have met – calls for a remedy that research libraries could provide. “By the end of 2013, all the evidence had been gathered, and the inventory of our publications fought over, recast and finally sent off to be assessed by panels of peers… A scholar whose works are left out of the tally is marked for assisted dying.” Librarians can improve information about those “works left out,” or get the attributions right.
But assisted dying? Yikes. At our June meeting in Amsterdam on Supporting Change/Changing Support, Paul Wouters gave a thoughtful warning about the “seduction” of measurements, such as the trendy quantified self. He cited citation analysis as an example of a measure that is necessarily backward-looking and disadvantages some domains: “You can’t see everything in publications.” Wouters pointed out that assessment is a bit “close to the skin” for academics, and that libraries might not want to “torment their researchers” by inadvertently making an honest mistake that could influence or harm careers.
Just because we can does not mean we should; we might consider whether, when, and how. The politics of choosing to participate in expertise profiling and research assessment regimes have potential consequences for research libraries that are trying to win the trust of their faculty members.
References beyond embedded links:
[1] pp. 4, 70 in Sheridan Brown and Alma Swan (i.e. Key Perspectives). 2007. Researchers’ use of academic libraries and their services. London: RIN (Research Information Network)/CURL (Consortium of Research Libraries). http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/researchers-use-academic-libraries-and-their-serv
[2] pp. 5-6 in Robin Williams and Graham Pryor. 2009. Patterns of information use and exchange: case studies of researchers in the life sciences. London: RIN and the British Library. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/patterns-information-use-and-exchange-case-studie
[3] Brown and Swan 2007, p. 4.
[4] p. 6 in John MacColl and Michael Jubb. 2011. Supporting research: environments, administration and libraries. Dublin, Ohio: OCLC Research and London: Research Information Network (RIN). http://www.oclc.org/research/publications/library/2011/2011-10.pdf
[5] p. 10 in Research Information Network (RIN). 2010. Research support services in selected UK universities. London: RIN. http://www.rin.ac.uk/system/files/attachments/Research_Support_Services_in_UK_Universities_report_for_screen.pdf
[6] MacColl and Jubb 2011, pp. 3-4.
[7] pp. 12-13 in Martin Feijen. 2011. What researchers want: A literature study of researchers’ requirements with respect to storage and access to research data. Utrecht: SURFfoundation. https://www.surf.nl/binaries/content/assets/surf/en/knowledgebase/2011/What_researchers_want.pdf. p. 56 in Elizabeth Jordan, Andrew Hunter, Becky Seale, Andrew Thomas and Ruth Levitt. 2011. Information handling in collaborative research: an exploration of five case studies. London: RIN and the British Library. http://www.rin.ac.uk/our-work/using-and-accessing-information-resources/collaborative-research-case-studies. MacColl and Jubb 2011, p. 6.
[8] Williams and Pryor 2009, p. 53.
[9] Jennifer Howard. 2011 (June 1). “Journal-ranking system gets dumped after scholars complain.” Chronicle of Higher Education. http://chronicle.com/article/Journal-Ranking-System-Gets/127737/
Jennifer Schaffner was a Program Officer with the OCLC Research Library Partnership from 2007 to 2015, working with the rare books, manuscripts, and archives communities.