University reputation and ranking – another way that Europe is not like the US

View of Union Square during the Research Library Partnership meeting (via Roy Tennant, Instagram)

The OCLC Research Library Partnership Rep, Rank & Role meeting was held in San Francisco, California on 3-4 June 2015. It focused on the library’s contribution to university ranking and researcher reputation. A distinguished group of speakers provided a mix of perspectives. You can check out their slides (and videos) here. They give a good sense of the conference, but a few of us have decided to report here on hangingtogether.org on some of the major themes that shaped the interests and interaction. The other posts will follow soon.

One of our meeting goals was to have these presentations spark discussion among attendees about how the library might advance university goals around reputation, assessment and recording of research. We had a nicely varied attendance that included a contingent from outside the USA. This trans-national dimension also characterized our speakers.

Most importantly, this perspective from outside the US shaped discussions and proved one of the biggest differentiators of focus among libraries contemplating their role in managing the information generated by, and related to, the research process.

If you operate in a country with a national research assessment regime, such as those implemented in the UK, the Netherlands and Australia, you think differently about the reputation and ranking challenge, you feel differently about the ways in which research outputs get judged qualitatively, and you emphasize investment in services that respond to these regimes and judgments. And that’s not the way peers in the US think about those same matters. See Keith Webster’s presentation (slides, video) for a nice timeline of the emergence of assessment and his thoughtful take on the impact of global ranking, the effects on resource discovery and some of the implications for librarians.

[N.B. If you are unfamiliar with these national research assessment exercises you might want to look through this report:
MacColl, John. 2010. Research Assessment and the Role of the Library. Report produced by OCLC Research. Published online. (.pdf: 276K/13 pp.).

Granted, it is a few years old, but it provides a good introduction to what might be an unfamiliar process (see pages 5-8) and catalogs some of the impacts for libraries.]

Where national assessment regimes exist, they have been implemented to guide, and in some countries dictate, the award of research funding from the national government agencies that support university-based research. The tie-in to funding changes the conversation. In the US there is concern about faculty motivation to participate in activities that contribute to university reputation and rank. Services are designed to deliver benefits to researchers. Successful participation is driven by offering a personal benefit – a bibliography suitable for website publication, advice about how to increase the readership or citation frequency of a faculty member’s work, introductions to other researchers with similar interests for collaboration and networking, etc. See Ginny Steel’s presentation (slides, video soon), which emphasizes the faculty-centered design of the UCLA reputation management system. Contrast that with Wouter Gerritsma’s view (slides, video soon) of his work at Wageningen University in the Netherlands, which began with data and moved on to service provision.

Where research assessment exercises rule there is no question about faculty motivation. It is a requirement. It determines personal reputation, shapes professional contribution and feeds into the local reward system of the home institution. It creates and sustains the more intense interest in bibliometrics and altmetrics that characterizes discussion of reputation and rank in these countries.

It also creates a much franker interest in and discussion of institutional rank. Institutions outside the US are much more likely to have set a public goal of climbing to a particular level in one or more of the global university ranking schemes (THE, QS World, US News, etc.). Once you begin granular assessment of individual research output, it seems easier to be honest about the way that rolls up into institutional striving and ambition. See Jan Wilkinson’s presentation (slides, video soon) about the University of Manchester’s ambitions and her candid assessment of what it takes to move an institution’s ranking. This nicely complemented Jiro Kikuryo’s presentation (slides, video soon) about the challenges of getting non-English language research properly appreciated and linking university ranking ambitions to broader societal values and goals.

Some of the conference discussion made the point that the US is on the same trajectory, only behind in adopting these assessment and ranking objectives. The market penetration of the support systems and tools is deeper outside the US, but a similar pattern of take-up may unfold in the US over the next few years.

There were at least two areas where those with assessment regimes and those without shared similar concerns. One was the balance between compliance and service goals. Whether driven by mandates (like those proceeding from the unfunded OSTP pronouncements in the US) or by funding allocations (like those dictated by the UK’s Research Excellence Framework), librarians worried that their support roles would result in them being viewed as ‘instruments of compliance’ by faculty and research staff. This is at odds with the long history of the library as a service and support organization within the academy. There was some feeling that a convergence of the US faculty-oriented service development with the expertise outside the US in implementation and support of assessment tools would be the combination that allows librarians to be viewed as partners at the individual faculty level and as important contributors to university administrative ambitions. See David Seaman’s presentation (slides, video soon) for a candid view of the struggle to reconcile these conflicting goals while developing a new research and reputation management system at Dartmouth.

Another area of shared concern, independent of assessment, was how to round out the view of research beyond the STEM disciplines. There are significant challenges in measuring the research contributions of the humanities and social sciences. The metrics that have emerged around STEM disciplines don’t fit these fields, and consequently they are undervalued in both university rankings and assessment judgments. There was a desire to bring library expertise to bear, even while understanding the systemic and cultural complexity of this challenge. See Catherine Mitchell’s nuanced presentation (slides, video soon) about the pushback from the humanities as she led the California Digital Library’s attempt to introduce a research information management system.

In short, all discussions about reputation and ranking need to be predicated on the presence or absence of national assessment regimes and should take into account the need to offer library services that provide individual benefit, contribute to institutional ambitions, and reflect the full range of impact that research has for the university community, the nation’s citizenry and global society.