OCLC-LIBER Open Science Discussion on Metrics and Rewards

What is the role of metrics and rewards in an ideal open science ecosystem? What are the challenges in getting there? What would collective action look like? The fourth session of the OCLC-LIBER Open Science Discussion series, which brought together a group of participants from universities and research institutes across ten different countries, focused on these questions.

Ideal future state


The group envisioned an ideal future open science ecosystem in which the current academic system of metrics and rewards has been overhauled. The new system was described as transparent and open, involving researchers in the design of the assessment process, incentivizing research behaviour that attaches greater importance to openness and integrity, and “thinking more of the researcher as a whole person”. In other words, the new system puts metrics in a human context, looking at the researcher’s profile, network and relationships, development trajectory, and narrative. This requires a different taxonomy for research assessment and more attention to qualitative measurements. Should openness become the new focus of the reward system, or should the system continue to focus on ranking research quality and excellence? There was agreement that in an Open Science ecosystem openness should be central and that what needs to be measured is “the way open impacts society”.

Doing away with the “perverse incentives” embedded in the current ecosystem, such as the “pressure to publish” and the associated use of the journal impact factor, was seen as the way forward – although one participant cautioned against throwing the baby out with the bathwater, because the main problem is not the measure itself but the way it is used. This led to a discussion of the current system of individualistic rewards linked to tenure, and to the question: at what level should metrics and rewards be applied? The dilemma of individual versus aggregate-level metrics, and of how to analyze the data, is a methodological one. On the whole, the group felt that research metrics are best applied at the aggregate level, not the individual level.

Responsible metrics and rewards requires “thinking more of the researcher as a whole person”

Obstacles and challenges


It was clear to the participants that achieving this ideal future OS ecosystem for metrics and rewards won’t happen overnight; the new ecosystem is slow to get off the ground. One participant mentioned a recent survey on open research conducted at their university in the UK, which revealed that some researchers still do not know what Open Science is. There is no common understanding of OS across regions of the world, sighed another discussant. Someone observed that compliance figures in repositories did not make her optimistic about the pace of adoption of Open Science. On the other hand, there are also signs that the current system is causing researchers considerable stress and is reaching its limits. One participant mentioned a survey, conducted at a US university, which revealed that most respondents were burned out by research assessment practices and faced mental health risks. The group identified many challenges and obstacles to effecting change. Culture change was one of them, but it was not among the top three. The challenges that scored highest, and that were chosen for further discussion, were:

  1. Uncertainty among researchers on what Open Science actually means to them
  2. Many open science activities don’t directly result in articles (e.g., open methods, open infrastructure, open data). These other contributions matter too, but they are not measured or supported and often rely on unpaid and unrecognized labor.
  3. The big divide between Open Science advocacy and the day-to-day research work of researchers: ‘this is yet another thing we need to do’

Collective Action

With these three challenges in mind, what can libraries collectively do to raise awareness about Open Science and help effect systemic change in terms of metrics, incentives, rewards and funding? The group clearly saw an educational opportunity for libraries. Research libraries could act as a neutral party on campus, creating teachable moments on open research and explaining what it could mean for different disciplines. Someone suggested starting with early career researchers, or even targeting undergraduate students. The library is not always the natural go-to partner for researchers, however. Some participants thought it more effective to engage higher up, with campus administrators (the Research Office, Faculty Senate, etc.), in conversations around research impact and areas of strength – as a way to raise awareness and demonstrate what can be done differently. Others suggested creating OS metrics together with those who conduct research assessments and those who decide on assessment criteria. These stakeholders – in particular the funding agencies – have shown how influential they are and how rapidly they can effect change.

In the same vein, “ranking the rankers” was viewed as an effective way to drive change and help bridge the divide. This approach aims to disrupt the hierarchy of stakeholders in research evaluation by targeting the top of the pyramid – the League Tables or World University Rankings – and adding a layer that evaluates the rankers themselves. This is something the INORMS Research Evaluation Working Group have suggested and are currently working on.

“We need to make open easy – and we are currently not making it easy.”

Finally, another effective way to engage researchers was discussed: supporting their workflow in sharing outputs. This was also seen as an opportunity to relieve researchers of the burden of reporting their outputs multiple times, in different information systems. “We need to make things easy – and we are currently not making it easy.” So what would ideal or responsible metrics look like in day-to-day research workflows?

Participants pointed out that the new metrics & rewards system is not only about Open Access publications: it is about valuing and recognizing all contributions to research and scholarly activity. A main issue is standardisation, which is needed in the evaluation system at the national, regional and international levels. How aligned are systems across geographies? How different are the levels of adoption across nations? To help start the conversation at the institutional and national levels, the European Working Group on Rewards under Open Science published a report in 2017. The ‘Hong Kong’ principles for assessing researchers were also mentioned in this context. There is an effort to standardize the format of the CV – listing not just publications but all sorts of research outputs, such as peer review contributions. Using open standards and infrastructures can help make OS easier and also support the greater system-to-system integration required to reduce researcher burden. Greater systems integration also allows access to more complete data – and this in turn enables the analytics that are the driving force behind the new metrics & rewards system.

About the OCLC-LIBER Open Science Discussion Series

The discussion series is a joint initiative of OCLC Research and LIBER (the Association of European Research Libraries). It focusses on the seven topics identified in the LIBER Open Science Roadmap, and aims to guide research libraries in envisioning the support infrastructure for Open Science (OS) and their roles at local, national, and global levels. The series runs from 24 September through 5 November.

The kick-off webinar opened the forum for discussion and exploration and introduced the theme and its topics. Summaries of all seven topical small group discussions will be published on the OCLC Research blog, Hanging Together. So far these are: (1) Scholarly Publishing, (2) FAIR Research Data, and (3) Research Infrastructures and the European Open Science Cloud.

Join us! We invite all members of the open science community to join our organizations for the closing round-up webinar on 5 November, where we will synthesize and share the findings from all seven group discussions. Register today.