[This is the third in a short series on our 2014 OCLC Research Library Partnership meeting, Libraries and Research: Supporting Change/Changing Support. You can read the first and second posts and also refer to the event webpage that contains links to slides, videos, photos, and a Storify summary.]
John MacColl (University Librarian at University of St Andrews) [link to video] opened the session, speaking briefly about the UK context to illustrate how libraries are taking up new roles within academia. John presented this terse analysis of the landscape (and I thank him for providing notes!):
- Professionally, we live increasingly in an inside-out environment. But our academic colleagues still require certification and fixity, and their reputation is based on a necessarily conservative world view (tied up with traditional modes of publishing and tenure).
- Business models are in transition. The first phase of transition was from publisher print to publisher digital. We are now in a phase he terms deconstructive, based on a reassessment of the values of scholarly publishing and driven by the high cost of journals.
- There are several reasons for this: among the main ones are the high cost of publisher content and our responsibility as librarians for the sustainability of the scholarly record; another is the emergence of public accountability arguments – the public has paid for this scholarship and has the right to access its outputs.
- What these three new areas of research library activity have in common is the intervention of research funders in the administration of research within universities, although the specifics vary considerably from nation to nation.
John Scally (Director of Library and University Collections, University of Edinburgh) [link to video] added to the conversation, speaking about the role of the research library in research data management (RDM) at the University of Edinburgh. From John’s perspective, the library is a natural place for RDM work to happen: the library has been in the business of managing and curating stuff for a long time, services are at its core, and making content available in different ways is a core responsibility. Starting research data conversations around policy and regulatory compliance is difficult; it’s easier to frame the problem around storage, discovery, and reuse of data. At Edinburgh they tried to frame discussions around how the library can help: how can you be more competitive and do better research? If a researcher comes to the web page about data management plans (say at midnight, the night before a grant proposal is due), that page should do something useful at the moment of need, not direct the researcher to come to the library during the day. Key takeaways:
- Blend RDM into core services; don’t run it as a side business.
- Make sure everyone knows who is leading.
- Make sure the money is there, and you know who is responsible.
- Institutional policy is a baby step along the way; implementation is what matters most.
- RDM and open access are ways of testing (and stressing) your systems and procedures, so don’t ignore fissures and gaps.
There has been an interesting correlation between RDM and the open access repository: since RDM was implemented at Edinburgh, deposits of papers have increased.
Driek Heesakkers (Project Manager at the University of Amsterdam Library) [link to video] told us about RDM at the University of Amsterdam and in the Netherlands. The Dutch landscape differs from others and can be characterized as “bland”: there is not a lot of difference between institutions in terms of research outputs. A rather complicated array of institutions for the humanities, social sciences, health sciences, and so on are all trying to define their roles in RDM. For organizations that are mandated to capture data, it’s vital that they not just show up at the end of the process to scoop up data, but that they be embedded in the environment where the work is happening and where tools are being used. Policy and infrastructure need to be rolled out together. Don’t reinvent the wheel: if there are commercial partners or cloud services that do the work well, that’s all for the good. What’s the role of the library? We are not in the lead on policy, but we help to interpret and implement it, and similarly with technology. The big opportunity is in support: if you have faculty liaisons, you should be using them for data support. Storage is boring but necessary. The market for commercial solutions is developing, which is good news; he’d prefer to buy, not build, when appropriate. This is a time for action; we can’t be wary or cautious.
Switching gears away from RDM, Paul Wouters (Director of the Centre for Science and Technology Studies at the University of Leiden) [link to video] spoke about the role of libraries in research assessment. His organization combines fundamental research with services for institutions and individual researchers. With research becoming increasingly international and interdisciplinary, it’s vital that we develop methods for monitoring novel indicators. Some researchers have become, ironically and paradoxically, fond of assessment (perhaps tied up with the move towards the quantified self?). However, self-assessment can be nerve-wracking and may not return useful information. Managers are also interested in individual assessment because it may help them give feedback. Altmetrics do not correlate closely with citation metrics, and they can vary considerably across disciplines, so it’s important to think about what the various ways of measuring impact actually mean. As an example of other ways of measuring, Paul presented the ACUMEN (Academic Careers Understood through Measurement and Norms) project, which lets researchers take the lead and tell a story supported by evidence from their portfolios. An ACUMEN profile includes a career narrative supported by expertise, outputs, and influence. Giving a stronger voice to researchers is more positive than leaving them uninvolved in, misunderstanding, or resenting the indicators applied to them.
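Paul’s point that altmetrics and citations rank work differently can be illustrated with a quick rank-correlation check. The sketch below is purely illustrative: the counts are hypothetical placeholders, and Spearman correlation (via scipy) is just one way such a comparison might be made.

```python
# Illustrative only: hypothetical citation and altmetric counts for ten papers.
# A low Spearman correlation would suggest the two indicators rank papers differently.
from scipy.stats import spearmanr

citations = [120, 45, 3, 67, 0, 15, 230, 8, 52, 19]   # placeholder citation counts
altmetrics = [5, 310, 42, 12, 88, 7, 20, 150, 3, 60]  # placeholder altmetric scores

rho, p_value = spearmanr(citations, altmetrics)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

In practice any such comparison would also need to account for discipline, publication age, and the particular altmetric source, which is part of Paul’s caution about interpreting these indicators.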
Micah Altman (Director of Research, Massachusetts Institute of Technology Libraries) [link to video] discussed the importance of researcher identification and the need to uniquely identify researchers in order to manage the scholarly record and to support assessment. Micah spoke in part as a member of the Registering Researchers Task Group, led by OCLC Research colleague Karen Smith-Yoshimura (their report, Registering Researchers in Authority Files, is now available). The group explored motivations, the state of the practice, observations, and recommendations. The problem is that there is more stuff, more digital content, and more people (the average number of authors on journal articles has gone up, in some cases way up). To put it mildly, disambiguating names is not a small problem. A researcher may have one or more identifiers, which may not link to one another and may come from different sources. The task group looked at the problem not only from the perspective of the library but also from the perspectives of other stakeholders (publishers, universities, researchers, etc.). Current approaches to managing name identifiers result in some very complicated (and not terribly efficient) workflows. Normalizing and regularizing this data has big potential payoffs in terms of reducing errors in analytics and creating a broad range of new (and more accurate) measures. Fortunately, with growing recognition of the benefits, interoperability between identifier systems is increasing, as is the practice of assigning identifiers to researchers. One of the missing pieces is identifying not only researchers but also their roles in a given piece of work (a project Micah is working on with other collaborators). What steps can libraries take? Prepare to engage! Work across stakeholder communities; demand more than PDFs from publishers; and prepare for more (and different) types of measurement.
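To make the disambiguation problem concrete, here is a minimal sketch of one common approach: treating author records as the same researcher whenever they share any identifier (ORCID, ISNI, Scopus Author ID, and so on) and merging them into clusters. This is not the task group’s method, just an illustration of why cross-linked identifiers simplify the workflows Micah described; the records and identifiers below are hypothetical placeholders.

```python
# Illustrative sketch: cluster author records that share any identifier.
# All names and identifier values below are made up for the example.
from collections import defaultdict

records = [
    {"name": "J. Smith",    "ids": {"orcid:0000-0000-0000-0001"}},
    {"name": "Jane Smith",  "ids": {"orcid:0000-0000-0000-0001", "scopus:111111"}},
    {"name": "Smith, Jane", "ids": {"scopus:111111", "isni:0000-0001-2345-678X"}},
    {"name": "John Smith",  "ids": {"scopus:222222"}},  # different person, no shared id
]

def cluster_by_shared_identifier(records):
    """Group records transitively: any shared identifier links two records."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # identifier -> first record index that carried it
    for i, rec in enumerate(records):
        for ident in rec["ids"]:
            if ident in seen:
                union(i, seen[ident])
            else:
                seen[ident] = i

    clusters = defaultdict(list)
    for i in range(len(records)):
        clusters[find(i)].append(records[i]["name"])
    return list(clusters.values())

print(cluster_by_shared_identifier(records))
# [['J. Smith', 'Jane Smith', 'Smith, Jane'], ['John Smith']]
```

Real workflows are messier, of course: identifiers conflict, names change, and sources disagree, which is exactly why the task group emphasizes interoperability between identifier systems.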
Paolo Manghi (Researcher at the Institute of Information Science and Technologies “A. Faedo” (ISTI), Italian National Research Council) [link to video] talked about the data infrastructures that support access to the evolving scholarly record and the requirements for different data sources (repositories, CRIS systems, data archives, software archives, etc.) to interoperate. Paolo spoke as a researcher, but also as the technical manager of the EU-funded OpenAIRE project. The project started in 2009 out of a strong open access push from the European Commission. It initially collected metadata and information about access to research outputs; the scope was later expanded to include not only articles but also other research outputs. The work combines human input with technical infrastructure: OpenAIRE relies on input from repositories, also uses software developed elsewhere, and funnels information via 32 national open access desks. The project has developed numerous guidelines (for metadata, for data repositories, and for CRIS managers exporting data to be compatible with OpenAIRE). It fills three roles: a help desk for national agencies, a portal (linking publications to research data and information about researchers), and a repository for data and articles that are otherwise homeless (Zenodo). Collecting all this information in one place allows for advanced processing such as deduplication, identifying relationships, and demonstrating productivity, compliance, and geographic distribution. OpenAIRE interacts with other repository networks, such as SHARE (US) and ANDS (Australia). The forthcoming Horizon 2020 framework will pose significant challenges for researchers and service providers because it places a larger emphasis on access to non-published outputs.
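Aggregators like OpenAIRE typically gather repository metadata over OAI-PMH, the standard harvesting protocol that the OpenAIRE guidelines build on. The following is a minimal sketch of what one page-through of a Dublin Core harvest might look like; the endpoint URL is a placeholder, and a production harvester would add error handling, incremental (date-stamped) harvesting, and OpenAIRE-specific metadata checks.

```python
# Minimal OAI-PMH harvest sketch (the endpoint is a placeholder, not a real repository).
# Follows resumption tokens to page through ListRecords responses in Dublin Core.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"
ENDPOINT = "https://repository.example.org/oai"  # hypothetical OAI-PMH endpoint

def harvest_titles(endpoint):
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        root = ET.fromstring(requests.get(endpoint, params=params, timeout=30).content)
        for record in root.iter(f"{OAI}record"):
            identifier = record.find(f".//{OAI}identifier")
            title = record.find(f".//{DC}title")
            yield (identifier.text if identifier is not None else None,
                   title.text if title is not None else None)
        token = root.find(f".//{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

for oai_id, title in harvest_titles(ENDPOINT):
    print(oai_id, "-", title)
```

Deduplication, relationship mining, and compliance reporting of the kind Paolo described all happen downstream of this kind of harvest, once records from many repositories sit in one place.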
The session was followed by a panel discussion.
I’ll conclude tomorrow with a final posting, wrapping up this series.
Merrilee Proffitt is Senior Manager for the OCLC RLP. She provides community development skills and expert support to institutions within the OCLC Research Library Partnership.