Researcher Identifiers

April 14th, 2014 by Karen

Researcher identifiers have been the focus of the OCLC Research “Registering Researchers in Authority Files” Task Group since October 2012 (see my blog post soon after we started). We recently issued our draft Registering Researchers in Authority Files report for community review and feedback.

One of the challenges in this work is that there are different stakeholders with an interest in researcher identifiers. The task group identified seven: researcher, funder, university administrator, journalist, librarian, identity management systems, and aggregators (which include publishers).  We developed use case scenarios, functional requirements and recommendations targeted at each of these stakeholders.

The task group members also represent different perspectives: ORCID and ISNI Board members, people representing the perspectives of publishers, a CRIS (Current Research Information System), VIAF, VIVO and librarians including those who create authority records and contribute them to the LC/NACO Authority File.  Task group members come from The Netherlands, the United Kingdom and the United States.

During the review period we hope to receive feedback from different stakeholders with different perspectives. We’ve already heard a basic question, from a researcher: “What is an authority file?” Book authors may not even realize they have an “LCCN” identifier from an authority record a librarian created as part of cataloging their works, nor that they also have a VIAF identifier as a result. The report’s introduction attempts to differentiate authority records and identifiers. Task group member Micah Altman (MIT), in the presentation he and I did for the spring Coalition for Networked Information membership meeting (“Integrating Researcher Identifiers into University and Library Systems”), created the following table comparing the two:

Research Identifiers Compared to Name Authorities

Please send your comments on our draft report to me. We plan to publish the final report in June, so feedback received by 30 April would be most timely.

Related posts:

Authority for Establishing Metadata Practice

April 7th, 2014 by Karen

A metadata flow diagram
That was the topic discussed recently by OCLC Research Library Partners metadata managers. Carlen Ruschoff (U. Maryland), Philip Schreur (Stanford) and Joan Swanekamp (Yale) had initiated the topic, observing that libraries are taking responsibility for more and more types of metadata (descriptive, preservation, technical, etc.) and its representation in various formats (MARC, MODS, RDF). Responsibility for establishing metadata practice can be spread across different divisions in the library. Practices developed in relative isolation may have some unforeseen outcomes for discovery in awkward juxtapositions.

The discussion revolved around these themes:

Various kinds of splits create varying metadata needs. Splits identified included digital library vs. traditional; MARC vs. non-MARC; projects vs. ongoing operations. Joan Swanekamp noted that many of Yale’s early digitization projects involved special collections which started with their own metadata schemes geared towards specific audiences. But the metadata doesn’t merge well with the rest of the library’s metadata, and it’s been a huge amount of work to try to coordinate these different needs. There is a common belief in controlled vocabularies even when the purposes are different. The granularity of different digital projects makes it difficult to normalize the metadata. Coordination issues include using data elements in different ways, not using some basic elements, and lack of context. Repository managers try to mandate as little as possible to minimize the barriers to contributions. As a result, there’s a lot of user-generated metadata that would be difficult to integrate with catalog data.

Metadata requirements vary due to different systems, metadata standards, and communities’ needs. Some digital assets are described using MODS (Metadata Object Description Schema) or VRA. Graphic arts departments need to find images based on subject headings, which may result in what seems to be redundant data. There’s some tension between specific-area and general needs. Curators for specific communities such as music and divinity have a deeper sense of what their respective communities need than what’s needed in a centralized database. Subject headings that rely on keyword or locally devised schemes can clash with the LC subject headings used centrally. These differences and inconsistencies have become more visible as libraries have implemented discovery layers that retrieve metadata from across all their resources.

Some sort of “metadata coordination group” is common.  Some libraries have created metadata coordination units (under various names), or are planning to. Such oversight teams provide a clearing house to talk about depth, quality and coverage of metadata. An alternative approach is to “embed” metadata specialists in other units that create metadata such as digital library projects, serving as consultants. After UCLA worked on ten different digital projects, it developed a checklist that could be used across projects: Guidelines for Descriptive Metadata for the UCLA Digital Library Program (2012). It takes time to understand different perspectives of metadata: what is important and relevant to curators’ respective professional standards.  It’s important to start the discussions about expectations and requirements at the beginning of a project.

We can leverage identifiers to link names across metadata silos. As names are a key element regardless of which metadata schema is used, we discussed the possibility of using one or more identifier systems to link them together. Some institutions encourage their researchers to use the Elsevier expert system. Some are experimenting with or considering using identifiers such as ORCID (Open Researcher and Contributor ID), ISNI (International Standard Name Identifier) or VIAF (Virtual International Authority File). VIAF is receiving an increasing number of LC/NACO Authority File records that include other identifiers in the 024 field.


Implications of BIBFRAME Authorities

April 3rd, 2014 by Karen


That was the topic discussed recently by OCLC Research Library Partners metadata managers, initiated by Philip Schreur of Stanford. We were fortunate that several staff from the Library of Congress involved with the Bibliographic Framework Initiative (aka BIBFRAME) participated.

Excerpts from “On BIBFRAME Authority,” dated 15 August 2013, served as background, specifically the sections on the “lightweight abstraction layer” (2.1–2.3) and the “direct” approach (3). During the discussion, Kevin Ford of LC shared the link to the relatively recent BIBFRAME Authorities draft specification, dated 7 March 2014 and now out for public review.

The discussion revolved around these themes:

The role of identifiers for names vis-à-vis authority records. Ray Denenberg of LC noted that when the initiative first began, the framers searched unsuccessfully for an alternate name for “authorities” as it could be confused with replicating the LC/NACO or local authority files that follow a certain set of cataloging rules and are constantly updated and maintained. BIBFRAME is meant to operate in a linked data environment, giving everyone a lot of flexibility. The “BIBFRAME Authority” is defined as a class that can be used across files. It could be simply an identifier to an authoritative source, and people could link to multiple sources as needed. The identifier link could also be used to grab more information from the “real” authority record.

Concern about sharing authority work done in local “light abstraction layers.” It was posited that Program for Cooperative Cataloging libraries, and others, could share local authorities work and expose it as linked data. This is one of the objectives for the Stanford-Cornell-Harvard Linked Data for Libraries experiment. They plan to use a type of shared light abstraction model, where they may share URIs for names rather than each institution creating their own. Concerns remain about accessing, indexing and displaying shared local authorities across multiple institutions, and the risk of outages that could hamper access. Although libraries could develop a pared down approach to creating local authority data (which may not be much more than an identifier) and then have programs that pull in more information from other sources, some feared that data would only be created locally and not shared and libraries would not ingest richer data available from elsewhere.

Alternate approaches to authority work. Given the limited staff libraries have, fewer have the resources to contribute to the LC/NACO authority file as much as they have in the past. The lightweight model could serve as a place for identifiers and labels, and allow libraries to quickly create identifiers for local researchers prominent in the journal literature but not reflected in national authority files. Using identifiers instead of worrying about validating content—doing something quick locally that you can’t afford to do at a national level—is appealing. Alternatively, a library could bring in information from multiple authority sources—each serving a different community—noting the equivalents and providing an appropriate label.  BIBFRAME Authority supports both approaches. Other sources could include those favored by publishers rather than libraries, such as ORCID (Open Researcher and Contributor ID) or ISNI (International Standard Name Identifier), or by other communities such as those using EAC-CPF (Encoded Archival Context – Corporate bodies, Persons and Families). This interest overlaps the OCLC Research activity on Registering Researchers in Authority Files.

Concern about the future role of the LC/NACO Authority File. Some are concerned that if libraries choose to rely on identifiers to register their scholars or bring in information from other sources, fewer would contribute to the LC/NACO Authority File. Will we lose the great database catalogers have built cooperatively over the past few decades? Some would still prefer to have one place for all authority data and do all their authority work there. LC staff noted that a program could be run to ingest authority data created in these local (or consortial) abstraction layers into the LC/NACO Authority File.

Issues around ingesting authority data. We already have the technology to implement Web “triggers” to launch programs that pull in information from targeted sources and write the information to our own databases. OCLC Research recently held a TAI-CHI webinar demonstrating xEAC and RAMP (Remixing Archival Metadata Project), two tools that do just that. There are other challenges such as evaluating the trustworthiness of the sources, selecting which ones are most appropriate for your own context and reconciling multiple identifiers representing the same entity. Some are looking for third-party reconciliation services that would include links to other identifiers.

Those interested in the continuing discussion of BIBFRAME may wish to subscribe to the BIBFRAME listserv.




Happy Extraterrestrial Abduction Day!

March 19th, 2014 by Roy

Just in time for Extraterrestrial Abduction Day, commemorated by Earthlings everywhere on 20 March, we bring you the list of the top ten most popular library items with the subject heading “Alien abduction”:

  1. Close Encounters of the Third Kind (1977 movie)
  2. The True Meaning of Smekday, by Adam Rex
  3. Abduction: Human Encounters With Aliens, by John E. Mack
  4. Abducted: How People Come to Believe They Were Kidnapped By Aliens, by Susan A. Clancy
  5. Transformation: The Breakthrough, by Whitley Strieber
  6. Little Green Men, by Christopher Buckley
  7. Close encounters of the fourth kind : alien abduction, UFOs, and the conference at M.I.T., by C.D.B. Bryan
  8. The Light-Years Beneath My Feet, by Alan Dean Foster
  9. The Fourth Kind (2009 movie)
  10. Skyline (2010 movie)

For even more suggestions on library items one can borrow to get in the mood, see what FictionFinder suggests.

But in all of our excitement celebrating Extraterrestrial Abduction Day let’s not forget the most important item of all: How to Defend Yourself Against Alien Abduction, by Ann Druffel (see cover). I mean, whether you are returned to Earth or not, the best outcome of any attempt by aliens to abscond with you is to not be abducted in the first place. At least, that’s what I’m thinking.


New Scholars’ Contributions to VIAF: Syriac!

March 11th, 2014 by Karen

Syriac scripts added to VIAF cluster for Ephraem

We have just loaded into the Virtual International Authority File (VIAF) the second set of personal names from a scholarly resource, the Syriac Reference Portal hosted by Vanderbilt University.

Syriac is a dialect of Aramaic, developed in the kingdom of Mesopotamia in the first century A.D. It flourished in the Persian and Roman Empires, and Syriac texts comprise the third largest surviving corpus of literature from the Roman Empire, after Greek and Latin. The Syriac Reference Portal is a collaborative digital reference project funded by the National Endowment for the Humanities and the Andrew W. Mellon Foundation involving partners at Vanderbilt University, Princeton University and other affiliate institutions.

This addition represents the first time we see Syriac scripts (there are variants) as both the “preferred form” and under “alternate name forms” in a VIAF record. The Syriac Reference Portal also contributes additional Arabic and other scripts as alternate names, but selects a Syriac script form as a preferred form for people who wrote or were written about in Syriac.

The Syriac names join the Roman and Greek personal names we loaded from the Perseus Catalog last November and blogged about here as part of our Scholars’ Contributions to VIAF activity. Together they demonstrate how scholarly contributions can enrich existing VIAF clusters—generally comprising contributions from national libraries and other library agencies—by adding script forms of names that previously lacked them, as well as adding new names. Scholars benefit from using VIAF URIs as persistent identifiers for the names in their own databases, linked data applications and scholarly discourse, both to disambiguate names in multinational collaborations and to disseminate scholarly research on names beyond scholars’ own communities.

Adding these scholarly files demonstrates the benefits of tapping scholarly expertise to enhance and add to name authorities represented in VIAF. We look forward to more such enhancements from other scholars’ contributions.


Another Step on the Road to Developer Support Nirvana

March 10th, 2014 by Roy

Today we released a brand spanking new web site for library coders. It has some cool features including a new API Explorer that will make it a lot easier for software developers to understand and use our application program interfaces (APIs). But seen from a broader perspective, this is just another way station on a journey we began some years ago to enable our member libraries to have full machine access to our services.

When I joined OCLC in May 2007, I immediately began collaborating with my colleagues in charge of these efforts, as I knew many library developers and had been active in the Code4Lib community. As a part of this effort, we flew in some well-known library coders to our headquarters in Dublin, OH, to pick their brains about the kinds of things they would like to see us do, which helped us to form a strategy for ongoing engagement.

From there we hired Karen Coombs, a well-known library coder from the University of Houston, to lead our engagement efforts. Under Karen’s leadership we engaged with the community in a series of events we began calling hackathons, although we soon changed to calling them “mashathons” in response to the pejorative nature the term “hack” had in Europe. In those events we brought together library developers to spend a day or two of intense learning and open development. The output of those events began populating our Gallery of applications and code libraries.

Karen also dug into the difficult, but very necessary, work of more thoroughly and consistently documenting our APIs. Her yeoman work in this regard provided a more consistent, easier-to-understand set of documentation that we continue to build upon and improve.

When Karen was moved into another area of work within OCLC to better use her awesome coding ability, Shelley Hostetler was hired to carry on this important work.

I think you will find this latest web site release even easier to understand and navigate. One essential difference is that it is much easier to get started, since we have better integrated information about, and access to, API key requesting and management where keys are required (some services do not require a key).

Although this new site offers a great deal to developers who want to know how to use our growing array of web services, we recognize it is but another step along the road to developer nirvana. So check it out and let us know how we can continue to improve. As always, we’re listening!



OCLC Exposes Bibliographic Works Data as Linked Open Data

February 25th, 2014 by Roy

Today in Cape Town, South Africa, at the OCLC Europe, Middle East and Africa Regional Council (EMEARC) Meeting, my colleagues Richard Wallis and Ted Fons made an announcement that should make all library coders and data geeks leap to their feet. I certainly did, and I work here. However, viewed from our perspective this is simply another step along a road that we set out on some time ago. More on that later, but first to the big news:

  1. We have established “work records” for bibliographic records in WorldCat, which bring together the sometimes numerous manifestations of a work into one logical entity.
  2. We are exposing these records as linked open data on the web, with permanent identifiers that can be used by other linked data aggregations.
  3. We have provided a human readable interface to these records, to enable and encourage understanding and use of this data.

Let me dive into these one by one, although the link above to Richard’s post also has some great explanations.

One of the issues we have as librarians is to somehow relate all the various printings of a work. Think of Treasure Island, for example. Can you imagine how many times that has been published? It hardly seems helpful, from an end-user perspective, to display screen upon screen of different versions of the same work. Therefore, identifying which works are related can have a tremendous beneficial impact on the end user experience. We have now done that important work.

But we also want to enable others to use these associations in powerful new ways by exposing the data as linked (and linkable) open data on the web. To do this, we are exposing a variety of serializations of this data: Turtle, N-Triples, JSON-LD, RDF/XML, and HTML. When looking at the data, please keep in mind that this is an evolutionary process. There are possible linkages not yet enabled in the data that will be later. See Richard’s blog post for more information on this. The license that applies to this is the Open Data Commons Attribution license, or ODC-BY.
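For readers who want to experiment, a client would typically pick one of those serializations via HTTP content negotiation. Here is a minimal sketch; the work URI is illustrative, and which Accept types the service honours is an assumption to verify against Richard’s post:

```python
import urllib.request

# Standard MIME types for the serializations listed above. Whether the
# service honours every one of them is an assumption, not a guarantee.
ACCEPT_TYPES = {
    "turtle": "text/turtle",
    "ntriples": "application/n-triples",
    "jsonld": "application/ld+json",
    "rdfxml": "application/rdf+xml",
}

def build_request(uri, fmt="jsonld"):
    """Build (but do not send) a content-negotiated request for linked data."""
    return urllib.request.Request(uri, headers={"Accept": ACCEPT_TYPES[fmt]})

# Illustrative work URI, not a confirmed identifier.
req = build_request("http://worldcat.org/entity/work/id/12477503", "turtle")
print(req.get_header("Accept"))  # text/turtle
```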

Although it is expected that the true use of this data will be by software applications and other linked data aggregations, we also believe it is important for humans to be able to see the data in an easy-to-understand way. Thus we are providing the data through a Linked Data Explorer interface. You will likely be wondering how you can obtain a work ID for a specific item, which Richard explains:

How do I get a work id for my resources? – Today, there is one way. If you use the OCLC xISBN or xOCLCNum web services, you will find a work id as part of the data returned (e.g. owi=”owi12477503”). By stripping off the ‘owi’ you can easily create the relevant work URI.
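That stripping step takes only a few lines of Python. Note that the base URI pattern used here is my assumption, since the example URI itself is not reproduced in this excerpt:

```python
def work_uri(owi, base="http://worldcat.org/entity/work/id/"):
    """Turn an xISBN/xOCLCNum work id like 'owi12477503' into a work URI.

    The base URI pattern is an assumption; confirm it against the
    WorldCat linked data documentation.
    """
    if not owi.startswith("owi"):
        raise ValueError("expected an id of the form 'owiNNN...': %r" % owi)
    return base + owi[len("owi"):]

print(work_uri("owi12477503"))
# http://worldcat.org/entity/work/id/12477503
```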

In a very few weeks, once the next update to the WorldCat linked data has been processed, you will find that links to works will be embedded in the already published linked data. For example, a work link will appear in the data for OCLC number 53474380.


As you can see, although today is a major milestone in our work to make the WorldCat data aggregation more useful and usable to libraries and others around the world, there is more to come. We have more work to do to make it as usable as we want it to be, and we fully expect there will be things we will need to fix or change along the way. And we want you to tell us what those things are. But today is a big day in our ongoing journey to a future of actionable data on the web for all to use.


Graph-ted on: Dewey and Wikipedia

February 24th, 2014 by Max


Since entering the library world, I’ve been fascinated by the substream of uber-geeks who are the Dewey Decimal Classification (DDC) nerds. It was the enthusiastic and technical-leaning banter of Editor-in-Chief Michael Panzer that enticed me. When I learned that the DDC is designed to be able to describe all possible concepts, and also that it is not a strict hierarchy but really a web of complex relations, I was intrigued by the foresight of the approach. I see the DDC as a best effort 100 years before anyone would utter “crowdsourcing” – which is how its modern analog, the Wikipedia category system, works. Using graph theory, we develop a new way to look at, and compute with, the DDC, and then connect it to Wikipedia categories.

(In this post we assume some knowledge of Graph Theory, so if you need a quick primer, it’s at the end: Graph Theory tl;dr.)

Representing the Dewey Decimal Classification in Graph Theory

In order to represent the DDC using graph theory we have to translate its components into graph-theoretic terms. That means:

  • Classifications become the “nodes”.
  • Relationships between Classifications become the “edges.”
  • Relation colourings:
    • Notational hierarchy – Red
    • See References – Green
    • Class elsewhere – Blue

We are actually defining a directed graph because the relations we have are not in general reciprocal. That is, if a classification A is below B in hierarchy, or A has a “see also” reference to B, B does not necessarily have that reference to A. (Although there are some reciprocal “class elsewhere” relations, for example 391 and 646.3 in the 4-hop graph below.) Note also that colourings – the mathematical parlance – indicate that relationships are different without assigning the relationship a numerical value.
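A directed, edge-coloured graph like this can be modelled with a plain adjacency map. In the toy sketch below, every edge is invented for illustration except the reciprocal class-elsewhere pair 391/646.3 mentioned in the text:

```python
# Nodes are DDC classification numbers; each directed edge carries one of
# the three relation "colours". Edges here are illustrative, not real DDC
# data, apart from the reciprocal 391 <-> 646.3 class-elsewhere pair.
graph = {
    "394.1": [("394", "hierarchy"), ("641", "see_reference")],
    "394":   [("395", "class_elsewhere")],
    "391":   [("646.3", "class_elsewhere")],
    "646.3": [("391", "class_elsewhere")],
}

def edges_from(node):
    """Out-edges of a node, as (target, colour) pairs."""
    return graph.get(node, [])

# Direction matters: 394.1 points at 394, but 394 does not point back...
print(edges_from("394.1"))
# ...while the class-elsewhere relation between 391 and 646.3 is reciprocal.
print(edges_from("391"), edges_from("646.3"))
```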


This has all been quite abstract so far, so let’s look at something concrete. We’ll pick classification 394.1, whose caption is the enjoyable “Eating, drinking; using drugs”. The distance to other nodes is how many edges we travel from our source. We will look at all the connections that are n hops from 394.1. Below is the evolution of visualizations at distances 2, 3, 4, and 5. By 5 hops you can see that the graph begins to be too complicated to represent visually in a single image.






Weighting the Graph

One of the powerful aspects of rendering the classification system as a graph is that we can determine the shortest distance between two nodes. However, as mentioned, our colourings have no intrinsic value, so in order to find the shortest paths we have to assign weights to colours. This is variable, but for the rest of the examples I am using the settings: notational hierarchy = 0.5, see reference = 1, class elsewhere = 2. That means that our shortest path between 394.1 and 341.33 (“Diplomatic Law”) is 394.1 -> 394 -> 395 -> 327.2 -> 341.33, which has a total value of 5.5.
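That computation is a standard weighted shortest-path search. Here is a sketch using Dijkstra’s algorithm over the colour weights; the nodes and resulting path are from the example above, but the individual edge colourings are my guesses, chosen only so that the path costs 5.5:

```python
import heapq

# Colour -> weight, matching the settings in the text.
WEIGHTS = {"hierarchy": 0.5, "see_reference": 1, "class_elsewhere": 2}

# Directed edges as (target, colour). Edge colourings are illustrative.
graph = {
    "394.1": [("394", "hierarchy")],
    "394":   [("395", "class_elsewhere")],
    "395":   [("327.2", "class_elsewhere")],
    "327.2": [("341.33", "see_reference")],
}

def shortest_path(source, target):
    """Dijkstra over colour-weighted edges; returns (cost, path)."""
    queue = [(0.0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, colour in graph.get(node, []):
            heapq.heappush(queue, (cost + WEIGHTS[colour], nxt, path + [nxt]))
    return float("inf"), []

cost, path = shortest_path("394.1", "341.33")
print(cost, path)  # 5.5 ['394.1', '394', '395', '327.2', '341.33']
```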

Transforming Wikipedia Categories

Consider a Wikipedia category. It can contain many Wikipedia articles, each of which has zero or more citations of books. Some of those book citations include the ISBN, which is precisely what the OCLC Classify API is hungry for. Classify will transform an ISBN or OCLC number into a DDC classification, based on how it is most popularly cataloged in WorldCat. That means we can arrive at a set of classifications associated with the category. From this set it would be instructive if we could somehow average them into one representative classification. In graph theory terms this is known as finding the centre, and there are several ways to approach this problem. The method we employ here is betweenness centrality. That is, we calculate the weighted shortest paths between all the classifications associated with the category, and find the nodes that were touched most across all those shortest paths. This is rather abstruse without some examples. We run the algorithm on some categories, and below are the top 5 “between” nodes of:

  • Category:Cooking techniques:
    1. 641 – Food and Drink
    2. 641.5 – Cooking
    3. 641.3 – Food
    4. 613.2 – Dietetics
    5. 641.815 – Bread and bread-like foods
  • Category:Feminism
    1. 305.42 – Social role and status of Women
    2. 305.43 – Women by occupation
    3. 305.9 – People by occupation and miscellaneous social statuses
    4. 305 – Social Groups
    5. 305.4 – Women

This shows there is some basis for believing that this technique works. The centres of the classifications we find are around the same topic that organizes the category. We usually find a strand or run in the system that hovers around the main ideas. One caveat, though, is that for both of these categories only about 25% of Wikipedia articles actually contain an ISBN citation. Yet articles that do include at least one ISBN citation average 2.7 citations per article. With that in mind, we continue by looking at a set of articles that is not organized by topic and is also more ISBN-rich. What better category than Featured Articles?

  • Category:Featured articles
    1. 930 – History of the ancient world to c. 499
    2. 940 – History of Europe
    3. 361 – Social problems and welfare in general
    4. 800 – Literature and Rhetoric
    5. 362 – Social welfare problems & services
    6. 580 – Plants (Botany)

Featured articles, of which there were ~4,000, gave rise to 25,000 ISBNs, which related to 17,000 classifications. There are only approximately 50,000 standard classifications. This at first would seem impressive coverage, but the distribution of the coverage in the graph was not calculated, so it’s still difficult to say how diverse the featured article set is. Computing on this set was very intensive; in fact, just computing the centre of these 17,000 classifications took 9 hours parallelized over 16 cores. These topics seem to be more abstract, and less interrelated. Without being able to infer some sort of connection, next we run the algorithm not on a category, but on the list of articles that your blogger has ever edited (307 articles).

  • Contributions of User:Maximilianklein
    1. 930 – History of the ancient world to c. 499
    2. 800 – Literature and Rhetoric
    3. 302.23 -  Media (Means of communication)
    4. 910.45 – Ocean travel and seafaring adventures
    5. 361 – Social problems and welfare in general

Uncovered here is a latent passion for seafaring adventures! This is probably owing to the fact that travel articles may have cited travel books with ISBNs that were disproportionately focused on sea travel. More importantly, what arises here is that several of the central classifications describing the edit history are similar to the featured-articles results. This would seem to suggest that there are some very central nodes that most connections are channelled through when they lie in disparate regions of the graph. The implication is that the problem of classifying an arbitrary group of objects remains difficult! Of course these results are only for one possible set of values for our weights dictionary, in which these top-level channels would be more “expensive” to hop through. So future research should try to calibrate these values to produce the most varied set of central nodes.

This work is available as an IPython notebook on GitHub, and comments are welcome.
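To make the betweenness-centre computation described above concrete, here is a toy sketch. The miniature graph is hand-made for illustration (real runs would use the classifications Classify returns for a category’s ISBNs); the method is the one described in the post: weighted shortest paths between all pairs, then a tally of which nodes those paths touch most:

```python
import heapq
from collections import Counter

WEIGHTS = {"hierarchy": 0.5, "see_reference": 1, "class_elsewhere": 2}

# Illustrative miniature graph: three classifications that can only reach
# each other through 641, which should therefore emerge as the centre.
graph = {
    "641":   [("641.5", "hierarchy"), ("641.3", "hierarchy"), ("640", "hierarchy")],
    "641.5": [("641", "hierarchy")],
    "641.3": [("641", "hierarchy")],
    "640":   [("641", "hierarchy")],
}

def shortest_path(source, target):
    """Weighted shortest path as a node list (Dijkstra)."""
    queue = [(0.0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, colour in graph.get(node, []):
            heapq.heappush(queue, (cost + WEIGHTS[colour], nxt, path + [nxt]))
    return []

def centre(nodes):
    """Tally node appearances on all pairwise shortest paths; the
    most-touched node plays the role of the 'centre' described above."""
    touches = Counter()
    for a in nodes:
        for b in nodes:
            if a != b:
                touches.update(shortest_path(a, b))
    return touches.most_common()

print(centre(["641.5", "641.3", "640"])[0][0])  # 641
```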

Graph Theory? tl;dr

We are not talking about the colloquial meaning of “graph”, but rather the mathematical concept.

A Graph is a collection of:

  • “Nodes” or “Vertices” – the objects
    • Here letters A through F
  • “Edges” – relations (directed or undirected)
    • Here the arrows between letters

Not talking about this kind of graph. (Image By Inductiveload [Public domain], via Wikimedia Commons)

An example of a “Graph Theory” Graph (By David W. at de.wikipedia (Original text : David W.) [Public domain], from Wikimedia Commons)



Top 10 Love Stories in Libraries

February 14th, 2014 by Jim
from The Visual Thesaurus


Folks may already have seen the list of top love stories we put out yesterday for St. Valentine’s day. It was pulled together very quickly by JD Shipengrover and Diane Vizine-Goetz.

This one is quite nice and we are pleased with the response.

It is programmatically generated and based on a fiction subset of WorldCat (which includes movies); it is FRBRized, so it is at the work level and consolidates editions etc.; the items are identified by genre data in the MARC records; and it is ranked by holdings.

We are going to do more lists in the future.


Library Engagement with Digital Humanities

February 12th, 2014 by Ricky

When our colleagues at OCLC Research Library Partner institutions were asked for topics that they’d like to see addressed, they kept saying, “Do something about Digital Humanities,” but they didn’t say what needed doing. When we said at staff meetings, “We should do something about Digital Humanities,” we were asked what exactly and demurred. What did we have to contribute? Then Jen and I started looking around and realized that a lot has been written by librarians about DH and libraries, and librarians often go to meetings to talk to other librarians about DH. We decided to try to steep ourselves in DH from the researchers’ point of view and see what might come of it. After lots of reading, several conferences, and a couple of focus group sessions, we’d learned a lot. We synthesized what we heard and shared it with several colleagues who thought it would be useful to library directors who are not already in the thick of DH.

Humanités Numériques [crop] Calvinius CC BY-SA 3.0


We extrapolated from what we heard from the DH scholars to what we thought it could mean for library directors. The result was a spectrum of possible ways for libraries to engage with DH. It ranged from making sure DH scholars knew about services you were already offering that could be helpful to them, to full-bore immersion in a DH center, and with lots of other possibilities in between. There are ideas about offering training, involving DH scholars in digitization initiatives, helping to make their work discoverable, enabling them to enhance metadata, preserving the outcomes of their work, and making sure that it’s all sustainable. Whether or not a library has one staff member, several, or a center devoted to DH is largely a local issue. That may be based on how much is wanted by the researchers and how much is doable or affordable — but these things will change over time. Where a library begins on the spectrum is not at all where they might end up.

From our conclusion:

Humanities scholars have always been a central constituency for research libraries. The digital humanities constitute an evolving approach to research, and directors must support this work as a component of the university’s research mission. Libraries offer many useful services to digital humanists. Where the need is clear and DH scholars are receptive, libraries can offer various dedicated services to further DH efforts. In some cases, a full-blown DH center may be warranted…

…No matter which approaches to supporting the digital humanities you opt to take, keep in mind that what we call “The Digital Humanities” today will soon be considered “The Humanities.” Supporting DH scholarship is not much different than supporting digital scholarship in any discipline. Increasingly, digital scholarship is simply scholarship.

Since our essay “Does Every Research Library Need a Digital Humanities Center?” was released last week, there has been a lot of important discussion on Twitter and on various blogs. The conversation has gone in many useful directions. The ACRL blog dh+lib has offered to continue the conversation.
