The phrase orphan works recalls the world of Oliver Twist, so perhaps it’s appropriate that JISC has just announced the publication of a UK report looking at the ‘orphan problem’ in UK libraries, museums and archives: In from the Cold: An assessment of the scope of ‘Orphan Works’ and its impact on the delivery of services to the public
Access to over 50 million items held in trust by publicly funded agencies such as libraries, museums, archives and universities is being blocked online by current copyright laws. ‘In from the Cold’, a report by the Strategic Content Alliance and the Collections Trust, shows that millions of so-called ‘orphan works’ – photographs, recordings, texts and other ephemera from the last 100 years – risk becoming invisible because rights holders are not known or easy to trace.
The report was commissioned to assess the scale and impact of ‘orphan works’ on public service delivery.
The issues for libraries are balanced with those affecting museums and archives, reflecting the report’s joint authorship. The Strategic Content Alliance (SCA) consists of a range of UK public digital content providers (JISC, the British Library, the National Health Service, the BBC, the Museums, Libraries & Archives Council and Becta, in its own words the government agency leading the national drive to ensure the effective and innovative use of technology throughout learning). The SCA aims to build a common information environment where users of publicly funded e-content can gain best value from the investment that has been made, by reducing the barriers that currently inhibit access, use and re-use of online content. It has been joined in this report by The Collections Trust, formerly known as the Museums Documentation Association. The scale of the problem is explicitly quantified in the Executive Summary, with a number of startling figures (the emphases are mine):
2. The mid-range estimates put the total number of Orphan Works, represented in our sample of 503 responses to the online survey, at a total of in excess of 13 million.
3. Individual estimates suggest that there are single organisations in the survey sample that hold in excess of 7.5 million Orphan Works. If we include even a few of these extreme examples in our calculations, it appears likely that this sample of 503 organisations could represent volumes of Orphan Works well in excess of 50 million.
4. Extrapolated across UK museums and galleries, the number of Orphan Works can conservatively be estimated at 25 million, although this figure is likely to be much higher.
9. Organisations spent on average less than half of one day tracing rights for each Orphan Work. Therefore it would take in the region of 6 million days effort to trace the rights holders for the 13 million works represented in our on-line survey.
(The SCA blog translates the 6 million days of effort into 16,000 years).
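As a quick sanity check on those figures (my own back-of-envelope sketch, not from the report), the 6 million days of effort squares with the survey totals and the SCA blog’s 16,000-year translation:

```python
# Back-of-envelope check of the report's rights-tracing estimates.
# Figures taken from the report: ~13 million orphan works in the survey sample,
# and "in the region of 6 million days effort" to trace their rights holders.
works = 13_000_000          # orphan works represented in the online survey
total_days = 6_000_000      # report's estimate of total rights-tracing effort

days_per_work = total_days / works   # ~0.46, i.e. "less than half of one day" per work
total_years = total_days / 365       # ~16,400 years, rounded down to 16,000 by the SCA blog

print(f"{days_per_work:.2f} days per work; {total_years:,.0f} years in total")
```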
This report sits interestingly alongside US discussions on orphan works which are focused on the Google Books Settlement (whose impact for libraries is described by Ricky Erway here). These have recently been somewhat polarised by Robert Darnton’s essay in the New York Review of Books in February of this year, and are well elucidated in Peter Brantley’s excellent post The Orphan Monopoly which succinctly expresses the nub of the problem:
The essential problem is that the settlement parties have a vested interest in maintaining a monopoly over access to orphan books.
But whatever the dangers may be, the Google solution does at least address scale. As Tim O’Reilly wrote in January:
It is true, perhaps, in the narrow sense, that no other party will be able to do a mass digitization project on the scale of Google’s – but that was already true. The barrier has always been the willingness to spend a lot of money for little return; the settlement doesn’t change that.
Which brings us back to the UK report, where clearly scale has been revealed as a very troubling problem. And when we turn to museum objects and archives, there is no Google ready and waiting to take on the digitisation. What strikes me is the near impossibility of this problem being addressed by the current institutionally-based configuration of the memory community. In the podcast which JISC have released to accompany the report, its author, Naomi Korn, describes the practical options available to institutions, which are principally to be aware of the need to consider rights when digitising materials, and to use standard rights agreements wordings available as templates from the SCA. Doubtless this is valuable advice, but given the scale of the problem it is clearly not going to do anything about the millions of orphan works already unavailable, and probably not a great deal about the millions more which are turning into orphans year by year. The report itself points to this failure, because of our institutional configuration, to address scale with scale, by pouring away public money duplicatively and redundantly on ‘due diligence’ compliance (which, incidentally, the Google Settlement has rendered unnecessary), within institutions, and by ducking our responsibility towards current content:
scarce public sector resources are being used up on complex and unreliable ‘due diligence’ compliance … There are also suggestions that often works are selected for digitisation based on the fact that they do not pose any copyright issues, thus creating a black hole of 20th century content.
So digitisation of content forces the scale question to meet the public accountability question more familiar in the realm of open access to research materials. Ricky and Jen touched upon it in Shifting Gears, which took Google Books as an inspiration for the special collections community to emulate:
Through Google Books, the Open Content Alliance, and similar efforts, book collections are flying off the shelves and finding their way to users in digital form. In a world where it is increasingly felt that if it’s not online it doesn’t exist, we need to make sure that our users are exposed to the wealth of information in special collections.
We are explicitly picking up the copyright question in our new Introduce Balance in Rights Management project, led by Ricky, around which there was a very useful update session at our Annual Partners’ Meeting in Boston last week. The scale problem demands collaborative responses. The Strategic Content Alliance is itself a model of collaboration to tackle it, but much more substantial machinery is required. The report is right to call for policy change to help with the task (e.g. by distinguishing commercially from non-commercially motivated copyright protection). But it may have to be bolder still, in requiring national and international collaboration. Digitisation of publicly held content is a massive problem, and institutional responses are not enough.