The RLG Programs Descriptive Metadata Practices Survey results are now out! The report is divided into two documents:
- RLG Programs' interpretation of the results and the issues we identified to pursue in future projects. (13 pages)
- The data supplement with the charts and graphs generated from the 89 survey responses and the survey instrument. (46 pages)
I blogged about our preliminary analysis of responses to this survey conducted in July and August among 18 RLG partners, selected because they had "multiple metadata creation centers" on campus that included libraries, archives, and museums and had some interaction among them. Our objective was to gain a baseline understanding of current descriptive metadata practices and dependencies, the first project in our program to change metadata creation processes.
The data is reported in a series of charts and graphs that are open to interpretation. RLG Programs offers its own interpretation in the prefatory narrative, flagging questions for followup and identifying opportunities to simplify and integrate metadata practices in support of network level services.
Although we saw some expected variations in practice across libraries, archives, and museums, we were struck by the high levels of customization and local tool development, the limited extent to which tools and practices are, or can be, shared (both within and across institutions), the lack of confidence institutions have in the effectiveness of their tools, and the disconnect between their interest in creating metadata to serve their primary audiences and their inability to reach that audience within the most commonly used discovery systems (such as Google and Yahoo).
I previously blogged about one aspect of the survey: the percentage of respondents' collections estimated to be inadequately described, and unlikely to be adequately described without additional resources, funding, or both.
Some of the other results we commented on:
- Although MARC is the most widely used data structure standard, Qualified and Unqualified Dublin Core (when combined) come in a close second, followed by Encoded Archival Description.
- The Cataloging Cultural Objects data content standard had a surprisingly strong showing, given its relative newness.
- The single most common response to the question of the tools used to create, edit, and store metadata descriptions was a customized tool, cited by 69% of all respondents.
- Whatever the evaluation criteria used to measure the effectiveness of one's metadata creation tools, only a quarter of respondents thought their tools were effective, a third thought they were partly effective, and a third didn't know.
- About half the respondents build and maintain one or more thesauri.
- Almost all respondents noted that their staff works with other units within their library, archive, or museum, as well as with people elsewhere in the institution. There is less sharing when it comes to technical infrastructure, discovery environments, descriptive strategies, and metadata creation guidelines.
We identified the following goals for future projects in our Renovating Descriptive and Organizing Practices agenda:
- Maximize resources to generate as much metadata as possible, as quickly as possible.
- Optimize descriptive data with Web information hub targets in mind.
- Share tools and descriptive strategies as much as possible.
- Leverage terminologies from as many sources as possible.
Take a look yourself at the charts and graphs and let us know what you think!