A couple of weeks ago the UCLA Film & Television Archive hosted “Reimagining the Archive,” a three-day conference that brought together archivists, scholars, artists, creators of digital humanities projects, and assorted others to hear about a wide-ranging array of digital initiatives. While there was a certain focus on the moving-image realm, the papers ranged far beyond it. A few talks that have stuck with me:
Keynoter Rick Prelinger, speaking after the opening reception, was his usual feisty self. He called for film archivists to become activists in finding ways to lessen the intellectual property stranglehold on access to and re-use of moving-image content, in part by reducing the emphasis on commercially-produced content in favor of “ephemeral film” (his term). He also issued a call to defend the power of the original image, unpolluted by enhancements like sound tracks and voiceovers.
A panel on digital scholarship included some good stuff. The principal investigators on the Sacred Samaritan Texts project (digitized Torah scrolls) at Michigan State modeled a nice approach: working in close concert with members of multiple user communities, who helped them understand the documents and the ways in which both scholars and religious practitioners would approach them as texts and as artifacts.
Fast-forwarding from A.D. 500 to 21st-century art took us to Adam Lauder, a digital scholarship librarian at York University (Canada) who is building IAINBAXTER&raisonnE. He seeks to reinterpret the concept of the catalogue raisonné by using crowdsourcing to create a virtual exhibition, curation, and research environment. Lauder offered up the phrase “ephemeral curating,” which I kind of like. (Hmm, “ephemeral” emerges as a theme.)
Howard Besser focused on projects that are using visual segmentation to enable more granular analysis of moving-image content. His closing “four things to prepare for” make a pithy summation: users will want ever-smaller units of granularity and will expect segmentation from us, geo-referencing will be low-hanging fruit, crowdsourcing helps us do more for less, and metadata must be created during production.
In a panel on new tools and platforms, Sherri Wasserman from Thinc, an incredibly cool NYC design firm, demoed several projects that use personal mobile devices to connect people to content. She described archival materials as “powerful objects in space without personality” and showed techniques for bringing memory objects to life. Like Howard, she brought in geo-referencing multiple times. Her advice: find ways to place memories within the spaces with which your archives are connected.
INA, the French national audiovisual institute, was the principal cosponsor of the conference, and Thomas Drugeon gave a fascinating overview of its work archiving websites. INA and the BnF share legal responsibility for preserving the French web, with INA focused on sites with “audiovisual content.” The effort went live in February 2009, and in less than two years it has harvested 33 terabytes of content (that’s after compression). (There’s a glimpse of what “at scale” is going to mean when everybody really tackles born-digital.) They currently crawl 7,200 sites, and Drugeon emphasized that they can preserve only “traces” via periodic sampling; archives that preserve websites must make sure researchers understand this. Speaking of researchers, he said the web archive will “never” be accessible openly, only via designated libraries. He couldn’t say why the law specifies this. (Something other than intellectual property rights? Er, maybe “never” is justified …)
Well, there was a lot more, but you get the idea. On Sunday morning Greg Lukow, chief of Motion Pictures (etc.) at LC, gave a whiz-bang ppt on the new Packard Campus of the National Audio-Visual Conservation Center built out in Culpeper, VA (yes, that Packard; they’re also building a facility for UCLA’s film and TV archive). Y’all go take a tour. Looks pretty fabulous.