I am proud to announce this new desktop tool, which is definitely the coolest software I’ve worked on this year. It solves several problems we faced in our submission workflow, and we hope it will dramatically speed up processing for large collections with custom metadata. The features break down into three loosely overlapping categories: capture, arrangement, and description.
Here are some screenshots of the interface:
This screenshot shows the project tree to the left and a MODS editor on the right. The user is editing the MODS elements for a single folder called “TUCASI”. The attributes of the selected MODS name element are editable in the properties view in the lower right quadrant.
The most novel feature and the one I most want to highlight is batch metadata crosswalks. The screenshot above shows a crosswalk editor, which consists of a canvas and a palette of widgets. The end user can construct a pretty sophisticated mapping of custom metadata to MODS by “visual programming”. By dropping widgets on the canvas and linking them together, they define how a field becomes an element. Presently the editor only supports tab-separated metadata sources, but as time allows we plan to extend the feature to support any delimited file and XML sources.
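To make the idea concrete, here is a minimal sketch of what a crosswalk produces, written in Python rather than the workbench’s visual editor. The column names (“Title”, “Creator”) and the single-row TSV are hypothetical stand-ins for a real custom metadata file; the actual editor builds mappings like this graphically.

```python
import csv
import io
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)

def crosswalk_row(row):
    """Map one TSV row to a minimal MODS record (title + creator name)."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = row["Title"]
    name = ET.SubElement(mods, f"{{{MODS_NS}}}name")
    ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = row["Creator"]
    return mods

# A hypothetical two-column tab-separated source.
tsv = "Title\tCreator\nField Notes 1968\tDoe, Jane\n"
records = [crosswalk_row(r)
           for r in csv.DictReader(io.StringIO(tsv), delimiter="\t")]
print(ET.tostring(records[0], encoding="unicode"))
```

Each widget on the canvas corresponds roughly to one step here: a source column, a MODS element, and the link between them.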
Whenever a crosswalk definition is saved, it is used to generate or regenerate a set of MODS records. These records can be automatically associated with files and folders through a matcher widget on the canvas, which works as long as file and folder names appear in your custom metadata. Otherwise you can drag and drop a MODS record onto the appropriate item in the arrangement.
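The matcher’s job is simple to sketch. This is not the workbench’s implementation, just an illustration of the idea, with placeholder record values and made-up paths: pair each generated record with a staged file by name, and leave anything unmatched for manual drag-and-drop.

```python
from pathlib import PurePosixPath

def match_records(records_by_filename, staged_paths):
    """Associate generated MODS records with staged files by filename.

    records_by_filename: dict keyed on the filename column from the
    custom metadata; values are the generated records.
    """
    matches, unmatched = {}, []
    for path in staged_paths:
        name = PurePosixPath(path).name
        if name in records_by_filename:
            matches[path] = records_by_filename[name]
        else:
            unmatched.append(path)  # falls back to drag-and-drop in the UI
    return matches, unmatched

records = {"img_001.tif": "rec1", "img_002.tif": "rec2"}
matches, unmatched = match_records(
    records, ["/proj/TUCASI/img_001.tif", "/proj/TUCASI/notes.txt"])
```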
This visual programming and automation of crosswalks saves a lot of valuable time for curators and programmers, who would otherwise have to write a custom script for each new metadata format. Since we collect data from disparate parts of the university, each collection may arrive with a unique descriptive metadata format, often manually created spreadsheets or discipline-specific XML. It is simply not resource-efficient to write custom scripts for most incoming collections. The crosswalk feature lets us migrate thousands of descriptive records at a time and link them to data objects without new software development.
The last feature to mention today is staging of files. I designed the workbench to process large numbers of files and folders in one submission. However, repository ingest happens via a web interface, which is not the most reliable way to transmit thousands of large files, let alone a single SIP containing that many. So we needed to stage files in advance. The diagram above shows how data flows from incoming data to staging, archival, and access storage. Individual users have accounts in a staging area within our iRODS grid. Files placed there by the workbench are readable by Fedora at ingest time, when they are copied into archival storage.
This approach comes with several advantages:
- There are no data transmission failures at submission time.
- The transmission of files to staging can be incremental, controlled, and “paranoid”, with a checksum comparison.
- The workbench can inform users of staging issues as they arise, so they can be addressed before submission.
- Files are staged in the background while you work on arrangement and description.
- There are efficiencies to be gained at ingest time, when copying from a staging grid location to an archival grid location.
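The “paranoid” checksum comparison amounts to something like the following sketch. The function names and the throwaway demo files are my own for illustration; the workbench performs the equivalent check against the copy in the iRODS staging area.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def md5sum(path, chunk_size=1 << 20):
    """Stream the file through MD5 so large files never load fully into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_staged(local_path, staged_path):
    """The 'paranoid' check: re-read both copies and compare checksums."""
    return md5sum(local_path) == md5sum(staged_path)

# Demo with a throwaway file standing in for incoming data.
tmp = Path(tempfile.mkdtemp())
src = tmp / "incoming.bin"
src.write_bytes(b"x" * 3_000_000)
dst = tmp / "staged.bin"
shutil.copy(src, dst)
ok = verify_staged(src, dst)
```

Because the comparison is per-file, a failed or interrupted transfer only forces a retry of the affected files, which is what makes incremental staging practical.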
Some Notes on the Software Technology
The workbench is built upon a considerable pile of open source code and standards, including the following:
- Eclipse Rich Client Platform (RCP)
- Eclipse Modeling Framework (EMF) and Graphical Modeling Framework (GMF)
- METS XML for project definition files and submission files
- MODS XML
- iRODS jargon client libraries
The Eclipse RCP is extensible via the OSGi framework, which means parts of the tool can be made modular and mashable to better fit non-UNC environments. This will require some refactoring that we need to do anyway, but most of the groundwork is already in place with OSGi.
One module that I’d like to see is a way to integrate Google Refine into workflows. This seems like a natural fit for cleaning up custom metadata and normalizing various sources before crosswalks are applied.
Another modular area would be export for submission. The current implementation transforms our internal METS project definition into a submission METS for ingest into the CDR. Needless to say, this submission METS is in a CDR-specific profile. So a natural extension point would be to support other export modules for other repositories.
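As a rough sketch of what such an extension point might look like, here is a hypothetical exporter interface in Python. The class names, the registry, and the trivial “transform” are all invented for illustration; the real workbench is Java/OSGi, and a real CDR exporter applies a full METS profile transform rather than a string edit.

```python
from abc import ABC, abstractmethod

class SubmissionExporter(ABC):
    """Hypothetical extension point: turn the internal METS project
    definition into a repository-specific submission package."""

    @abstractmethod
    def export(self, project_mets: str) -> str: ...

class CdrMetsExporter(SubmissionExporter):
    def export(self, project_mets):
        # A real implementation would apply the CDR-profile transform;
        # here we just tag the root element to show the shape of the API.
        return project_mets.replace("<mets", "<mets PROFILE='CDR'", 1)

# Other repositories would register their own exporter under a new key.
exporters = {"cdr": CdrMetsExporter()}
result = exporters["cdr"].export("<mets xmlns='http://www.loc.gov/METS/'/>")
```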
The BETA software is available for download, experimentation, and use. We cannot provide any support, but we welcome your comments here, or you can contact us directly. Oh yeah, you download and use the software at your own risk. See our download page.