Tuesday, October 18, 2016

Customizing Archivematica's Format Migration Strategies with the Format Policy Registry (FPR)

Over the past couple of weeks we've been exploring how our current normalization strategies (to read them for yourself, see our Format Conversion Strategies for Long-Term Preservation) compare to those in Archivematica. Below you'll find a brief introduction to Archivematica's Format Policy Registry (FPR), an overview of the process we went through to compare our format policies to Archivematica's, and a couple of approaches we've taken to reconcile the differences between the two.

Hope you enjoy it!

Format Policy Registry (FPR)

Located in the "Preservation planning" tab, the Archivematica Format Policy Registry (FPR) is a database which allows users to define format policies for handling file formats.

Formats 


In the FPR, "a 'format' is a record representing one or more related format versions, which are records representing a specific file format" (Format Policy Registry [FPR] documentation). As you can see from the example above, the "Graphics Interchange Format" format is made up of 3 specific versions, "1987a," "1989a," and "Generic gif."

Formats themselves are described this way:
  • Description: Text describing the format, like a name.
  • Version: The version number for that specific format.
  • PRONOM ID: The specific format version’s unique identifier in PRONOM, the UK National Archives’s format registry.
  • Access format? and Preservation format?: This is where you indicate whether a format is suitable for access or preservation purposes, or both, or neither.
Formats also have UUIDs, are enabled or disabled, and have a number of associated actions (which we'll talk about later). They also have a group, "a convenient grouping of related file formats which share common properties" (ibid.), e.g., "Video." All of this is customizable.

Format Policies

In Archivematica, format policies act on formats. Format policies are made up of:
  • Tools: Tools are things like 7Zip, ImageMagick's convert command, ffmpeg, FFprobe, FITS, Ghostscript, Tesseract, MediaInfo, etc., which come packaged with Archivematica.
  • Commands: These are actions that you can take with a tool, e.g., "Transcoding to jpg with convert" or "Transcoding to mp3 with ffmpeg." Commands can be used in one of the following ways:
    • Identification: The process of trying to identify and report the specific file format and version of a digital file.
    • Characterization: The process of collecting information (especially technical information) about a digital file.
    • Normalization: Migrating/transcoding a digital file from an original format to a new file format (for access or preservation purposes).
    • Extraction: The process of extracting digital files from a package format such as ZIP files or disk images.
    • Transcription: The process of performing Optical Character Recognition (OCR) on images of textual material.
    • Verification: The process of validating a digital file produced by another command. Right now these are pretty simple, e.g., checking that the output isn't 0 bytes.
  • Rules: This is where you put it all together and apply a specific command to a specific format, saying something like: "Use the command 'transcoding to tif with convert' on the 'Windows Bitmap' format for 'Preservation' purposes." When browsing the FPR you can actually see how well these policies are working out. In our case, this particular policy has been successful for 2 out of 2 digital files we attempted it on.
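To make that concrete, the body of a normalization command in the FPR is typically just a short bash snippet (or Python script) that uses the FPR's substitution variables for the input and output paths. Here's a rough sketch of what a "transcoding to tif with convert" command might look like; the variable names below follow the pattern used by the stock commands, but double-check them against the FPR documentation before borrowing this:

  # %fileFullName% is replaced with the path to the original file;
  # %outputDirectory%, %prefix%, %fileName% and %postfix% build the output path
  convert "%fileFullName%" +compress "%outputDirectory%%prefix%%fileName%%postfix%.tif"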

The first time a new Archivematica installation is set up, it will register the Archivematica install with the FPR server [1], and pull down the current set of format policies. FPR rules can be updated at any time from within the Preservation Planning tab in Archivematica (and these changes will persist through future upgrades). You also have the option of refreshing your version with the centralized Archivematica FPR server version, if you so choose.

Customizing Archivematica's Format Migration Strategies

What follows is our initial foray into customizing Archivematica's format migration strategies. For a more detailed look at this as well as customizing other aspects of the FPR, you should definitely check out the documentation.

What We Do Now

For some context, we've been normalizing files for quite some time. Because we must contend with thousands of potential file formats, a number of years ago we adopted a three-tier approach to facilitate the preservation and conversion of digital content:
  • Tier 1: Materials produced in sustainable formats will be maintained in their original version.
  • Tier 2: Common “at-risk” formats will be converted to preservation-quality file types to retain important features and functionalities.
  • Tier 3: All other content will receive basic bit-level preservation.

These, by the way, are being incorporated into a more comprehensive Digital Preservation Policy which we hope to share with others in the near future...

Comparing Our Format Migration Strategies to Archivematica's

We decided to make some customizations to Archivematica's FPR because some of our existing policies didn't quite match up with Archivematica's. We discovered this by doing an initial comparison of the FPR with our existing Format Conversion Strategies for Long-Term Preservation.

For a detailed list of all of our findings, please see this spreadsheet. Basically, however, here's how things broke down for the 62 formats in Tiers 1 and 2 that I examined in depth:
  • Formats we recognized as preservation formats that are also Archivematica preservation formats.

Examples: Microsoft Office Open XML formats, OpenDocument formats, TXT, CSV and XML files, WAV files, PNG and JPEG2000 files, etc.

  • Formats we recognized as preservation formats that aren't Archivematica preservation formats but do have a normalization pathway.

Examples: AIFF and MP3 files, and also lots of video: AVI, MOV, MP4.

  • Formats we recognized as preservation formats that aren't Archivematica preservation formats and have no normalization pathway.

These were the most varied, including files belonging to the PDF, Word Processing, Text, Audio, Video, Image, Email and Database groups. Examples: PDF/A files, RTF and TSV files, FLAC and OGG files, TIFF files and SIARD files.

  • Formats we didn't recognize as preservation formats that are Archivematica preservation formats.

These were mostly older Microsoft Office formats. Examples: DOC, PPT and XLS files.

These, by the way, are our most common Tier 2 formats based on an analysis of our already processed digital archives I did for Code4Lib Midwest this year:

As you can see, all but one of the top five Tier 2 formats is one of those older Microsoft Office formats. What can I say? We get a lot of this kind of record!

  • Formats neither we nor Archivematica recognized as preservation formats, where Archivematica's normalization pathway is the same as ours.

Lots of raster images here. Examples: BMP, PCT and TGA files.

  • Formats neither we nor Archivematica recognized as preservation formats, where Archivematica's normalization pathway is not the same as ours.

These all stemmed from a difference in preferred preservation target for normalized video formats. We typically converted these to MP4 files with H.264 encoding, while Archivematica prefers the MKV format. Examples: SWF, FLV and WMV files.

  • Formats neither we nor Archivematica recognized as preservation formats, and for which Archivematica has no normalization pathway at all.

Essentially, these were files we had a normalization pathway for but Archivematica didn't. Examples: Real Audio files, FlashPix Bitmap and Kodak Photo CD Image files, and PostScript and Encapsulated PostScript files.

  • Finally, formats we didn't recognize as preservation formats and that Archivematica doesn't recognize at all.

Examples: EML files and other plain text email formats.

Approaches

To be honest, I was a bit surprised by just how different our local practice was from Archivematica's, considering we both look to the same authorities on this type of thing! These differences led to a number of different approaches to customizing Archivematica's Format Migration Strategies, which I'll briefly detail here.

Do Nothing

For those formats that we agree on, i.e., we both agreed they were preservation formats, or we both agreed they were not preservation formats but shared the same normalization pathway, we didn't do anything! Easy peasy lemon squeezy.

Disable a Normalization Rule, Replace a Format

This we did for formats we recognized as preservation formats that aren't Archivematica preservation formats but that do have a normalization pathway in Archivematica. Basically, we disagreed with the out-of-the-box FPR and we weren't interested in having Archivematica do any normalization on these. First we checked the Library of Congress Sustainability of Digital Formats site to make sure we weren't totally off base...

...we went to the FPR and disabled the normalization rule...

...and verified that we'd done it correctly...

...then searched for the format itself...

...clicked "Replace"...

...and set the format to a Preservation format.




You can also easily verify that Archivematica got the message...
 


Replace a Format

A somewhat simpler approach: this we did for formats we recognized as preservation formats but that aren't Archivematica preservation formats and have no normalization pathway. Since Archivematica didn't really have a better alternative, we stuck with our existing policies.

This was as simple as finding the appropriate format, clicking "Replace"...

...and setting it as a Preservation format.

Create a Command, Edit a Normalization Rule 

This started to get a bit more complicated. We did this for formats we didn't recognize as preservation formats, and neither did Archivematica, but where Archivematica's normalization pathway is not the same as ours. Again, these all stemmed from a difference in preferred preservation target for normalized video formats.

For these, Archivematica didn't have an existing command that worked for our purposes (it did have a tool, ffmpeg, that would). We had to write a little something up (which was inspired by other Archivematica commands) [2]...
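Roughly speaking, what we needed was an ffmpeg command that targets MP4 with H.264 video. This is a sketch rather than our exact command (the audio codec and quality settings here are illustrative, and the substitution variables should be checked against the other normalization commands in your FPR):

  # %fileFullName% is the original file; the output path is built from FPR variables
  ffmpeg -i "%fileFullName%" -c:v libx264 -pix_fmt yuv420p -crf 18 -c:a aac "%outputDirectory%%prefix%%fileName%%postfix%.mp4"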


...create a new normalization command...

...add in the information Archivematica needs for the new command...

...then go in and replace the rule for the appropriate format(s)...

...select the appropriate command (our new one!)...

...and finally verify that it had been changed.

Create a Normalization Rule

This we did for formats we didn't recognize as preservation formats, and neither did Archivematica, but for which Archivematica does not even have a normalization pathway (and we did). For these, we wanted to have Archivematica use our existing normalization pathway.

To create a new rule, we selected the "Create New Rule" option...

...and entered the new information (purpose, original format and the command to use) for the file format for which we wanted to create a new policy.


Manual Normalization and Other Thoughts...

That leaves us with a couple of outstanding issues, namely, legacy Microsoft Office documents and EML and other email formats (which Archivematica doesn't recognize at all--because the tools Archivematica uses for file format identification don't recognize them or they aren't registered in PRONOM).

The "ubiquity" argument aside, we'd really love to do something about older Microsoft Office documents, especially since currently these are the most common formats that we normalize. At the moment we use LibreOffice's Document Converter to handle conversion to a more sustainable format, i.e., Microsoft Office Open XML. However, Archivematica has looked into LibreOffice with the following results:
  • LibreOffice normalization led to significant losses in formatting information.
  • LibreOffice sometimes hangs, causing any future LibreOffice jobs to fail until an administrator manually kills the service.
  • LibreOffice sometimes reports that it succeeded despite not actually succeeding, making it difficult to determine whether or not the job really succeeded.

There may also be options here for converting to PDF, at least for documents. In the interim, we're still examining our options; at the very least we can change the FPR so that these formats are not recognized as preservation formats. We'll be looking into alternative approaches and plan to report back when appropriate.
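(For what it's worth, the batch conversion that LibreOffice's Document Converter wizard performs can also be driven from the command line in headless mode; this is a generic example, not our production setup:)

  # convert a folder of legacy .doc files to OOXML in one pass
  soffice --headless --convert-to docx --outdir /path/to/converted /path/to/originals/*.doc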


As for the email formats, we currently use a tool called aid4mail to convert these to MBOX files. This is a proprietary program, though, and only works on Windows, so we're looking into ways that we might manually normalize these files outside Archivematica (and associate the different versions of the files with one another inside Archivematica). This can be done, but we're looking into ways of doing it efficiently in batch, and again, we plan to report back when we've got something figured out.

To the FPR and Beyond!

Alright! That's about it for customizing the FPR; I think we've covered (in at least a basic way) all the different angles (with the exception, perhaps, of introducing a new tool to Archivematica!).

By the way, one of the most exciting things about the FPR is that since ours (and yours!) is actually registered with the Archivematica server, one day we all might be able to share this information in a more efficient fashion!

Have you customized the FPR? Are you too excited about the possibility of sharing FPR format policies via linked data? Let us know in the comments!

[1] Format policies are maintained by Artefactual, Inc., who provide a freely-available FPR server hosted at fpr.archivematica.org. This server stores structured information about normalization format policies for preservation and access.
[2] This could also have been written in Python.

Wednesday, October 5, 2016

Transferring Legacy Content into Archivematica's Backlog: `automation-tools` to the Rescue!

First off, these are exciting times at the Bentley. In just about every sense of the word, we are live with ArchivesSpace and, soon, real soon, we'll be live with Archivematica. In fact, I believe at least one of our processors may be in denial about the fact that he'll probably never use AutoPro again:


Remember, Devon, lament is the antidote to denial.

Alright, back to our regularly scheduled program. For us, part of "going live" with Archivematica is ensuring that our current backlog of unprocessed digital accessions makes its way safely and efficiently into Archivematica's backlog... so that those accessions can be appraised and arranged sometime soon with some new functionality that will be out in version 1.6! Today's post details some of the analysis we did on our current backlog, some of the "pre-transfer" manipulation we've done to the digital accessions in it, the Archivematica environment we're using (at least for now), and the automation-tools setup we're using to automate the process, as well as some of the errors we've encountered so far.

Our Current Backlog

As tempted as we were to just dive in, we knew we needed to do a bit of analysis on our current backlog, just to know what we were up against. Here are some basic stats we gathered that informed our subsequent decision-making.

Overall:
  • Total deposits: 209
  • Total size: 3.6 TB
  • Total number of files: 5,187,920 

Averages:
  • Average size: 17.2 GB
  • Average number of files: 5683.8

Top ten deposits by size:
  1. 806.6 GB (21.6%)
  2. 797.1 GB (21.4%)
  3. 705.6 GB (18.9%)
  4. 180.4 GB (4.8%)
  5. 154.1 GB (4.1%)
  6. 133.2 GB (3.6%)
  7. 128.2 GB (3.4%)
  8. 126.5 GB (3.4%)
  9. 121.1 GB (3.2%)
  10. 67.8 GB (1.8%)

For reference, only 6.7% of our deposits make up 99% of our total size; that means we're talking about a pretty long tail here.

Top ten deposits by number of files:
  1. 4,182,060
  2. 208,577
  3. 178,265
  4. 155,198
  5. 154,187
  6. 89,431
  7. 49,201
  8. 17,883
  9. 17,786
  10. 17,541

For reference, only 8.1% of our deposits have more than 5,000 files, so another really long tail.

Ensuring that Archivematica Could Handle this Backlog

Based on a couple of relevant conversations on the Archivematica listserv, we knew that Archivematica, like ArchivesSpace and Pokemon Go and just about every other piece of software out there, can have some scalability and performance issues when you start throwing a lot at it.

There are a couple in particular that we returned to again and again:
  • Archivematica Data Flow: This one gave us Justin Simpson's heuristic for processing space, and taught us that we'd need to consider the largest transfer we'd want to process at once and allocate 3 to 4 times as much space in our processing location (i.e., /var/archivematica/sharedDirectory). Note, this is not the same as storage space; this is just the amount you'll need as Archivematica runs through its various micro-services.
  • Elasticsearch indexing AIPs with many files: These basically just taught us to keep an eye on Elasticsearch. Indexing makes about "a zillion" queries (quoting our system administrator here), and with every query Archivematica has to look through all the files in a given transfer, once per file (still quoting). So it can take a while, especially for transfers with lots of files. In fact, this has been our most persistent source of frustration during testing, this round and previous rounds.
  • Archivematica scalability on very large object: This one taught us that Archivematica can handle really large files (our biggest, if I had to guess, is probably a ~65 GB video file) and that the number of files is really what we should be looking out for.
  • Also: transfers of around 80,000 files just won't work, no matter what setup you've got. This we learned from an internal e-mail, but apparently it's out there on the list somewhere.

In the end, there are basically two ways you can ensure that Archivematica doesn't run into scalability and performance issues. You can: 1) reduce the size or volume of your transfers; or 2) up the power of your Archivematica environment. After some back and forth with our system administrator, we ended up doing a little of both.

Pre-Transfer Manipulation

We've settled on somewhat arbitrary parameters for our transfers: no more than 50-60 GB or 50,000 files (by the 3-4x heuristic above, a 60 GB transfer needs roughly 180-240 GB of processing space). For the handful of transfers we encounter that are over those limits (I'm looking at you, Michigan governors and congresspeople), I've manually broken them up and given each piece a sequential suffix (e.g., _001, _002, _003) that becomes the "name" of the transfer in Archivematica; a sketch of how this could be scripted follows below. We'll keep the pieces together by assigning them the same accession number in Archivematica. We've even worked this into our procedures going forward.
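We did the splitting by hand, but something along these lines would do the same job if you wanted to script it (a sketch only; "bigdeposit" and the 50,000-file cap are stand-ins for your own deposit and limits):

  # move the contents of one oversized deposit into numbered chunks of at most 50,000 files each
  i=1; n=0
  dest=$(printf 'bigdeposit_%03d' "$i")
  find bigdeposit -type f | while IFS= read -r f; do
    if [ "$n" -ge 50000 ]; then
      i=$((i+1)); n=0
      dest=$(printf 'bigdeposit_%03d' "$i")
    fi
    rel="${f#bigdeposit/}"
    mkdir -p "$dest/$(dirname "$rel")"   # keep the original directory structure inside each chunk
    mv "$f" "$dest/$rel"
    n=$((n+1))
  done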

Archivematica Environment

Here's what we're working with so far. The VM has:
  • 16GB of RAM
  • 4 virtual CPUs
  • ~350GB of disk for processing

Our system administrator also set Elasticsearch's ES_HEAP_SIZE to 2G from the default of 640m. He also made some tweaks to MySQL, mostly guesswork based on mysqltuner:
  • key_buffer_size to 512M from the default of 16M
  • query_cache_size to 256M from default of 16M
  • query_cache_limit to 8M from default of 1M
  • innodb_buffer_pool_size to 512M from default of 128M
  • max_heap_table_size to 256M and tmp_table_size to 256M
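In configuration terms, those MySQL tweaks amount to something like the following in my.cnf (values as listed above; the file's location and exact layout depend on your distribution):

  [mysqld]
  key_buffer_size         = 512M
  query_cache_size        = 256M
  query_cache_limit       = 8M
  innodb_buffer_pool_size = 512M
  max_heap_table_size     = 256M
  tmp_table_size          = 256M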

Finally, he commented out the print statement in index_transfer_files in elasticSearchFunctions.py and added an index on currentLocation (this particular change will make it into Archivematica proper as you can see for yourself on this pull request). You can see the explanation on the Archivematica Tech List.

Aaron is awesome! These tweaks took our "Index transfer contents" step from 5 days on a 50,000-file transfer (when we got tired of waiting and quit) to about 2 hours. What?! Did I mention that Aaron is awesome?

Enter automation-tools


automation-tools are a set of Python scripts designed to automate the processing of transfers in an Archivematica pipeline. They're what's keeping me from having to manually start and check on all 209 (now 249, since I broke some of them up) transfers.

Installation


You won't need to install automation-tools manually if you're using the Ansible scripts; if not, you can install them using the instructions in the README. We ended up doing it both ways here on different versions of Archivematica.

Setup and Configuration/Customization

There are a couple of different ways to customize the automation-tools. One is by specifying a default processing configuration: use the Administration tab in Archivematica to set the default processing configuration, then copy the processing configuration file from /var/archivematica/sharedDirectory/sharedMicroServiceTasksConfigs/processingMCPConfigs/defaultProcessingMCP.xml to the transfers/ directory of your automation-tools installation location. Here's our default processing MCP if you're interested. That way, all transfers will get that processing configuration, even if you've changed the one in the dashboard to something different.
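In practice that copy is a one-liner; adjust the destination to wherever your automation-tools installation lives (ours is under /usr/lib/archivematica/automation-tools):

  cp /var/archivematica/sharedDirectory/sharedMicroServiceTasksConfigs/processingMCPConfigs/defaultProcessingMCP.xml /usr/lib/archivematica/automation-tools/transfers/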

Another way to customize is by creating "hooks," which tweak the behavior of automation-tools. There are a couple of options here, but one we've found particularly useful is the get-accession-number hook. This automatically fills in the accession number for a given transfer. Our version of this script takes the folder name (the only variable you have to work with, which for us is currently based on a local "Digital Deposit ID" convention) and looks it up in a simple dictionary of Digital Deposit IDs and accession numbers exported from our local FileMaker Pro database. It even accounts for transfers that have been split up and those that (gasp!) don't have accession numbers for one reason or another. All this serves to ensure that when we do a search through the backlog, we get every transfer associated with a particular accession, even if there's more than one!

Note: It's easy to miss, but you must name this script "get-accession-number" (without an extension). Don't name it "get-accession-id" (despite what the instructions say) or "get-accession-number.py", and don't create a folder called "get-accession-number" with a script inside of it. Yes, we made all of those mistakes...
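To give a sense of the shape of this hook, here's a rough sketch (not our actual script; the CSV path is made up, and you should check how your version of automation-tools passes the folder name and parses the output):

  #!/bin/bash
  # get-accession-number: automation-tools passes the transfer folder as the first argument
  # and reads the accession number from this script's standard output
  folder_name=$(basename "$1")
  # strip the _001/_002-style suffix we add when we split oversized deposits
  deposit_id=${folder_name%_[0-9][0-9][0-9]}
  # look the Digital Deposit ID up in a deposit-to-accession mapping exported from FileMaker
  awk -F, -v id="$deposit_id" '$1 == id {print $2}' /path/to/deposit-to-accession.csv
  # (some versions of automation-tools parse this output as JSON; if yours does, wrap the value in double quotes)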

There's also an option for pre-transfer hooks. We haven't explored these yet, but I can see using this to, for example, ensure that an Accession record with a particular ID and/or extent is already in ArchivesSpace (or, if not, even creating one on the fly when a transfer to Archivematica is made).

Running

You can test the automation-tools by running them with the following command:
/usr/share/python/automation-tools/bin/python -m transfers.transfer --user <user> --api-key <apikey> --ss-user <user> --ss-api-key <apikey> --transfer-source <transfer_source_uuid> --config-file <config_file>

Here's what all this means:
  • /usr/share/python/automation-tools/bin/python
The first part of the command tells the automation-tools to use a particular version of Python (the one with the libraries you installed earlier).

  •  transfers.transfer
You then tell it which script to run (for us, for now, the transfer script--you can also run ingest scripts). 

  •  --user <user> --api-key <apikey> --ss-user <user> --ss-api-key <apikey>
Next you give it your credentials for both Archivematica and the Storage Service, which you can find by navigating to your user profile in Archivematica and the Storage Service, respectively. 

  • --transfer-source <transfer_source_uuid>
Transfer source tells the automation-tools where to look for transfers (you can find the UUID in the Storage Service).

  • --config-file <config_file>
The configuration file tells automation-tools where to store the database, log files and a "-pid.lck" file, which keeps the automation-tools from starting a new transfer when one is already going. By the way, every once in a while this file seems to stick around when it shouldn't, so you have to go there and delete it.

We also add the following options:
  • --transfer-type 'unzipped bag'
This simply lets the automation-tools know what kind of transfer you're doing. We have to specify this since we're not using the default of 'standard'.

  • --hide
This hides the transfer from the dashboard when we're through. As we learned from this conversation on Managing the dashboard (transfer and ingest), you've got to keep this cleaned up or else you'll run into some browser timeout issues.

  • --verbose
This increases the debugging output because, well, why not?

You could also write a shell script (don't forget to chmod +x transfer-script.sh to make it executable first!); there's a sketch of what such a script might look like after the crontab below. When you're ready to go, you can set up a cron job (or in our case, three) to automatically run different versions of the script (pointing to different source locations or using different parameters) at given intervals. Here's what our crontab entry looks like:

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/lib/archivematica/automation-tools/transfer-script_legacy.sh
1,6,11,16,21,26,31,36,41,46,51,56 * * * * /usr/lib/archivematica/automation-tools/transfer-script_bhl-digitalarchive.sh
2,7,12,17,22,27,32,37,42,47,52,57 * * * * /usr/lib/archivematica/automation-tools/transfer-script_bhl-dropbox.sh


This runs the three automation-tools scripts we have (one for legacy transfers and two for current transfers from two different source locations, to which we'll move transfers from separate staging areas) at regular intervals (every five minutes), but ensures that they never run at the same time. Be prepared for about "a zillion" emails (for real) from some guy named "Cron Daemon" :).
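In case it helps, here's a minimal sketch of what one of those wrapper scripts might contain (the placeholders are the same ones as above, and the cd matches where our automation-tools live):

  #!/bin/bash
  # transfer-script_legacy.sh: start (or check on) automated transfers from one source location
  cd /usr/lib/archivematica/automation-tools/
  /usr/share/python/automation-tools/bin/python -m transfers.transfer \
    --user <user> --api-key <apikey> \
    --ss-user <user> --ss-api-key <apikey> \
    --transfer-source <transfer_source_uuid> \
    --config-file <config_file> \
    --transfer-type 'unzipped bag' --hide --verbose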

Once you have this going, sit back, relax, party like a partyparrot and never look at the Archivematica transfer dashboard again!


Errors Encountered Thus Far

That is, until you hit an error. And we've hit a few. Here's what our inboxes looked like after the first weekend of letting automation-tools go:




This, I'll admit, looks like a lot of failures, until you remember that many more transfers went through without issue. If we can count what's gone through so far as a sample, we have about an 80% success rate. Excluding the errors we got due to some permissions issues we eventually worked out, here are a couple of common errors we've hit and how we're handling them:
  • Approve transfer micro-service, Verify bag, and restructure for compliance job: This is the most common. Generally, it's because some Thumbs.db file has snuck in or out, and it just requires a simple bag update to get things going again.



  • Approve transfer micro-service, Assign checksums and file sizes to objects job: The only commonality I could see in these is that many of them look like they have a weird character in the file name or path (e.g., "black body 1025.mov" or "FY2014/archival files/thach_diane_2014/Fireside_Preview_Panel7.svg"), or look like hidden files that start with a period (like "._ 2002 - Mary Sue Coleman"). Actually, we're still looking into this error--let us know if you've got any ideas!
  • Scan for viruses micro-service, Scan for viruses job: Yep, this actually happened. It's an example of a case where our local virus scanner (we use System Center Endpoint Protection) didn't catch something that Archivematica's (ClamAV) did. Because of the discrepancy we thought it might be a false positive. In any case, this happened to be something that was mass-produced (actually from the development branch of the university... hmm...), so I simply got another copy and got rid of the old one.
  • Create SIP from Transfer micro-service, Index transfer contents job: This one never actually failed, it just went very slowly. See the tweaks we made above.

Conclusion

And so begins our Archivematica journey, one transfer at a time! Recently we got the full demo, pre-SIP to AIP to DIP from Artefactual... we're super pumped! In a few short weeks, we'll be live "for real" with Archivematica here. Stay tuned for more!