Tuesday, February 26, 2013

Ngoni Munyaradzi on Transcribe Bleek and Lloyd

Ngoni Munyaradzi is a Master's student in Computer Science at the University of Cape Town, South Africa, working on a research project on the transcription of the Digital Bleek and Lloyd collection. He kindly agreed to an interview over email, which I present below:

Your website does an excellent job explaining the background and motivation of Transcribe Bleek and Lloyd.  Can you tell us more about the field notebooks you are transcribing?

The Digital Bleek and Lloyd Collection is composed of dictionaries, artwork and notebooks documenting stories about the earliest inhabitants of Southern Africa, the Bushman people. The notebooks were written in the 19th century by Wilhelm Bleek, his sister-in-law Lucy Lloyd, and Dorothea Bleek (Wilhelm's daughter), with the help of a number of Bushman people who were prisoners in the Western Cape region of South Africa at the time. The notebooks were recorded in the |Xam and !Kun languages, and English translations are included in the notebooks.

Link to the collection: http://lloydbleekcollection.cs.uct.ac.za/

Correct me if I'm wrong, but it seems like at least in the case of |Xam, you are working with one of the only representatives of an extinct language. Are there any standard data models for these kinds of vocabularies/bilingual texts which you're using?

There are no complete models - the best known models are still only partial.

I suspect that I'm not alone in wondering why these Bushman people were prisoners during the writing of these texts. Can you tell us a bit more about the Bleek/Lloyd informants, or point us to resources on the subject?

The Bushman people were prisoners because of petty crimes and a grossly unfair colonial government. On the Bleek and Lloyd website there is a story on each contributor. There is information in various books on the subject as well, but I am not sure more is known than what is on the website. See:
http://lloydbleekcollection.cs.uct.ac.za/xam.html
http://lloydbleekcollection.cs.uct.ac.za/kun.html

This is the first transcription project I'm aware of using the Bossa Crowd Create platform. What are the factors that led you to choose that platform and what's been your experience setting it up?

In 2011, when our project began, Bossa was the most mature open-source crowdsourcing framework tailored for volunteer projects, so it suited the project's requirements well. The alternative crowdsourcing frameworks available at the time relied on payment models rather than volunteers.

Setting up the Bossa framework was a relatively straightforward task. The online documentation is very thorough, with examples of how to set up test applications. I also got assistance from David Anderson, the developer of Bossa.

The Bushman writing system seems extremely complex with its special characters and multiple diacritics. I see that you are using LaTeX macros to encode these complexities. Why did you decide on LaTeX, and what has been the user response to using that notation?

The project is part of ongoing research related to the Bleek and Lloyd Collection within our Digital Libraries Laboratory at the University of Cape Town. Credit for developing the encoding tool goes to Kyle Williams. The reason he chose LaTeX is that custom LaTeX macros allowed both the encoding and the visual rendering of the text to be solved in a single step. Developing a unique font for the Bushman script is something we might look at in the future!

Here's a link to a paper published on the encoding tool developed by Kyle Williams: http://link.springer.com/chapter/10.1007%2F978-3-642-24826-9_28

Overall the user feedback has been good, as most users are able to complete transcriptions using the LaTeX macros. We have gotten suggestions from users to use glyphs to encode the complexities, but that is currently outside the scope of my master's research project. There are talks in our research group to develop a unique font to represent the |Xam and !Kun languages, as these are not supported by Unicode.

User 1 Comment: "I think the palette handles the complexity of the character set very well. This material is inherently difficult to transcribe. The tool has, on the whole, been well thought out to meet this challenge. I think it needs to be improved in some ways, but considering the difficulties it is remarkably well done."

User 2 Comment: "VERY intuitive, after a few practice transcriptions. I actually enjoyed using the tool after a page was done."

This is incredibly useful. So far as I'm aware, yours is only the third crowdsourced transcription project that's surveyed users seriously (after the North American Bird Phenology Project and Transcribe Bentham). Do you have any advice on collecting user feedback at such an early stage?

Collecting user feedback in the early stages tremendously helps project administrators determine whether the project's setup is easy for participants to follow. One can easily pick up any hindrances to user participation and address them early. From our project, I've found that participants can suggest very helpful ideas that make the data collection process better.

Crowdsourced citizen science and cultural heritage projects have mostly been based in the USA, Northern Europe and Australia until recently -- in fact, yours is the first that I'm aware of originating in sub-Saharan Africa. I'd really like to know which projects inspired your work with Transcribe Bushman, and what your hopes are for crowdsourced transcription projects focusing on Africa?

Our work was mostly inspired by the success of GalaxyZoo at recruiting volunteers, and also by the Transcribe Bentham project, which explored the feasibility of volunteers performing transcription. I hope that more crowdsourced transcription projects will start up within Africa in the near future. What would be interesting is to see a transcription project for the Timbuktu manuscripts of Mali. Beyond transcription, I would like to see other researchers adopting crowdsourcing in their fields of specialty within Africa.

Thanks so much for this interview. If people want to help out on the project, what's the best way for them to contribute?

Interested participants can simply:
  1. Create an account on the project website.
  2. Watch a 5 minute video tutorial on how to transcribe the Bushman languages.
  3. With that, you are ready to start transcribing pages.

Monday, February 25, 2013

Detecting Handwriting in OCR Text

This is my fourth and final post about the iDigBio Augmenting OCR Hackathon.  Prior posts covered the hackathon itself, my presentation on preliminary results, and my results improving the OCR on entomology specimens.  The other participants are  slowly adding their results to the hackathon wiki, which I recommend checking back with (their efforts were much more impressive than mine).

Clearly handwritten: T=8, N=78% from terse and noisy OCR files

Let's say you have scanned a large number of cards and want to convert them from pixels into data.  The cards--which may be bibliography cards, crime reports, or (in this case) labels for lichen specimens--have these important attributes:
  1. They contain structured data (e.g. title of book, author, call number, etc. for bibliographies) you want to extract, and
  2. They were part of a living database built over decades, so some cards are printed, some typewritten, some handwritten, and some with a mix of handwriting and type.
The structured aspect of the data makes it quite easy to build a web form that asks humans to transcribe what they see on the card images.  It also allows for sophisticated techniques for parsing and cleaning OCR (which was the point of the hackathon).  The actual keying-in of the images is time-consuming and expensive, however, so you don't want to waste human effort on cards that could be processed via OCR.

Since OCR doesn't work on handwriting, how do you know which images to route to the humans and which to process algorithmically?  It's simple: any images that contain handwriting should go to the humans.  Detecting the handwriting on the images is unfortunately not so simple.

I adopted a quick-and-dirty approach for the hackathon: if OCR of handwriting produces gibberish, why not send all the images through a simple pass of OCR and look in the resulting text files for representative gibberish?  In my preliminary work, I pulled 1% of our sample dataset (all cards ending with "11") and classified them three ways:
  1. Visual inspection of the text files produced by an ABBY OCR engine,
  2. Visual inspection of the text files produced by the Tesseract OCR engine, and
  3. Looking at the actual images themselves.

To my surprise, I was only able to correctly classify cards from OCR output 80% of the time -- a disappointing finding, since any program I produced to identify handwriting from OCR output could only be less accurate.  More interesting was the difference between the kinds of files that ABBY and Tesseract produced.  Tesseract produced a lot more gibberish in general--including on card images that were entirely printed.  ABBY, on the other hand, scrubbed a lot of gibberish out of its results, including that which might be produced when it encountered handwriting.

This suggested an approach: look at both the "terse" results from ABBY and the "noisy" results from Tesseract to see if I could improve my classification rate.
Easily classified as type-only, despite (non-characteristic) gibberish: T=0,N=0 from terse and noisy OCR files.

But what does it mean to "look" at a file?  I wrote a program to loop through each line of an OCR file and check for the kind of gibberish that is characteristic of OCR run over handwriting.  Inspecting the files reveals some common gibberish patterns, which we can sum up as regular expressions:

GARBAGE_REGEXEN = {
  'Four Dots' => /\.\.\.\./,
  'Five Non-Alphanumerics' => /\W\W\W\W\W/,
  'Isolated Euro Sign' => /\S€\D/,
  'Double "Low-Nine" Quotes' => /„/,
  'Anomalous Pound Sign' => /£\D/,
  'Caret' => /\^/,
  'Guillemets' => /[«»]/,
  'Double Slashes and Pipes' => /(\\\/)|(\/\\)|([\/\\]\||\|[\/\\])/,
  'Bizarre Capitalization' => /([A-Z][A-Z][a-z][a-z])|([a-z][a-z][A-Z][A-Z])|([A-LN-Z][a-z][A-Z])/,
  'Mixed Alphanumerics' => /(\w[^\s\w\.\-]\w).*(\w[^\s\w]\w)/
}

However, some of these expressions match non-handwriting features like geographic coordinates or bar codes.  Handling these requires a white list of regular expressions for gibberish we know not to be handwriting:

WHITELIST_REGEXEN = {
  'Four Caps' => /[A-Z]{4,}/,
  'Date' => /Date/,
  'Likely year' => /1[98]\d\d|2[01]\d\d/,
  'N.S.F.' => /N\.S\.F\.|Fund/,
  'Lat Lon' => /Lat|Lon/,
  'Old style Coordinates' => /\d\d°\s?\d\d['’]\s?[NW]/,
  'Old style Minutes' => /\d\d['’]\s?[NW]/,
  'Decimal Coordinates' => /\d\d°\s?[NW]/,  
  'Distances' => /\d?\d(\.\d+)?\s?[mkf]/,  
  'Caret within heading' => /[NEWS]\^s/,
  'Likely Barcode' => /[l1\|]{5,}/,
  'Blank Line' => /^\s+$/,
  'Guillemets as bad E' => /d«t|pav«aont/  
}

With these on hand, we can calculate a score for each file based on the number of occurrences of gibberish we find per line.  That score can then be compared against a threshold to determine whether a file contains handwriting. Due to the noisiness of the Tesseract files, I found it most useful to calculate their score N as a percentage of non-blank lines, while the score for the terse files T worked best as a simple count of gibberish matches.
Threshold             Correct   False Positives   False Negatives
T > 1 and N > 20%     82%       10 of 45          8 of 60
T > 0 and N > 20%     84%       13 of 45          4 of 60
T > 1                 79%       10 of 45          12 of 60
N > 20%               75%       8 of 45           18 of 60
N > 10%               81%       14 of 45          6 of 60
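
To make the scoring concrete, here is a minimal Ruby sketch of it, assuming the GARBAGE_REGEXEN and WHITELIST_REGEXEN hashes above. It is not the code in the repository linked at the end of this post, and it simplifies the whitelist to a per-line veto rather than testing individual matches:

def gibberish_line?(line)
  # A line counts as gibberish if it matches a garbage pattern and is not
  # excused by a whitelist pattern (a simplification of the real check).
  return false if WHITELIST_REGEXEN.values.any? { |re| line =~ re }
  GARBAGE_REGEXEN.values.any? { |re| line =~ re }
end

# T: a raw count of gibberish lines in the terse (ABBY) file.
def terse_score(path)
  File.readlines(path).count { |line| gibberish_line?(line) }
end

# N: gibberish lines as a percentage of the non-blank lines in the noisy
# (Tesseract) file.
def noisy_score(path)
  lines = File.readlines(path).reject { |line| line =~ /\A\s*\z/ }
  return 0 if lines.empty?
  100.0 * lines.count { |line| gibberish_line?(line) } / lines.size
end

# Flag a card as handwritten when both scores clear their thresholds.
# The defaults correspond to the best-scoring row in the table above.
def handwriting?(terse_path, noisy_path, t_threshold = 0, n_threshold = 20)
  terse_score(terse_path) > t_threshold && noisy_score(noisy_path) > n_threshold
end
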
One interesting thing about this approach is that adjusting the thresholds lets us tune the classifications for resources and desired quality. If our humans doing data entry are particularly expensive or impatient, raising the thresholds should ensure that they are only very rarely sent typed text. On the other hand, lowering the thresholds would increase the human workload while improving quality of the resulting text.
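
With the handwriting? method sketched above, that tuning is just a matter of the threshold arguments. Something like this hypothetical routing loop would do it (the directory layout here is invented for illustration):

# Route each card to a human or to the OCR pipeline; raise the thresholds
# to protect volunteers from typed cards, lower them to favor quality.
Dir.glob("abby/*.txt").each do |terse|
  noisy = terse.sub("abby/", "tesseract/")
  queue = handwriting?(terse, noisy, 1, 20) ? "humans" : "ocr"
  puts "#{File.basename(terse)} -> #{queue}"
end
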
One of the false negatives: T=0, N=10% from parsing terse and noisy text files.

I'm really pleased with this result.  The combined classifications are slightly better than I was able to accomplish by looking at the OCR myself.  The experience of a volunteer presented with 56 images containing handwriting and 13 which don't may necessitate a "send to OCR" button in the user interface, but must be less frustrating than the unclassified ratio of 45 in 105 from the sample set.  With a different distribution of handwriting-to-type in the dataset, the process might be very useful for extracting rare typed material from a mostly-handwritten set, or vice versa.

All of the datasets, code, and scored CSV files are in the iDigBio AOCR Hackathon's HandwritingDetection repository on GitHub.

Friday, February 15, 2013

Results of the "Ocrocrop" Approach to Improving OCR

This project attempted to improve the quality of OCR applied to difficult entomology images[*] by cropping labels from the images to run through OCR separately. In order to identify labels on the image to crop, an initial, 'naive' pass of OCR was made over the whole image, generating both
  • A) a set of rectangles on the image defined as word bounding boxes by the OCR engine, and 
  • B) a control OCR text file to serve as a baseline for comparing the 'naive' approach with the methodology described below.
Those word rectangles were then filtered, consolidated, and filtered again to identify the labels on the image, which were then extracted and run through the OCR engine separately. The resulting OCR output files were then concatenated into a single text file, which was compared against the 'naive' output described in A (above).

I'll call this method "ocrocrop". (For more detail on method, see the transcript of my preliminary presentation.)
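
For a rough sense of the mechanics, here is a Ruby sketch -- not the actual hackathon code; Nokogiri and the shell calls are my assumptions -- that pulls word bounding boxes out of Tesseract's hOCR output and crops one label region with ImageMagick for a second pass of OCR:

require 'nokogiri'   # assumption: any HTML parser would do

# The hOCR file comes from something like: tesseract image.jpg naive hocr
# Tesseract wraps each recognized word in a span of class "ocrx_word" whose
# title attribute carries the bounding box, e.g. title="bbox 451 199 509 220".
def word_boxes(hocr_path)
  Nokogiri::HTML(File.read(hocr_path)).css("span.ocrx_word").map do |span|
    next unless span["title"] =~ /bbox (\d+) (\d+) (\d+) (\d+)/
    { text: span.text.strip, x0: $1.to_i, y0: $2.to_i, x1: $3.to_i, y1: $4.to_i }
  end.compact
end

# Crop one consolidated label rectangle out of the image with ImageMagick,
# then OCR the crop by itself (tesseract writes out_base.txt).
def ocr_crop(image_path, rect, out_base)
  width  = rect[:x1] - rect[:x0]
  height = rect[:y1] - rect[:y0]
  system("convert", image_path,
         "-crop", "#{width}x#{height}+#{rect[:x0]}+#{rect[:y0]}",
         "#{out_base}.png")
  system("tesseract", "#{out_base}.png", out_base)
end

The filtering and consolidation of those word rectangles into label regions are sketched in the talk transcript below.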

The results were encouraging. (See CSV file listing results for each file, and the directory containing "naive" output, annotated JPGs, and cleaned output files for each test.)

Of 80 files tested, 20 experienced a decrease in score (see Alex Thomson's scoring service), but most (14/20) of those were on OCR output below 10% accuracy in the first place, and the remainder were at or below 20% accuracy. So it is reasonable to say that the ocrocrop method only degraded the quality of texts that were unusable in the first place.

40 of the 80 files tested showed more promising results, showing improvements from one to twenty percentage points -- in some cases only marginally improving unusable (below 10% accurate) outputs, but in many cases improving the scores more substantially (say from 25% to 35% in the case of EMEC609908_Stigmus_sp).

Most of the top quartile of results saw improvements on texts that were already scoring above 10% accuracy rates (16 of 20), so it appears that the effectiveness of the ocrocrop method is correlated to the quality of the naive input data -- garbage is degraded or only minimally improved, while OCR that is merely bad under the naive approach can be significantly improved.


The ocrocrop method saw the greatest improvement in cases where the naive OCR pass was effective at identifying word bounding boxes, but ineffective at translating their contents into words. Taking EMEC609928_Stigmus_sp, the case of greatest improvement (naive: 18.9%, ocrocrop: 70.5%), we see that all words on the labels except for the collector name were recognized as words (in purple), making the cropped label images (in blue) good representatives of the actual labels on the image.

The cropped image was more easily processed by our OCR engine, so we can compare the naive version of the second label:
 CALIF:Hunbo1dt Co. ;‘ ~
 3 m1.N' Garbervflle ,::f< '_- '
 v—23~75 n.n1e:z.' 9 ._ ’
with the ocrocrop version of the second label:
 CALIF:Humboldt Co.
 3 mi.N Garberville
 V-23-76 R.Dietz,'


One of the problems with the OCR-based pre-processing which may be hidden by the scores is that many labels are entirely missed by the ocrocrop if the first, naive OCR pass failed to identify any words at all on the label. In cases such as EMEC609651_Cerceris_completa, the determination label was not cropped (crops are indicated by blue rectangles) because no words (purple rectangles) were detected on it by the original pass. As a result, while the ocrocrop OCR is an improvement over the naive OCR (6.6% vs. 6.5%), substantial portions of text on the image are unimproved because they are unattempted.

There are two possible ways to solve this problem. One is to abandon the ocrocrop model entirely, switching back to a computer vision approach -- either by programmatically locating rectangles on the image (as Phuc Nguyen demonstrated) or by asking humans to identify regions of interest for OCR processing (as demonstrated by Jason Best in Apiary and by Paul and Robin Schroeder in ScioTR). The other option is to improve the naive OCR -- perhaps by swapping out the engine (e.g. use ABBY instead of Tesseract), perhaps by using a different image pre-processor (like ocropus's front-end to Tesseract), perhaps by re-training Tesseract.

I suspect that a computer vision approach to extracting entomology labels (or similar pieces of paper photographed against a noisy background) will provide a more effective eventual solution than the ocrocrop method. Nevertheless, the ocrocrop "bang it with a rock until it works" approach has a lot of potential to take entomology-style OCR to bad from worse.

[*]In addition to the difficulties typical of specimen labels--mix of typefaces, handwritten material, typewritten material, text inventory with few overlaps with a dictionary of literary English--the entomology dataset contained additional challenges. Difficulties included the following:
  • Images containing specimens and rulers as well as labels. 
  • Labels casually arranged for photography, so that text orientation was not necessarily aligned. 
  • Labels photographed against a background of heavily pin-pricked styrofoam rather than a black or neutral background. 
  • 3-d images including what appear to be shadows, which soften the contrast differences around borders.

iDigBio Augmenting OCR Hackathon

I spent the last three days at the iDigBio Augmenting OCR Hackathon working alongside mycologists, botanists, entomologists, herbarium managers, and bioinformaticians to explore ways to improve parsing of digitized specimen labels.  While I'm pleased with the results of my own contribution, I'd like to take a minute to talk about the hackathon process itself before I post them.

This was my first hackathon--a condition which seemed to be the rule among the participants--and I was really impressed with it.  The iDigBio folks defined a clear set of goals (improve OCR parsing of specimen labels) with clear metrics (these datasets, these output formats, this scoring algorithm) a couple of months beforehand, and organized five weekly videoconferences before the event.  Most important of all, the participants were encouraged to prepare a 10-minute lightning talk on their efforts and preliminary results.  (See below for the transcript of my talk, see the notes document for descriptions of all talks.)

In my opinion, these preliminary talks were critical to the success of the project.  The preliminary nature relaxed pressure on participants, so we were able to experiment beyond the target of the hackathon (as I did with my handwriting detection digression, a related, but un-scorable effort).  On the other hand, they did provide enough impetus to get many of us looking at the data, working with the tools, and thinking about approaches.  This meant that even before the hackathon started, many of us were familiar enough with the materials to have a real 'meeting of the minds' experience during the pre-event supper:  "Did you just say 'the contrast difference between the print and the label is higher than the difference between the label and the background'?  We ran into that too, and here's what we did..."

The experience was a real education in OCR for me, and I feel like I picked up techniques I can apply directly to projects I've discussed with clients and potential clients.  In particular, I got a real appreciation for how interrelated image preparation, OCR, and parsing are to each other.  One participant had created separate libraries of regular expressions to clean up each kind of field, having discovered that latitude/longitude coordinates require different error correction than personal names or herbarium catalog numbers do.  Another group had built a touch-screen tool for classifying segments of the image before submitting them to OCR.  My own project required a first pass of OCR to clean images before sending them to a second, 'real' pass of OCR.  A simple 1,2,3 workflow just isn't sufficient!

iDigBio itself is an NSF-funded attempt to advance digitization practices on natural history collections, combining disciplinary "thematic collection networks" and methodologically focused working groups on topics like georeferencing, crowdsourcing, and OCR.  Aware that they're not the only people digitizing things, they have been reaching out beyond the natural sciences to the library and information science community at the iConference this year.  This rejection of "not invented here" siloing was a big part of the hackathon, and I hope that more people from outside the natural sciences will get involved.

Thursday, February 14, 2013

Improving OCR Inputs from OCR Outputs?

This is a transcript of my talk at the iDigBio Augmenting OCR Hackathon, presenting preliminary results of my efforts before the event.

For my preliminary work, I tried to improve the inputs to our OCR process by looking at the outputs of a naive OCR pass.
One of the first things that we can do to improve the quality of our inputs to OCR is to not feed them handwriting.  To quote Homer Simpson, "Remember son, if you don't try, you can't fail."  So let's not try feeding our OCR processes handwritten materials.
To do this, we need to try to detect the presence of handwriting.  When you try to feed handwriting to OCR, you get a lot of gibberish.  If we can detect handwriting, we can route some of our material to "humans in the loop" -- not wasting their time with things we could be OCRing.  So how do we do this?
My approach was to use the outputs of [naive] OCR to detect the gibberish it produces when it sees handwriting, to determine when there was handwriting present in the images.  The first thing I did, before I started programming, was to classify OCR output from the lichen samples by visual inspection: whether I thought there was handwriting present or not, based on looking at the OCR outputs.  Step two was to automate the classifications.
I tried this initially on the results that came out of ABBY and then the results that came out of Tesseract, and I was really surprised by how hard it was for me as a human to spot gibberish.  I could spot it, but ABBY does a great job of cleaning up its OCR output -- so in a lot of cases, particularly labels that were all printed with the exception of a handwritten species name, ABBY generally misses those.  Tesseract, on the other hand, does not produce outputs that are quite as clean.

So the really interesting thing about this to me is that while we were able to get 70-75% accuracy on both ABBY and Tesseract, if you look at the difference between the false positives that come out of ABBY and Tesseract and the false negatives, I think there is some real potential here for making a much more sophisticated algorithm.  Maybe the goal is to pump things through ABBY for OCR, but beforehand look at Tesseract output to determine whether there is handwriting or not.
The next thing I did was try to automate this.  I just used some regular expressions to look for representative gibberish, and then based on the number of matches got results that matched the visual inspection, though you do get some false positives. 
The next thing I want to do with this is to come up with a way to filter the results based on doing a detection on ABBY [output] and doing a detection on Tesseract [output].
The next thing that I wanted to work on was label extraction.

We're all familiar with the entomology labels and problems associated with them.
So if you pump that image of Cerceris through Tesseract, you end up with a lot of garbage. You end up with a lot of gibberish, a lot of blank lines, some recognizable words.  That "Cerceris compacta" is, I believe, the result of a post-digitization process: it looks like an artifact of somebody using Photoshop or ImageMagick to add labels to the image.  The rest of it is the actual label contents, and it's pretty horrible.  We've all stared at this; we've all seen it.
So how do you sort the labels in these images from rulers, holes in styrofoam, and bugs?  I tried a couple of approaches.  I first tried to traverse the image itself, looking for  contrast differences between the more-or-less white labels and their backgrounds.  The problem I found with that was that the highest contrast regions of the image are the difference between print and the labels behind the print.  So you're looking for a fairly low-contrast difference--and there are shadows involved.  Probably, if I had more math I could do this, but this was too hard.

So my second try was to use the output of OCR that produces these word bounding boxes to determine where labels might be, because labels have words on them. 
If you run Tesseract or Ocropus with an "hocr" option, you get these pseudo-HTML files that have bounding boxes around the text.  Here you see this text element inside a span; the span has these HTML attributes that say "this is an OCR word".  Most importantly, you have the title attribute as the bounding box definition of a rectangle. 
If you extract that and re-apply it to an image, you see that there are a lot of rectangles on the image, but not all the rectangles are words.  You've got bees, you've got rulers; you've got a lot of random trash in the styrofoam.
So how do we sort good rectangles from bad rectangles?  First I did a pass looking at the OCR text itself.  If the bounding box was around text that looked like a word, I decided that this was a good rectangle.  Next, I did a pass by size.  A lot of the dots in the styrofoam come out looking suspiciously word-like for reasons I don't understand.  So if the area of the rectangle was smaller than .015% of the image, I threw it away.
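
[In code, those two passes might look something like the sketch below -- an illustration only; the rectangle fields and the "looks like a word" test are stand-ins, not the actual implementation.]

MIN_AREA_FRACTION = 0.00015   # .015% of the image area, as described above

def word_like?(text)
  # Stand-in heuristic: a run of three or more letters counts as a word.
  text =~ /[A-Za-z]{3,}/
end

def keep_rectangle?(rect, image_width, image_height)
  area = (rect[:x1] - rect[:x0]) * (rect[:y1] - rect[:y0])
  word_like?(rect[:text]) && area >= MIN_AREA_FRACTION * image_width * image_height
end
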
The result was [above]: you see rectangles marked with green that pass my filter and rectangles marked with red that don't.  So you get rid of the bee, you get rid of part of the ruler -- more important, you get rid of a lot of the trash over here. [Pointing to small red rectangles on styrofoam.] There are some bugs in this--we end up getting rid of "Arizona" for reasons I need to look at--but it does clean the thing up pretty nicely.

Question: A very simple solution to this would be for the guys at Berkeley to take two photographs -- one of the bee and ruler, one of the labels.  I'm just thinking how much simpler that would be.

Me: If the guys in Berkeley had a workflow that took the picture--even with the bee--against a black background, that would trivialize this problem completely! 

Question: If the photos were taken against a background of wallpaper with random letters, it couldn't be much worse than this [styrofoam].  The idea is that you could make this a lot easier if you would go to the museums and say, we'll participate, we'll do your OCRing, but you must take photographs this way.

Me: You're absolutely right.  You could even hand them a piece of cardboard that was a particular color and say, "Use this and we'll do it for you, don't use it and we won't."  I completely agree.  But this is what we're starting with, so this is what I'm working on.
The next thing is to aggregate all those word boxes into the labels [they constitute]. For each rectangle, look at all of the other rectangles in the system, expand them both a little bit, determine if they overlap, and if they do, consolidate them into a new rectangle, and repeat the process until there are no more consolidations to be done. [Thanks to Sara Brumfield for this algorithm.]
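
[A minimal sketch of that consolidation step, assuming rectangles represented as hashes of :x0/:y0/:x1/:y1 coordinates and an arbitrary padding -- an illustration, not the actual code.]

PAD = 10   # assumption: how far to expand each rectangle before testing overlap

def overlap?(a, b)
  a[:x0] - PAD < b[:x1] + PAD && b[:x0] - PAD < a[:x1] + PAD &&
    a[:y0] - PAD < b[:y1] + PAD && b[:y0] - PAD < a[:y1] + PAD
end

def merge_rects(a, b)
  { x0: [a[:x0], b[:x0]].min, y0: [a[:y0], b[:y0]].min,
    x1: [a[:x1], b[:x1]].max, y1: [a[:y1], b[:y1]].max }
end

# Repeatedly merge any two overlapping rectangles into their bounding box
# until no more consolidations can be done. Brute force, but the rectangle
# counts here are small.
def consolidate(rects)
  rects = rects.dup
  loop do
    merged = false
    rects.combination(2).each do |a, b|
      next unless overlap?(a, b)
      rects.delete(a)
      rects.delete(b)
      rects << merge_rects(a, b)
      merged = true
      break
    end
    break unless merged
  end
  rects
end
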
If you do that, the blue boxes are the consolidated rectangles.  Here you see a rectangle around the U.C. Berkeley label, a rectangle around the collector, and a pretty glorious rectangle around the determination that does not include the border. 
Having done that, you want to further filter those rectangles.  Labels contain words, so you can reject any rectangles that were "primitives" -- you can get rid of the ruler rectangle, for example, because it was just a single [primitive] rectangle that was pretty large. 
So you make sure that all of your rectangles were created through consolidation, then you crop the results.  And you end up automatically extracting these images from that sample -- some of which are pretty good, some of which are not.  We've got some extra trash here, we cropped the top of "Arizona" here.  But for some of the labels -- I don't think I could do better than that determination label by hand. 
Then you feed the results back into Tesseract one by one, then we combine the text files in Y-axis order to produce a single file for all those images.  (Not something that's a necessary step, but that does allow us to compare the results with the "raw" OCR.)  How did we do?
This is a resulting text file -- we've got a date that's pretty recognizable, we've got a label that's recognizable, and the determination is pretty nice.
Let's compare it to the raw result.  In the cropped results, we somehow missed the "Cerceris compacta", we did a much nicer job on the date, and the determination is actually pretty nice.
Let's try it on a different specimen image.
We run the same process over this Stigmus image.  We again find labels pretty well.
 When we crop them out, the autocrop pulls them out into these three images.

Running those images through OCR, we get a comparison of the original, which had a whole lot of gibberish. 
The original did a decent job with the specimen number, but the autocrop version does as well.  In particular, for this location [field], the autocrop version is nearly perfect, whereas the original is just a mess.
My conclusion is that we can extract labels fairly effectively by first doing a naive pass of OCR and looking at the results of that, and that the results of OCR over the cropped images are less horrible than running OCR over the raw images -- though still not great. 
[2013-02-15 update: See the results of this approach and my write-up of the iDigBio Augmenting OCR Hackathon itself.]