Tuesday, December 31, 2013

Code and Conversations in 2013

It's often hard to explain what it is that I do, so perhaps a list of what I did will help.  Inspired by Tim Sherratt's "talking" and "making" posts at the end of 2012, here's my 2013. 


I work on a number of software projects, whether as contract developer, pro bono "code fairy", or product owner.  


It's been a big year for FromThePage, my open-source tool for manuscript transcription and annotation.  We started work upgrading the tool to Rails 3, and built a TEI Export (see discussion on the TEI-L) and an exploratory Omeka integration.  Several institutions (including University of Delaware and the Museum of Vertebrate Zoology) launched trials on FromThePage.com for material ranging from naturalist field notes to Civil War diaries.  Pennsylvania State University joined the ranks of on-site FromThePage installations with their "Zebrapedia", transcribing Philip K. Dick's Exegesis -- initially as a class project and now as an ongoing work of participatory scholarship.

One of the most interesting developments of 2013 was that customizations and enhancements to FromThePage were written into three grant applications.  These enhancements--if funded--would add significant features to the tool, including Fedora integration, authority file import, redaction of transcripts and facsimiles, and support for externally-hosted images.  All these features would be integrated into the FromThePage source, benefiting everybody.

Two other collaborations this year promise interesting developments in 2014.  The Duke Collaboratory for Classics Computing (DC3) will be pushing the tool to support 19th-century women's travel diaries and Byzantine liturgical texts, both of which require more sophisticated encoding than the tool currently supports.  (Expect Unicode support by Valentine's Day.)  The Austin Fanzine Project will be using a new EAC-CPF export which I'll deliver by mid-January.

OpenSourceIndexing / FreeREG 2

Most of my work this year has been focused on improving the new search engine for the twenty-six million church register entries the FreeREG organization has assembled in CSV files over the last decade and a half.  In the spring, I integrated the parsed CSV records into the search engine and converted our ORM to Mongoid.  I also launched the Open Source Indexing GitHub page to rally developers around the project and began collecting case studies from historical and genealogical organizations.

In May, I added a parser for historical dates to the FreeREG search engine.  It handles split dates like "4 Jan 1688/9" and illegible date portions in UCF like "4 Jan 165_", preserving the verbatim transcription while still supporting correct searching and sorting.  Eventually I'll incorporate this into an antique_date gem for general use.
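To give a flavor of the approach, here's a minimal sketch in Ruby.  The class and method names are hypothetical illustrations, not the actual FreeREG code: the idea is simply to keep the verbatim string while deriving a sortable key.

```ruby
# Minimal sketch of parsing dates with Old Style/New Style split years
# ("4 Jan 1688/9") and UCF illegibility markers ("4 Jan 165_").
# Names here are hypothetical, not the FreeREG implementation.
MONTHS = %w[jan feb mar apr may jun jul aug sep oct nov dec].freeze

HistoricalDate = Struct.new(:verbatim, :day, :month, :year) do
  # Sortable key derived from the parsed parts.
  def sort_key
    [year, month || 0, day || 0]
  end
end

def parse_historical_date(text)
  day_s, month_s, year_s = text.strip.split(/\s+/)
  idx = MONTHS.index(month_s.downcase[0, 3])
  month = idx && idx + 1
  year =
    if year_s =~ %r{\A(\d{4})/\d{1,2}\z}
      # Split year: "1688/9" is 1688 Old Style, 1689 New Style;
      # sort under the modern (later) year.
      Regexp.last_match(1).to_i + 1
    else
      # A UCF underscore marks an illegible digit; treat it as 0
      # so "165_" sorts at the start of its possible range.
      year_s.tr('_', '0').to_i
    end
  HistoricalDate.new(text, day_s.to_i, month, year)
end
```

So a record keyed "4 Jan 1688/9" keeps its verbatim transcription but sorts under 1689, and "4 Jan 165_" sorts with the 1650s rather than being dropped.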

Most of the fall was spent adding GIS search capabilities to the search engine.   In fact, my last commit of the year added the ability to search for records within a radius of a place.  The new year will bring more developments on GIS features, since an effective and easy interface to a geocoded database is just as big a challenge as the geocoding logic itself.
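Radius search ultimately rests on great-circle distance.  Here's a self-contained sketch of the underlying math -- not the actual implementation, which pushes the query into the database's geospatial index rather than filtering in Ruby:

```ruby
# Great-circle (haversine) distance between two lat/long points, in km.
# A production system would use the database's geospatial index instead
# of filtering records in application code.
EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2)
  to_rad = ->(deg) { deg * Math::PI / 180 }
  dlat = to_rad.(lat2 - lat1)
  dlon = to_rad.(lon2 - lon1)
  a = Math.sin(dlat / 2)**2 +
      Math.cos(to_rad.(lat1)) * Math.cos(to_rad.(lat2)) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a))
end

# Filter geocoded records (hashes with :lat/:lon) to those within a
# radius of a centre point.
def within_radius(records, centre_lat, centre_lon, radius_km)
  records.select do |r|
    haversine_km(r[:lat], r[:lon], centre_lat, centre_lon) <= radius_km
  end
end
```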

Other Projects

In January I added a command-line wrapper to Autosplit, my library for automatically detecting the spine in a two-page flatbed scan and splitting the image into recto and verso halves.  In addition to making the tool more usable, this work added support for notebook-bound volumes, which must be split top-to-bottom rather than left-to-right.
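The core idea of spine detection can be illustrated by looking for the darkest column of pixels near the centre of the scan, where a book's gutter shadow falls.  This is a toy sketch, not Autosplit's actual algorithm:

```ruby
# Toy spine detector: given a grayscale image as an array of pixel rows
# (0 = black, 255 = white), find the darkest column near the centre and
# split the image there into left and right halves.
def find_spine(rows)
  width = rows.first.size
  # Only consider the middle third, where the gutter shadow usually falls.
  candidates = (width / 3)...(2 * width / 3)
  candidates.min_by { |x| rows.sum { |row| row[x] } }
end

def split_at_spine(rows)
  spine = find_spine(rows)
  left  = rows.map { |row| row[0...spine] }
  right = rows.map { |row| row[spine..-1] }
  [left, right]
end
```

Supporting notebook-bound volumes is then just a matter of transposing the image first, so the same column search splits top-to-bottom.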

For the iDigBio Augmenting OCR Hackathon in February, I worked on two exploratory software projects.  HandwritingDetection (code, write-up) analyzes OCR text to look for patterns characteristically produced when OCR tools encounter handwriting.    LabelExtraction (code, write-up) parses OCR-generated bounding boxes and text to identify labels on specimen images.  To my delight, in October part of this second tool was generalized by Matt Christy at the IDHMC to illustrate OCR bounding boxes for the eMOP project's work tuning OCR algorithms for Early Modern English books.
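As a rough illustration of the kind of heuristic HandwritingDetection relies on (my simplified sketch, not the hackathon code): handwriting tends to come out of OCR as short, punctuation-riddled tokens, so one simple signal is the fraction of "noise" tokens in the output.

```ruby
# Toy heuristic: flag OCR output as likely handwriting when a large
# fraction of its tokens are "noisy" -- very short, or mixing letters
# with digits and punctuation in ways real words rarely do.
NOISE_TOKEN = /\A(?:\S{1,2}|\S*[^a-zA-Z\s]\S*[^a-zA-Z\s]\S*)\z/

def noise_ratio(ocr_text)
  tokens = ocr_text.split
  return 0.0 if tokens.empty?
  tokens.count { |t| t =~ NOISE_TOKEN }.to_f / tokens.size
end

def likely_handwriting?(ocr_text, threshold = 0.5)
  noise_ratio(ocr_text) > threshold
end
```

A real detector would combine several such signals (dictionary-word ratio, line-length variance, and so on), but even this crude ratio separates clean print OCR from the garbage strings that handwriting produces.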

In June and July, I started working on the Digital Austin Papers, contract development work for Andrew Torget at the University of North Texas.  This was what freelancers call a "rescue" project, as the digital edition software had been mostly written but was still in an exploratory state when the previous programmer left.  My job was to triage features, then turn off anything half-done and non-essential, complete anything half-done and essential, and QA and polish core pieces that worked well.  I think we're all pretty happy with the results, and hope to push the site to production in early 2014.  I'm particularly excited about exposing the TEI XML through the delivery system as well as via GitHub for bulk re-use.

Also in June, I worked on a pro bono project with the Civil War-era census and service records from Pittsylvania County, Virginia, which were collected by Jeff McClurken in his research.  My goal is to make the PittsylvaniaCivilWarVets database freely available for both public and scholarly use.  Most of the work remaining here is HTML/CSS formatting, and I'd welcome volunteers to help with that.

In November, I contributed some modifications to Lincoln Mullen's Omeka client for ruby.  The client should now support read-only interactions with the Omeka API for files, as well as being a bit more robust.

December offered the opportunity to spend a couple of days building a tool for reconciling multi-keyed transcripts produced from the NotesFromNature citizen science UI.  One of the things this effort taught me was how difficult it is to find corresponding transcripts to reconcile -- a very different problem from reconciliation itself.  The project is over, but ReconciliationUI is still deployed on the development site.
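Once corresponding transcripts are in hand, field-by-field reconciliation itself can be as simple as a majority vote across keyers, with ties kicked out for human arbitration.  A sketch on that model (not the ReconciliationUI code):

```ruby
# Toy reconciliation: given several keyed transcripts of the same image
# (hashes of field => value), take the majority value per field and
# flag fields with no majority for human arbitration.
def reconcile(transcripts)
  fields = transcripts.flat_map(&:keys).uniq
  fields.each_with_object({}) do |field, result|
    votes = transcripts.map { |t| t[field] }.compact.tally
    winner, count = votes.max_by { |_, c| c }
    result[field] =
      count && count > transcripts.size / 2 ? winner : :needs_arbitration
  end
end
```

With three keyers, two matching values win a field outright; anything else goes to an arbitrator, which is roughly how FamilySearch Indexing's arbitration queue works.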


February 13-15 -- iDigBio Augmenting OCR Hackathon at the Botanical Research Institute of Texas.  "Improving OCR Inputs from OCR Outputs?" (See below.)

February 26 -- Interview with Ngoni Munyaradzi of the University of Cape Town.  See our discussion of his work with Bushman languages of southern Africa.

March 20-24 -- RootsTech in Salt Lake City.  "Introduction to Regular Expressions"

April 24-28 -- International Colloquium Itinera Nova in Leuven, Belgium.  "Itinera Nova in the World(s) of Crowdsourcing and TEI". 

May 7-8 -- Texas Conference on Digital Libraries in Austin, Texas.  I was so impressed with TCDL when Katheryn Stallard and I presented in 2012 that I attended again this year.  While I was disappointed to miss Jennifer Hecker's presentation on the Austin Fanzine Project, I was so impressed with Nicholas Woodward's talk in the same time slot that I talked him into writing it up as a guest post.

May 22-24 -- Society of Southwestern Archivists Meeting in Austin, Texas.  On a fun panel with Jennifer Hecker and Micah Erwin, I presented "Choosing Crowdsourced Transcription Platforms".

July 11-14 -- Social Digital Scholarly Editing at the University of Saskatchewan.  A truly amazing conference.  My talk: "The Collaborative Future of Amateur Editions".

July 16-20 -- Digital Humanities at the University of Nebraska, Lincoln.  Panel "Text Theory, Digital Document, and the Practice of Digital Editions".  My brief talk discussed the importance of blending both theoretical rigor and good usability into editorial tools.

July 23 -- Interview with Sarah Allen, Presidential Innovation Fellow at the Smithsonian Institution.  Sarah's notes are at her blog Ultrasaurus under the posts "Why Crowdsourced Transcription?" and "Crowdsourced Transcription Landscape".

September 12 -- University of Southern Mississippi. "Crowdsourcing and Transcription".  An introduction to crowdsourced transcription for a general audience.

September 20 -- Interview with Nathan Raab for Forbes.com.  Nathan and I had a great conversation, although his article "Crowdsourcing Technology Offers Organizations New Ways to Engage Public in History" was mostly finished by that point, so my contributions were minor.  His focus on the engagement and outreach aspects of crowdsourcing and its implications for fundraising is one to watch in 2014.

September 25 -- Wisconsin Historical Society.  "The Crowdsourced Transcription Landscape".  Same presentation as USM, with minor changes based on their questions.  Contents: 1. Methodological and community origins.  2. Volunteer demographics and motivations.  3. Accuracy.  4. Case study: Harry Ransom Center Manuscript Fragments.  5. Case study: Itinera Nova at Stadarchief Leuven.

September 26-27 -- Midwest Archives Conference Fall Symposium in Green Bay, Wisconsin.  "Crowdsourcing Transcription with Open Source Software".  1. Overview: why archives are crowdsourcing transcription.  2. Selection criteria for choosing a transcription platform.  3. On-site tools: Scripto, Bentham Transcription Desk, NARA Transcribr Drupal Module, Zooniverse Scribe.  4. Hosted tools deep-dive: Virtual Transcription Laboratory, Wikisource, FromThePage.

October 9-10 -- THATCamp Leadership at George Mason University.  In "Show Me Your Data", Jeff McClurken and I talked about the issues that have come up in our collaboration to put online the database he developed for his book, Take Care of the Living.  See my summary or the expanded notes.

November 1-2 -- Texas State Genealogy Society Conference in Round Rock, Texas.  Attempting to explore public interest in transcribing their own family documents, I set up as an exhibitor, striking up conversations with attendees and demoing FromThePage.  The minority of attendees who possessed family papers were receptive, and in some cases enthusiastic about producing amateur editions.  Many of them had already scanned in their family documents and were wondering what to do next.  That said, privacy and access control were very big concerns -- especially with more recent material which mentioned living people.

November 7 -- THATCamp Digital Humanities & Libraries in Austin, Texas. Great conversations about CMS APIs and GIS visualization tools.

November 19-20 -- Duke University.  I worked with my hosts at the Duke Collaboratory for Classics Computing to transcribe a 19th-century travel diary using FromThePage, then spoke on "The Landscape of Crowdsourcing and Transcription", an expansion of my talks at USM and WHS.  (See a longer write-up and video.)

December 17-20 -- iDigBio Citizen Science Hackathon.  Due to schedule conflicts, I wasn't able to attend this in person, but followed the conversations on the wiki and the collaborative Google docs.  For the hackathon, I built ReconciliationUI, a Ruby on Rails app for reconciling different NotesFromNature-produced transcripts of the same image on the model of FamilySearch Indexing's arbitration tool.


All these projects promise to keep me busy in the new year, though I anticipate taking on more development work in the summer and fall.  If you're interested in collaborating with me in 2014--whether to give a talk, work on a software project, or just chat about crowdsourcing and transcription--please get in touch.

Saturday, November 23, 2013

"The Landscape of Crowdsourcing and Transcription" at Duke University

I spent part of this week at Duke University with the Duke Collaboratory for Classics Computing -- Josh Sosin, Hugh Cayless, and Ryan Baumann. We discussed ideas for mobile epigraphy applications, argued about text encoding, and did some hacking. We loaded an instance of FromThePage onto the DC3's development machine and seeded it with the 1859 journal of Viscountess Emily Anne Beaufort Smyth Strangford (part of Duke Libraries' amazing collection of Women's Travel Diaries). Transcribing six pages of her tour through Smyrna and Syria together suggested some exciting enhancements for the transcription tool, revealing a few bugs along the way. I'm really looking forward to collaborating with the DC3 on this project.

On Wednesday, I gave an introductory talk on crowdsourced manuscript transcription at the Perkins Library: "The Landscape of Crowdsourcing and Transcription":
One of the most popular applications of crowdsourcing to cultural heritage is transcription. Since OCR software doesn’t recognize handwriting, human volunteers are converting letters, diaries, and log books into formats that can be read, mined, searched, and used to improve collection metadata. But cultural heritage institutions aren’t the only organizations working with handwritten material, and many innovations are happening within investigative journalism, citizen science, and genealogy.
This talk will present an overview of the landscape of crowdsourced transcription: where it came from, who’s doing it, and the kinds of contributions their volunteers make, followed by a discussion of motivation, participation, recruitment, and quality controls.
The talk and visit got a nice write-up in Duke Today, which includes this quote by Josh Sosin:
Sosin said that although many students and professors visit the library's collections and partially transcribe the sources that are pertinent to their research, nearly all of these transcripts disappear once the researchers leave the library.
"Scholars or students come to the Rubenstein, check out these precious materials, they transcribe and develop all sorts of interesting ideas about them," Sosin said. "Then they take their notebooks out of the library and we lose all the extra value-added materials developed by these students. If we can host a platform for students and scholars to share their notes and ideas on our collections, the library's base of knowledge will grow with every term paper or book that our scholars produce."
Video of "The Landscape of Crowdsourcing and Transcription" (by Ryan Baumann):

Slides from the talk:

Previous versions of this talk were delivered at the University of Southern Mississippi (2013-09-12) and the Wisconsin Historical Society (2013-09-25).  This version differs substantially in its discussion of quality control mechanisms (on the video from 26:15 through 31:30, slides 37-40), an addition suggested by questions posed at USM and WHS.

Friday, October 25, 2013

Feature: TEI-XML Export

How do you get the data out?

This is a question I hear pretty often, particularly from professional archivists.  If an institution and its users have put the effort into creating digital editions on FromThePage, how can they pull the transcripts out of FromThePage to back them up, repurpose them, or import them into other systems?

This spring, I created an XHTML exporter that will generate a single-page XHTML file containing transcripts of a work's pages, their version history, all articles written about subjects within the work, and internally-linked indices between subjects and pages.  Inspired by conversations at the TEI and SDSE conferences and informed by my TEI work for a client project, I decided to explore a more detailed export in TEI.

This is the result, posted on github for discussion:
Zenas Matthews' Mexican War Diary was scanned and posted by Southwestern University's Smith Library Special Collections.  It was transcribed, indexed, and annotated by Scott Patrick, a retired petroleum worker from Houston.

Julia Brumfield's 1919 Diary was scanned and posted by me, transcribed largely by volunteer Linda Tucker, and indexed and annotated by me.

I requested comment on the TEI mailing list (see the thread "Draft TEI Export from FromThePage"), and got a lot of really helpful, generous feedback both on- and off-list.  It's obvious that I've got more work to do for certain kinds of texts--which will probably involve creating a section header notation in my wiki mark-up--but I'm pretty pleased with the results.

One of the most exciting possibilities of TEI export is interoperability with other systems.  I'd been interested in pushing FromThePage editions to TAPAS, but after I posted the TEI-L announcement, Peter Robinson pulled some of the exports into Textual Communities.  We're exploring a way to connect the two systems, which might give editors the opportunity to do the sophisticated TEI editing and textual scholarship supported by Textual Communities starting from the simple UI and powerful indexing of FromThePage.   I can imagine an ecosystem of tools good at OCR correction, genetic mark-up, display and analysis of correspondence, amateur-accessible UIs, or preservation -- all focusing on their strengths and communicating via TEI-XML.

I'm interested in more suggestions for ways to improve the exports, new things to do with TEI, or systems to explore integration options before I deploy the export feature on production. 

Sunday, October 20, 2013

A Gresham's Law for Crowdsourcing and Scholarship?

This is a comment I wanted to make at Neil Fraistat's "Participatory DH" session (proposal, notes) at THATCamp Leadership, but ended up making on Twitter instead.

Much of the discussion in the first half of the session focused on the qualitative difference between the activities we ask amateurs to do and the activities performed by scholars.  One concern voiced was that we're not asking "citizen scholars" to do real scholarly work, yet we label their activity scholarship anyway -- a concern I share with regard to editing.  If most crowdsourcing projects ask amateurs to do little more than wash test tubes, where are the projects that solicit scholarly interpretation?

The Harry Ransom Center's Manuscript Fragments Project is just such a crowdsourcing project, and I think the results may be disquieting.  In this project, fragments of medieval manuscripts reused as binding for printed books are photographed and posted on Flickr.  Volunteers use the comments to identify the fragments, discussing the scribal hand and researching the source texts. I'd argue that while this does not duplicate the full range of an academic medievalist's scholarly activities, it's certainly not just "bottle-washing" either.

The project has been very successful.  (See organizer Micah Erwin's talks for details.)  Most of the contributions to the project have been made on Flickr in the comments by a few "super volunteers" -- retired rare book dealers and graduate students among them.  However, around 20% of the identifications were made by professional medievalists who learned about the project, visited the Flickr site, and then called or emailed the project organizer.  None of their contributions were made on the public Flickr forum at all.

So why did professional scholars avoid contributing in public?  I related this on Twitter, and got some interesting suggestions.
Many of these suggest a sort of Gresham's Law of crowdsourcing, in which inviting the public to participate in an activity lowers that activity's status, driving out professionals concerned with their reputation. 

There's a more reassuring explanation as well -- many people with domain expertise still aren't very comfortable with technology.  Asking them to use a public forum puts additional pressure on them, as any mistakes in typing, encoding, or using the forum will be public and likely permanent.  This challenge is not confined to professionals, either -- I receive commentary on the Julia Brumfield Diaries via email from people without high school degrees, who have no professional reputation to protect.

Wednesday, July 24, 2013

University of Delaware and Cecil County Historical Society on FromThePage

Over the last few months, the University of Delaware and the Cecil County Historical Society have been using FromThePage to transcribe the diary of a minister serving in the American Civil War.  They're using the project to expose undergraduates to primary sources while also improving access to an important local history document.

The county has documented the process with an extensive post on the Cecil County Historical Society Blog, which was picked up by the Cecil Daily.

The university also put together a lovely video providing background on the project and interviewing students and faculty members involved in the project:

One of the things I find most interesting about the project is the collaboration between digital humanities-focused university faculty and the county historical society:
Kasey Grier, director of the Museum Studies Program and the History Media Center at the university, says the transcription will be done by students in a process called “crowd sourcing.”

“Crowd sourcing,” according to Grier, “is when students in remote locations, review the handwritten text and try their hand at transcribing it. They then submit their contributions which are reviewed and put up online. Eventually, all of the diary entries will be available for anyone to access and read.”
Historical Society of Cecil County President Paul Newton says the society welcomes this collaboration with the University of Delaware and hopes to strengthen it because it broadens the society’s horizons and reach.
“The university’s focus is in the area of the digital humanities, which allows us to take largely unused and un-accessed collections and get the material out to a broader audience for study. It is also a preservation method as it reduces handling and makes interpretation much easier,” Grier said.
 You can see the Joseph Brown Diary and the students' work on it at the project site on FromThePage.com.

Saturday, July 13, 2013

The Collaborative Future of Amateur Editions

This is the transcript of my talk at Social Digital Scholarly Editing at the University of Saskatchewan in Saskatoon on July 11 2013.
I'm Ben Brumfield.  I'm not a scholarly editor, I'm an amateur editor and professional software developer.  Most of the talks that I give talk about crowdsourcing, and crowdsourcing manuscript transcription, and how to get people involved. I'm not talking about that today -- I'm here to talk about amateur editions.

So let's talk about the state of amateur editions as it was, as it is now, as it may be, and how that relates to the people in this room.
Let's start with a quote from the past.  This was written in 1996, representing what I think may be a familiar sort of consensus [scholarly] opinion about the quality of amateur editions, which can be summed up in the word "ewww!"
So what's going on now?  Before I start looking at individual examples of amateur editions, let's define--for the purpose of this talk--what an amateur edition is.

Ordinarily people will be talking about three different things:
  • They can be talking about projects like Paul's, in which you have an institution that is organizing and running the project, but all the transcription, editing, and annotation is done by members of the public.
  • Or, they can be talking about organizations like FreeREG, a client of mine -- a genealogy organization in the UK which is transcribing all the parish registers of baptisms, marriages, and burials from the Reformation up to 1837.  In that case, all the material--all the documents--are held at local records offices and archives, who in many cases are quite hostile to the volunteer attempt to put these things online.  Nevertheless, over the last fifteen years, they've managed to transcribe twenty-four million of these records, and are still going strong.
  • Finally, they can be talking about amateur-run editions of amateur-held documents.  These are cases like me working on my great-great grandmother's diaries, which is what got me into this world [of editing].
I'm going to limit that [definition] slightly and get rid of crowdsourcing.  That's not what I want to talk about right now.  I don't want to talk about projects that have the guiding hand of an institutional authority, whether that's an archive or a [scholarly] editor.
So let's take a look at amateur editions.  Here's a site called Soldier Studies.  Soldier Studies is entirely amateur-run.  It's organized by a high-school history teacher who got really involved in trying to rescue documents from the ephemera trade.
The sources of the transcripts of correspondence from the American Civil War are documents that are being sold on E-Bay.  He sees the documents that are passing through--and many of them he recognizes as important, as an amateur military historian--and he says, I can't purchase all of these, and I don't belong to an institution that can purchase them. Furthermore, I'm not sure that it's ethical to deal in this ephemera trade--there is some correlation to the antiquities trade--but wouldn't it be great if we could transcribe the documents themselves and just save those, so that as they pass from a vendor to a collector, some of the rest of us can read what's on these documents?
So he set up this site in which users who have access to these transcripts can upload letters.  They upload these transcripts, and there's some basic metadata about locations and subjects that makes the whole thing searchable.  
But the things that I think people in here--and I myself--will be critical about are the transcription conventions that he chose, which are essentially none.  He says, correspondence can be entered as-is--so maybe you want to do a verbatim transcript, but maybe not--and the search engines will be able to handle it.

A little bit more shocking is that -- you know, he's dealing with people who have scans--they have facsimile images--so he says, we're going to use that.  Send us the first page, so that we know that you're not making this piece of correspondence up completely, fabricating it out of whole cloth. 
So that's not a facsimile edition, and we don't have transcription conventions.  He has this caveat, in which he explains that this [site] is reliable because we have "the first page of the document attached to the text transcription as verification that it was transcribed from that source."  So you'll be able to read one page of facsimile from this transcript you have.  We do our best, we're confident, so use them with confidence, but we can't guarantee that things are going to be transcribed validly.

Okay, so how much use is that to a researcher? 
This puts me in mind of Peter Shillingsburg's "Dank Cellar of Electronic Texts", in which he talks about the world "being overwhelmed by texts of unknown provenance, with unknown corruptions, representing unidentified or misidentified versions."

He's talking about things like Project Gutenberg, but that's pretty much what we're dealing with right here.  How much confidence could a historian place in the material on this site?  I'm not sure.

Here's an example of an amateur edition which is in a noble cause, but which is really more ammunition for the earlier quote.
So what about amateur editions that are done well?  This is the Papa's Diary Project, which is a 1924 diary of a Jewish immigrant to New York, transcribed by his grandson.

What's interesting about this -- he's just using Blogger, but he's doing a very effective job of communicating to his reader:
So here is a six-word entry.  We have the facsimile--we can compare and tell [the transcript] is right: "At Kessler's Theater.  Enjoyed Kreuzer Sonata."

So the amateur who's putting this up goes through and explains what Kessler's theater is, who Kessler was.
Later on down in that entry, he explains that Kessler himself died, and the Kreuzer Sonata is what he died listening to.  Further down the page you can listen to the Kreuzer Sonata yourself.

So he's taken this six-word diary entry and turned it into something that's fascinating, compelling reading.  It was picked up by the New York Times at one point, because people got really excited about this.
Another thing that amateurs do well is collaborate.  Again: Papa's Diary Project.  Here is an entry in which the diarist transcribed a poem called "Light". 
Here in the comments to that entry, we see that Jerroleen Sorrensen has volunteered: Here's where you can find [the poem] in this [contemporary] anthology, and, by the way, the title of the poem is not "Light", but "The Night Has a Thousand Eyes".

So we have people in the comments who are going off and doing research and contributing.
I've seen this myself.  When I first started work on FromThePage, my own crowdsourced transcription tool, I invited friends of mine to do beta testing.

I started off with an edition that I was creating based on an amateur print edition of the same diary from fifteen years previously.

If you look at this note here, what you see is Bryan Galloway looking over the facsimile and seeing this strange "Miss Smith sent the drugg... something" and correcting the transcript--which originally said "drugs"--saying, Well actually that word might be "drugget", and "drugget" is, if you look on Wikipedia, is a coarse woolen fabric.  Which--since it's January and they're working with [tobacco] plant-beds--that's probably what it is.

Well, I had no idea--nobody who's read this had any idea--but here's somebody who's going through and doing this proofreading, and he's doing research and correcting the transcription and annotating at the same time.
Another thing that volunteers do well is translate.  This is the Kriegstagebuch von Dieter Finzen, who was a soldier in World War I, and then was drafted in World War II.  This is being run by a group of volunteers, primarily in Germany.

What I want to point out is, that here is the entry for New Year's Day, 1916.  They originally post the German, and then they have volunteers who go online and translate the entry into English, French, and Italian.

So now, even though my German is not so hot, I can tell that they were stuck drinking grenade water.
So, what's the difference?

What's the difference between things that amateurs seem to be doing poorly, and things that they're doing well?

I think that it comes down to something that Gavin Robinson identified in a blog post that he wrote about six years ago about the difference between professional historians/academic historians and amateur historians.  What he essentially says is that professionals--particularly academics, but most professionals--are particularly concerned with theory.  They're concerned with their methodologies and with documenting their methodologies.

This is something that amateurs, in many cases, are not concerned with -- don't know exists -- maybe have never even been exposed to.
So, based on that, let's talk about the future.

How can we get amateurs--doing amateur editions on their own--to move from the things that they're doing well and poorly to being able to do everything well that's relevant to researchers' needs?

I see three major challenges to high-quality amateur editions.

The first is the one I really want to involve this community in: ignorance of standards.  The idea that you might actually include facsimiles of every page with your transcription -- that's a standard.  I'm not talking about standards like TEI -- I'd love for amateur editions to be elevated to the point that print editions were in 1950 -- we're just talking about some basics here.

The second and third challenges are lack of community and lack of a platform.
So let's talk about standards.

How does an amateur learn about editorial methodologies?  How do they learn about emendations?  How do they learn about these kinds of things?

Well, how do they learn about any other subject?  How do they learn about dendrochronology if they're interested in measuring tree rings? 

Let's go check out Wikipedia!
Wikipedia has a problem for most subjects, which is that Wikipedia is filled with jargon.  If you look up dendrochronology, you don't really have a starting place, a "how to".  If you look up the letter X, you get this wonderful description of how 'X' works in Catalan orthography, but it presupposes you being familiar with the International Phonetic Alphabet, and knowing that that thing which looks like an integral sign is actually the 'sh' sound.

Now if amateurs are trying to do research on scholarly editing and documentary editing in Wikipedia, they have a different problem:
There's nothing there. There's no article on documentary editing.
There's no article on scholarly editing.

These practices are invisible to amateurs
So if they can't find the material online that helps them understand how to encode and transcribe texts, where are they going to get it?

Well--going back to crowdsourcing--one example is by participation in crowdsourcing projects.  Crowdsourcing projects--yes, they are a source of labor; yes they are a way to do outreach about your material--but they are a way to train the public in editing.  And they are training the public in editing whether that's the goal of the transcription project or not.  The problem is that the teacher in this school is the transcription software--is the transcription website.

This means that the people who are teaching the public about transcription--the people who are teaching the public about editing--are people like me: developers.

So, how do developers learn about transcription?

Well, sometimes, as Paul [Flemons] mentioned, we just wing it.  If we're lucky, we find out about TEI, and we read the TEI Guidelines, and we discover that so much editorial practice is encoded in the TEI Guidelines that they're a huge resource.

If we happen to know the people in this room or the people who are meeting at the Association for Documentary Editing in Ann Arbor, we might discover traditional editorial resources like the Guide to Documentary Editing.  But that requires knowing that there's a term "Documentary Editing".

So what does that mean?  What that means is that people like me--developers with my level of knowledge or ignorance--are having a tremendous amount of influence on what the public is learning about editing.  And that influence does not just extend to projects that I run -- that influence extends to projects that archives and other institutions using my software run.  Because if an archive is trying to start a transcription project, and the archivist has no experience with scholarly editing, I say, You should pick some transcription conventions.  You should decide how to encode this.  Their response is, What do you think?  We've never done this before.  So I'm finding myself giving advice on editing.
Okay, moving on.

The other thing that amateurs need is community.

Community is important because community allows you to collaborate.  Communities evaluate each [member's] work and say, This is good.  This is bad.  Communities teach each [member].  And communities create standards -- you don't just hang out on Flickr to share your photos -- you hang out on Flickr to learn to be a better photographer.  People there will tell you how to be a better photographer.

We have no amateur editing community for people who happen to have an attic full of documents and want to know what to do with them.
So communities create standards, and we know this.  Let me quote my esteemed co-panelist, Melissa Terras, who, in her interviews with the managers of online museum collections--non-institutional online "museums"--found that people are coming up with "intuitive metadata" standards of their own, without any knowledge of, or reference to, existing procedures for creating traditional archival metadata.
The last big problem is that there's currently no platform for someone who has an attic full of documents that they want to edit.  They can upload their scans to Flickr, but Flickr is a terrible platform for transcription.

There's no platform that will guide them through best practices of editing.

What's worse, if there were one, it would need a "killer feature", which is what Julia Flanders describes in the TAPAS project as a compelling reason for people to contribute their transcripts and do their editing on a platform that enforces rigor and has some level of permanence to it -- rather than just slapping their transcripts up on a blog.
So, let's talk about the future.  In his proposal for this conference, Peter Robinson describes a utopia and dystopia: utopia in which textual scholars train the world in how to read documents, and a dystopia in which hordes of "well-meaning but ill-informed enthusiasts will strew the web willy-nilly with error-filled transcripts and annotations, burying good scholarship in rubbish." 
This is what I think is the road to dystopia:
  1. Crowdsourcing tools ignore documentary editing methodologies.  If you're transcribing using the Transcribe Bentham tool, you learn about TEI.  You learn from a good school.  But almost all of the other crowdsourced transcription tools don't have that.  Many of them don't even contain a place for the administrator to specify transcription conventions to their users!
  2. As a result, the world remains ignorant of the work of scholarly editors, because we're not finding you online--because you're invisible on Wikipedia--and we're not going to learn about your work through crowdsourcing.
  3. So you have the public get this attitude that, well, editing is easy -- type what you see.  Who needs an expert?  I think that's a little bit worrisome.
  4. The final thing--which, when I started working on this talk, was a sort of wild bogeyman--is the idea that new standards come into being without any reference whatsoever to the tradition of scholarly or documentary editing.
I thought that [idea] was kind of wild.  But, in March, an organization called the Family History Information Standards Organization--which is backed by Ancestry.com, the Federation of Genealogy Societies, BrightSolid, and a bunch of other organizations--announced a Call for Papers for standards for genealogists and family historians to use -- sometimes for representing family trees, sometimes for source documents.
And, in May, Call for Papers Submission number sixty-nine, "A Transcription Notation for Genealogy", was submitted.
Let's take a look at it.

Here we have what looks like a fairly traditional print notation.  It's probably okay.
What's a little bit more interesting, though, is the bibliography.

Where is your work in this bibliography?  It's not there.

Where is the Guide to Documentary Editing?  It's not there.

So here's a new standard that was proposed the month before last.  Now, I hope to respond to this--when I get the time--and suggest a few things that I've learned from people like you.  But these standards are forming, and these standards may become what the public thinks of as standards for editing.
All right, so let's talk about the road to utopia.

The road to the utopia that Peter described I see as in part through partnerships between amateurs and professionals:  you get amateurs participating in projects that are well run -- that teach them useful things about editing and how to encode manuscripts.

Similarly, you get professionals participating in the public conversation, so that your methodologies are visible.   Certainly your editions are visible, but that doesn't mean that editing is visible.  So maybe someone here wants to respond to that FHISO request, or maybe they just want to release guides to editing as Open Access.

As a result, amateurs produce higher-quality editions on their own, so that they're more useful for other researchers; so that they're verifiable.

And then, amateurs themselves become advocates -- not just for their material and the materials they're working on through crowdsourcing projects, but for editing as a discipline.

So that's what I think is the road to utopia.
So what about the past?

Back in Shillingsburg's "Dank Cellar" paper, he describes the problems with the e-texts that he's seeing, and he really encourages scholarly editors not to worry about it -- to disengage -- [and] instead to focus on coming up with methodologies--and again, this is 2006--for creating digital editions.  He says that these aren't well understood yet.  Let's not get distracted by these [amateur] things -- let's focus on what's involved in making and distributing digital editions.

Is he still right?  I don't know.

Maybe--if we're in the post-digital age--it's time to re-engage.

Sunday, June 2, 2013

Crowdsourcing + Machine Learning: Nicholas Woodward at TCDL

I was so impressed by Nicholas Woodward's presentation at TCDL this year that I asked him if I could share "Crowdsourcing + Machine Learning: Building an Application to Convert Scanned Documents to Text" on this blog.

Hi. My name is Nicholas Woodward, and I am a Software Developer for the University of Texas Libraries. Ben Brumfield has been so kind as to offer me an opportunity to write a guest post on his blog about my approach for transcribing large scanned document collections that combines crowdsourcing and computer vision. I presented my application at the Texas Conference on Digital Libraries on May 7th, 2013, and the slides from the presentation are available on TCDL’s website. The purpose of this post is to introduce my approach along with a test collection and preliminary results. I’ll conclude with a discussion of potential avenues for future work.

Before we delve into algorithms for computer vision and what-not, I’d first like to say a word about the collection used in this project and why I think it’s important to look for new ways to complement crowdsourced transcription. The Guatemalan National Police Historical Archive (or AHPN, in Spanish) contains the records of the Guatemalan National Police from 1882 to 2005. It is estimated that AHPN contains more than 80 million pages of documents (8,000 linear meters), such as handwritten journals and ledgers, birth certificate and marriage license forms, identification cards and typewritten letters. To date, the AHPN staff have processed and digitized approximately 14 million pages of the collection, and they are publicly available in a digital repository that was developed by UT Libraries.

While unique for its size, AHPN is representative of an increasingly common problem in the humanities and social sciences. The nature of the original documents precludes any economical OCR solution on the scanned images (See below), and the immense size of the collection makes page-by-page transcription highly impractical, even when using a crowdsourcing approach. Additionally, the collection does not contain sufficient metadata to support browsing via commonly used traits, such as titles or authors of documents.

These characteristics of AHPN informed my idea to develop a different method for transcribing a large scanned document collection that draws on the work of popular crowdsourcing transcription tools such as FromThePage, Scripto and Transcribe Bentham. These tools allow users to transcribe individual records of a collection, maintaining quality control largely through redundancy and error checking.

The drawback of this approach is that the results of crowdsourcing apply only to the document being transcribed.  In contrast, my approach looks to break documents up into individual words, with the idea that, though no two documents are exactly alike, they are likely to contain similar words. And across an entire corpus, particularly very large ones such as AHPN, words are likely to appear many times. Consequently, if users transcribe the words of one document, then I can use image matching algorithms to find other images of the same words and apply the crowdsourced transcription to the new images. The process looks like this:
  1. Segment scanned documents into words
  2. Crowdsource the transcription of a fraction of those words
  3. Use image matching to pair images of transcribed words with unknown images
  4. In the case of a match, associate the crowdsourced text with the document containing the unknown image
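The four steps can be sketched as a toy pipeline in pure Python. Here, word images are stood in for by strings, the image names are made up, and `images_match` is a placeholder for the actual OpenCV-based matching workflow described later in the post:

```python
def images_match(img_a, img_b):
    # Placeholder: the real matching compares dimensions, histograms,
    # and SURF interest points.  In this toy, identical "images" match.
    return img_a == img_b

def propagate_transcriptions(transcribed, unknown_images):
    """transcribed: dict mapping word-image -> crowdsourced text.
    unknown_images: word images not yet transcribed.
    Returns a dict labeling each matched unknown image (step 4)."""
    labels = {}
    for img in unknown_images:
        for known_img, text in transcribed.items():
            if images_match(img, known_img):
                labels[img] = text
                break
    return labels

# Hypothetical example: two crowdsourced words, two unknown images.
crowd = {"img_placas_1": "placas", "img_policia_1": "policia"}
result = propagate_transcriptions(crowd, ["img_placas_1", "img_desconocida"])
# result == {"img_placas_1": "placas"}; the unmatched image stays unlabeled
```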
Step 1 attempts to segment images like so.
The key point here is that, due to a host of factors (smudges, speckles, light text, poor typewriters, etc.), the segmentation will not be 100% accurate. This is OK, because the goal is really only to get as many words as possible, understanding that if we can successfully capture the types of terms that users of the collection will typically search for, e.g. dates or names of people, places, and organizations, then the online version of AHPN will become much more useful than it is now. Here are a few typical examples of documents from AHPN with the segmentation algorithm I developed using OpenCV libraries.
The result is a folder of small images containing individual words.
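The post's actual segmenter is contour-based and built on OpenCV; as a simplified stand-in for the idea, word segmentation can be sketched as a column projection in a few lines of numpy. This sketch (including the `min_gap` parameter) is illustrative, not the post's code:

```python
import numpy as np

def segment_words(line_img, min_gap=2):
    """Split a binary line image (1 = ink, 0 = background) into word
    images by cutting at runs of min_gap or more blank columns.
    A simplified stand-in for the contour-based OpenCV segmenter."""
    ink_cols = line_img.sum(axis=0) > 0   # columns containing any ink
    words, start, gap = [], None, 0
    for x, has_ink in enumerate(ink_cols):
        if has_ink:
            if start is None:
                start = x        # a new word begins here
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:   # gap wide enough: close the word
                words.append(line_img[:, start:x - gap + 1])
                start, gap = None, 0
    if start is not None:        # word running to the right edge
        words.append(line_img[:, start:])
    return words

# Synthetic "line" with two words: columns 0-2 and 6-8 contain ink.
demo = np.zeros((5, 12), dtype=int)
demo[:, 0:3] = 1
demo[:, 6:9] = 1
words = segment_words(demo)
# len(words) == 2; words[0].shape == (5, 3)
```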
The second step is to crowdsource the transcription of as many of these words as possible. I developed an online application using CakePHP and MySQL that allows users to select documents and then transcribe the words they see.

The crowdsourcing output from the testing phase of the project looked like this.
Because there were so few images per word, I abandoned the original plans to create a multi-class SVM classifier that would find matching words in bulk. Instead, I drew on OpenCV again to develop a workflow for image matching that consisted of six steps.
  1. Compare the width and height of each image
  2. Compare the histograms of each image
  3. Extract ‘interest points’ of each image using SURF algorithm
  4. Match interest points between them
  5. Calculate average distance between interest points
  6. If the average is below a certain threshold, consider the images a match
The first step is fairly intuitive. Longer images likely contain words with more letters than shorter images, and a different number of letters indicates they are clearly not the same word. This step is relatively basic, but it turns out that in many, many cases it obviates the subsequent more computationally intensive work.

The second step is also relatively straightforward. Image histograms are just the counts of pixels of different hues in an image. In a typical color image, a histogram could be the number of pixels of each color. In this case the scanned documents are grayscale TIFF images, and calculating histograms is simply a matter of adding up all the black and white pixels separately (See below). Two images of different words may contain the same number of letters but the letters are different shapes and so their ratio of black to white pixels will also differ. Note: this does not solve the problem of words with similar letters in a different order, e.g. “la” and “al”. This case is handled in the next step.

Steps 3 through 6 represent the most computationally intensive part of the entire matching process, and they are based on the Speeded Up Robust Features (SURF) implementation in OpenCV. The basic idea is to use the SURF algorithm to find the “interest points” of an image and then measure the distance between points in two separate images. Interest points are curves, blobs and points where two lines meet. Going this route is both a strength and a weakness of my approach. I’ll explain. Even if we don’t speak Spanish, it’s relatively easy to infer that the images below contain the same word, just in a different format. Whether they’re in all caps or all lowercase or the first letter is capitalized, the word is the same to a human.

But not to a computer.

And we see this when we use SURF to calculate the interest points of each image.

The final step to matching images involves computing the distance between interest points and finding the matches below a certain distance. In this case, I used the ratio test to eliminate poor interest point matches and then considered two images the same if their average distance was below 0.20. The example below attempts to match the word “placas” (plates) with several other words, and finds a match with another image of the word “placas”, despite the poor quality of the second image.
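Steps 4 through 6 -- the ratio test plus the average-distance threshold -- can be sketched over plain descriptor arrays in numpy. SURF itself lives in OpenCV's contrib modules, so the tiny toy descriptors here are stand-ins for SURF output; only the 0.20 threshold comes from the post, and the function name and ratio value are my own choices:

```python
import numpy as np

def match_by_ratio_test(desc_a, desc_b, ratio=0.75, threshold=0.20):
    """Match interest-point descriptors between two word images.
    desc_a, desc_b: (n, d) arrays of descriptors (n >= 2 for desc_b).
    The ratio test discards ambiguous nearest-neighbour matches;
    the images are declared the same word if the surviving matches'
    average distance falls below the threshold (0.20 in the post)."""
    kept = []
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # Ratio test: keep only clearly unambiguous matches.
        if best < ratio * second:
            kept.append(best)
    if not kept:
        return False, None
    avg = float(np.mean(kept))
    return avg < threshold, avg

# Toy descriptors: "same" is a slightly perturbed copy of a,
# "other" stands in for a different word's descriptors.
a = np.array([[0.0, 0.0], [1.0, 1.0]])
same = a + 0.05
other = a + 1.0
# match_by_ratio_test(a, same)  -> match (average distance well under 0.20)
# match_by_ratio_test(a, other) -> no match (average distance too large)
```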
The point here is that we will need to match each of these images separately, even though they contain the same word. This means we’ll need more images to train a classifier (or image match), and we will have to find every possible way that a word (WORD, Word, word, etc.) appears in a corpus and crowdsource the transcription of at least a few of them before we classify unknown words. Ergo more time and work to finish.

But it's also a strength of the approach, in at least two ways. First, the images above are Spanish words, but to a computer they’re just objects. They could really be in any language, and the same basic workflow would apply. The only adjustments would be recruiting crowdsourcers who understand the language (helpful for determining a word when it’s hard to make out and context helps) and tweaking the algorithm parameters in steps 3-6. Second, while the format of the word is not necessarily important for keyword searches, it is relevant to many computational research methods such as document clustering, classification and finding named entities.
The output of this approach on the test collection from AHPN is as follows:

There are several implications from these results. First, shorter images, i.e. those of one or two letters, basically bring us back to the same issues as OCR. In the case of AHPN the quality of the scanned documents varies, and in many instances there are just not enough interest points to differentiate between ‘a’, ‘o’ and ‘e’. So my approach may only be suitable for longer words. This may not be too serious an issue, since in most cases short words are also stopwords, which are generally not needed for either keyword searches or computational research. Second, without a doubt, I will need to do the crowdsourcing on a much larger scale. 2.33 images per word is not enough training data to create an image classifier capable of finding other images of that word. And there is also a greater need for transcription output, because this approach requires images of each word in all of its forms, along with the transcribed text.

The future steps of the project, then, must include an avenue for acquiring more crowdsourced transcription. I think the best approach for this will be to refactor the CakePHP code into modules for Omeka, Drupal and other popular open source CMSs. As with the crowdsourcing tools mentioned above (Scripto, FromThePage, etc.), I’ll need to enlist the users of particular collections who may have the most vested interest in seeing them become more accessible and functional. Another important component will be continued development on the algorithms for image segmentation and matching. I am interested in looking at how to segment handwritten text, especially cursive text, where it is particularly difficult to determine the spaces between words. Additionally, the image matching detailed above may be useful going forward because it is not particularly memory intensive, and the work can be divided into separate tasks for a distributed computing environment, maybe something along the lines of SETI@home. But with more output from crowdsourcing it would be good to incorporate either a Bag of Words or SVM classifier approach to process more words at a time. So there’s definitely plenty to do. Stay tuned!