Thursday, December 30, 2010

Two New Diary Transcription Projects

The last few weeks have seen the announcement of two new transcription projects. I'm particularly excited about them because--like FromThePage--their manuscripts are diaries and they plan to open the transcription tasks to the public.

Dear Diary announces the digitization of University of Iowa's Historic Iowa Children's Diaries:
We have a deep love for historic diaries as well, and we’re currently hard at work developing a site that will allow the public to help enhance our collections through “crowdsourcing” or collaborative transcription of diaries and other manuscript materials. Stay tuned!
A Look Inside J. Paul Getty’s Newly Digitized Diaries describes J. Paul Getty's diaries and their contents, and mentions in passing that
We will soon launch a website that will invite your participation to perform the transcriptions (via crowdsourcing), thus rendering the diaries keyword-searchable and dramatically improving their accessibility.
I will be very interested to follow these projects and see which transcription systems they use or develop.

Wednesday, December 15, 2010

NABPP Transcription User Survey Results

The ever-innovative North American Bird Phenology Program has just released the results of their user satisfaction survey. In addition to offering kudos to the NABPP for pleasing its users--who had transcribed 346,786 species observation cards as of November--I'd like to highlight some of the survey results -- after all, this is the first user survey for a transcription tool that I'm aware of.

This chart shows answers to the question "What inspires you to volunteer?":
Users overwhelmingly cite the importance of the program and their love of nature. The importance response obviously speaks to the "save the world" aspect of the NABPP's mission of tracking climate change. But how does a love of nature inspire someone to sit in front of a computer and decipher seventy thousand pieces of old handwriting? Based on my experience with the NABPP (and indeed with Julia Brumfield's diaries), I think the answer is simple: transcription is an immersive activity. We rarely read more deeply than when we transcribe a text, so the transcriber is transported to a different place and time in the same way as a reader lost in a novel or a player in a good video game.

While the whole survey is worth reading, one other response stood out for me. Answering the open-ended "how can we improve" question, one respondent requested examples of properly transcribed "difficult" cards on the transcription page. I know that I, as a tool-maker, tend to concentrate on helping users with the software I develop. However, transcription tools also need to provide users with non-technical guidance on paleographic challenges, unusual formatting, and other issues that may be unique to the material being transcribed. I'm not entirely sure how to accomplish this as a developer, other than by facilitating communication among transcribers, editors, and project coordinators.

Let me conclude by offering the NABPP my congratulations on their success and my thanks for their willingness to share these results with the rest of us.

Friday, December 10, 2010

Militieregisters.nl and Velehanden.nl

Militieregisters.nl is a new transcription project organized by the City Archive Amsterdam that plans to use crowdsourcing to index militia registers from several Dutch archives. It's quite ambitious, and there are a number of innovative features about the project I'd like to address. However, I haven't seen any English-language coverage of the project so I'll try to translate and summarize it as best as my limited Dutch and Google's imperfect algorithms allow before offering my own commentary.

With the project "many hands make light work", Stadsarchief Amsterdam will make all militia records searchable online through crowdsourcing -- not just inventories, but indexes.

About
To research how archives and online users can work together to improve access to the archives, the Stadsarchief Amsterdam has set up the "Many Hands" project. With this project, we want to create a platform where all Dutch archives can offer their scans to be indexed and where all archival users can contribute in exchange for fun, contacts, information, honor, scanned goods, and whatever else we can think of.

To ask the whole Netherlands to index, we must start with archives that are important to the whole Netherlands. As the first pilot, we have chosen the Militia Registers, but there will soon be more archival files to be indexed so that everyone can choose something within his interest and skill-level.

All Militia Registers Online

Militia registers contain the records of all boys who were entered for conscription into military service during almost the entire 19th and part of the 20th centuries. These records were created in the entire Netherlands and are kept in many national and municipal archives.

The militia records are eminently suitable for large-scale digitization. The records consist of printed sheets. This uniformity makes scanning easy and thus inexpensive. More importantly, this resource is interesting for anyone with Dutch ancestry. Therefore we expect many volunteers to lend a hand to help unlock this wonderful resource, and that the online indexes will eventually attract many visitors.

But the first step is to scan the records. Soon the scanning of approximately one million pages will begin as a start. The more records we have digitized, the cheaper the scanning becomes, and the more attractive the indexing project becomes to volunteers. The Stadsarchive therefore calls upon all Dutch archival institutions to join!
FAQ
  • At our institution, online scans are provided for free. Why should people pay for scans?
    Revenue from the sale of scans over the project's two-year duration is part of the project's financing. The budget is based on the rates used at the City Archives: €0.50 to €0.25 per scan, depending on the number of scans someone buys. We ask that participating institutions not sell their own scans or make them available for free while the project is underway. After the completion of the project, each institution may follow its own policy for providing the scans.
  • If we participate, who is the owner of the scans and index data?
    After production and payment, the scans will be delivered immediately to the institution which provided the militia records. The index information will also be supplied to the institutions after completion of the project. The institution remains the owner, but during the project period of approximately two years the material may not be used outside of the project.
  • What are the financial risks for participating archives?
    Participants pay only for their scans: the actual cost and preparation of the scanning process. The development and deployment of the indexing tool, volunteer recruitment, and two years of maintenance of the project website are funded by grants and by contributions from the Stadsarchief Amsterdam. There are no financial surprises.
  • What does the schedule for the project look like?
    On July 12 and September 13 we are organizing meetings with potential participants to answer your questions. Participants may sign up to join the project until October 1, 2010, so that the scanning process can get underway on that date. The tender process runs about two months, so a supplier can be contracted in 2010. In January 2011 we will start scanning, and volunteers can begin indexing in the spring. The sister site www.velehanden.nl--where the indexing will take place--will remain online for at least one year.
  • Will the indexing tool be developed as Open Source software?
    It is currently impossible to say whether the indexing tool will be developed as open source software. The primary concerns are finding the most cost-effective solution and ensuring that the software performs well and is user-friendly. The only hard requirement is the use of open standards for importing and exporting metadata, so that vendor independence is guaranteed.
RFP (Warning: very loose translation!)
Below are some ideas SAA has formulated regarding the functionality and sustainability of VeleHanden.nl:
  • Facilities for importing and managing scans, and for exporting data in XML format.
  • Scan viewer with advanced features.
  • Functionality to simultaneously run multiple projects for indexing, transcription, and translation of scans.
  • Features for organizing and managing data from volunteer groups and for selectively enabling features for participants and volunteer coordinators.
  • Features for communication between archival staff and volunteers, as well as for volunteers to provide support to each other.
  • Automated features for control of the data produced.
  • Rewards system (material and immaterial) for volunteers.
  • Support for many volunteers working in parallel, so that scans are processed quickly and effectively.
  • Facilities to search, view and share scans online.
Other Dutch bloggers have covered the unique approach that Stadsarchief Amsterdam envisions for volunteer motivation and project support: Militieregisters.nl users who want to download scans may either pay for them in cash or in labor, by indexing N scanned pages. Christian van der Ven's blog post Crowdsourcen rond militieregisters and the associated comment thread discuss this intensively and are worth reading in full. Here's a loosely-translated excerpt:
The project assumes that it cannot allow the volunteer to indicate whether he wants to index Zeeland or Groningen. The point is--in the words of the project leader--the Orange feeling: to see whether people across the country will volunteer rather than concentrating only on their own locality. Indexing people from their own village? Please, not that!

Well, since the last World Cup I'm feeling Orange again, but experience and research in archives teach that people everywhere are most interested in their own history: their own ancestors, their homes, and the surrounding area. The closer [the data], the more motivation to do something.

And if the purpose of this project is to build an indexing tool, to scan registers, and then to obtain indexes through crowdsourcing as quickly as possible, it seems to me that the public should be given what it wants: local resources, if desired. What I suggest is a menu of choices: do you want records from your own area? Do you want them only from a certain period? Or do you want them filtered by both time and place? That kind of choice will get as many people as possible to participate, I think.
My Observations:
  • The pay-or-transcribe approach to acquiring scans is really innovative. Offering people alternatives for supporting the project is a great way of serving the varied constituencies that make up the genealogical research community, giving cash-poor, time-rich users (like retirees) an easy way to access the project.
  • Although I have no experience in the subject, I suspect that this federated approach to digitization--taking structurally-similar material from regional archives and scanning/hosting it centrally--has a lot of possibilities.
  • Christian's criticism is quite valid, and drives right to the paradox of motivation in crowdsourcing: do you strive for breadth using external incentives like scoreboards and recognition, or do you strive for depth and cultivate passionate users through internal incentives like deep engagement with the source material? Volunteer motivation and the trade-offs involved are a fascinating topic, and I hope to do a whole post on it soon.
  • One potential flaw is that it will be very hard to charge to view the scans when transcribers must be able to see the scans to do their indexing. I gather that the randomization in VeleHanden will address this.
  • The budget described in the RFP is a maximum of €150,000. As a real-life software developer, it's hard for me to see how this would pay for building a transcription tool, index database, scan import tool, scan CMS, search database, and (since they expect to sell the scans users find) e-commerce -- and that includes running the servers, too!
  • This is yet another project that's transcribing structured data from tabular sources, which would benefit from the FamilySearch Indexer, if only it were open-source (or even for sale).

Saturday, December 4, 2010

Errata

Two important errata in previous posts are worth highlighting:

Wikisource for Manuscript Transcription

Commenters on my most recent post, "Wikisource for Manuscript Transcription", have pointed out that the Wikisource rule prohibiting the addition of unpublished works--thereby almost entirely prohibiting manuscript transcription projects--is specific to individual language domains. The English and French language Wikisource projects enforce the prohibition, but the German and Hebrew language Wikisource projects do not.

Dovi, a Wikipedia editor whom I used to bump into on Ancient Near Eastern language articles, points to his work on the Wikisource edition of Arukh Hashulchan, citing this example of simanim. Sadly, my Hebrew isn't up to the task of commenting on this effort.

I was delighted to see that the post also inspired a bit of commentary on the German-language Wikisource Skriptorium. In particular, WikiAnika seemed to agree with my criticism of flat transcription: "In a way, we still seem to be 'stuck' to the print form. Or, to quote a third party: 'Wikisource is a dead end -- you may find your way to an interesting text, but you can't get any further from there.'"

A Short Introduction to before_filter

In my 2007 post on Rails filters, I mentioned using filters to authenticate users:
Filters are called filters because they return a Boolean, and if that return value is false, the action is never called. You can use the logged_in? method of :acts_as_authenticated to prohibit access to non-users — just add before_filter :logged_in? to your controller class and you're set!
Thanks to some changes made in Rails 2.0, this is not just wrong but dangerously wrong. Rails now ignores filter return values, allowing unauthorized access to your actions if you followed my advice.

Because Rails no longer pays any attention to the return values of controller filters, I've had to replace all of my "return condition" statements with explicit redirects ("redirect_to somewhere unless condition").
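
For reference, here's a minimal sketch of the corrected pattern. It assumes the logged_in? helper from acts_as_authenticated and a login_path route, so the names will need adjusting for your own application:

    class ArticlesController < ApplicationController
      before_filter :require_login

      private

      # Rails 2.x ignores the filter's return value, so the filter must halt
      # the chain itself by redirecting (or rendering) when the check fails.
      def require_login
        unless logged_in?
          flash[:notice] = "Please log in to continue"
          redirect_to login_path
        end
      end
    end

In Rails 2.x, a filter that redirects or renders halts the rest of the chain on its own, so no return value is needed.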

Wednesday, July 21, 2010

Wikisource for Manuscript Transcription

Of the crowdsourcing projects that have real users doing manuscript transcription, one of the largest is an offshoot of Wikisource. ProofreadPage was an extension to MediaWiki created around 2006 on the French-language Wikisource as a Wikisource/Internet Archive replacement for Project Gutenberg's Distributed Proofreaders. They were taking DjVu files from the Internet Archive and using them as sources (via OCR and correction) for Wikisource pages. This spread to the other Wikisource sites around 2008, radically changing the way Wikisource worked. More recently the German Wikisource has started using ProofreadPage for letters, pamphlets, and broadsheets.


The best example of ProofreadPage for handwriting is Winkler's Remarks on the Russian Campaign 1812-1813. First, the presentation is lovely. They've dealt with a typographically difficult text and are presenting alternate typefaces, illustrations, and even marginalia in a clear way in the transcription. The page numbers link to the images of the pages, and they've come up with transcription conventions which are clearly presented at the top of the text. This is impressive for a volunteer-driven edition!

Technically, the Winkler example illustrates ProofreadPage's solution to a difficult problem: how to organize and display pages, sections, and works in the appropriate context. This is not an issue that I've encountered with FromThePage—the Julia Brumfield Diaries are organized with only one entry per page—but I've worried about it since XML is so poorly suited to represent overlapping markup. When viewing Winkler as a work, paragraphs span multiple manuscript pages but are aggregated seamlessly into the text: search for "sind den Sommer" and you'll find a paragraph with a page-break in the middle of it, indicated by the hyperlink "[23]". Clicking on the page in which that paragraph begins shows the page and page image in isolation, along with footnotes about the page source and page-specific information about the status of the transcription. This is accomplished by programmatically stitching pages together into the work display while excluding page-specific markup via a noinclude tag.

But the transcription of Winkler also highlights some weaknesses I see in ProofreadPage. All annotation is done via footnotes which—although they are embedded within the source—are a far cry from the kind of markup we're used to with TEI or indeed HTML. In fact, aside from the footnotes and page numbers, there are no hyperlinks in the displayed pages at all. The inadequacies of this for someone who wants extensive text markup are highlighted by this personal name index page — it's a hand-compiled index! Had the tool (or its users) relied on in-text markup, such an index could be compiled by mining the markup. Of course, the reason I'm critical here is that FromThePage was inspired by the possibilities offered by using wiki-links within text to annotate, analyze, edit and index, and I've been delighted by the results.

When I originally researched ProofreadPage, one question perplexed me: why aren't more manuscripts being transcribed on Wikisource? A lot has happened since I last participated in the Wikisource community in 2004, especially within the realm of formalized rules. There is now a rule on the English, French, and German Wikisource sites banning unpublished work. Apparently the goal was to discourage self-promoters from using the site for their own novels or crackpot theories, and it's pretty drastic. The English-language version specifies that sources must have been previously published on paper, and the French site has added "Ne publiez que des documents qui ont été déjà publiés ailleurs, sur papier" ("Publish only documents that have already been published elsewhere, on paper") to the edit form itself! It is a rare manuscript indeed that has already been published in a print form which may be OCRed but which is worth transcribing from handwriting anyway. As a result, I suspect that we're not likely to see much attention paid to transcription proper within the ProofreadPage code, short of a successful non-Wikisource MediaWiki/ProofreadPage project.

Aside from FromThePage (which is accepting new transcription projects!), ProofreadPage/MediaWiki is my favorite transcription tool. Its origins outside the English-language community, combined with Wikisource community policy, have obscured its utility for transcribing manuscripts, which is why I think it's been overlooked. It's got a lot of momentum behind it, and while it is still centered around OCR, I feel like it will work for many needs. Best of all, it's open source, so you can start a transcription project by setting up your own private Wikisource instance.

Thanks to Klaus Graf at Archivalia for much of the information in this article.

Tuesday, June 15, 2010

Facebook versus Twitter for Crowdsourcing Document Transcription

Last week, I posted a new document to FromThePage and inadvertently conducted a little experiment on how to publicize crowdsourcing. Sometime in the early 1970s, my uncle was on a hunting trip when he was driven into an old, abandoned building by a thunderstorm. While waiting for the weather to moderate, he found a few old documents -- two envelopes bearing Confederate stamps, one of which contained a letter. I photographed this letter and uploaded it to FromThePage while on vacation last week, and that's where the experiment begins.

I use both Facebook and Twitter, but post entirely different material to them. My 347 Facebook friends include most of my high school classmates, several of my friends and classmates from college, much of my extended family, and a few of my friends here in Austin. I mostly post status updates about my personal life, only occasionally sharing links to the Julia Brumfield diaries whenever I read an especially moving passage. My 344 Twitter followers are almost entirely people I've met at or through conferences like THATCamp08 or THATCamp Austin. They consist of academics, librarians, archivists, and programmers -- mostly ones who identify in some way with the "digital humanities" label. I usually tweet about technical/theoretical issues I've encountered in my own DH work. At least a few of my followers even run their own transcription software projects. Given the overlap between interesting content and FromThePage development, I decided to post news of the East Civil War letters to both systems.

My initial tweet/status--posted while I was still cropping the images--got similar responses from both systems. Two people on Twitter and five on Facebook replied, helping me resolve the letter's year to 1862. Here's what I posted on Facebook:

[Screenshot of the Facebook post.]

And here's the post on Twitter:

[Screenshot of the tweet.]

After I got one of the envelopes created in FromThePage, I tested the images out by posting again to Facebook. This update got no response.

[Screenshot of the Facebook post sharing the envelope.]

The next day, I uploaded the second envelope and letter, then posted to Twitter and Facebook while packing for our return trip.

Facebook:

[Screenshot of the Facebook post.]

Twitter:

[Screenshot of the tweet.]

This time the contrast in responses was striking. I got 3 click-throughs from Twitter in the first three days, and I'm not entirely sure that one of those wasn't me clicking the bit.ly link by accident. While my statistics aren't as good for Facebook click-throughs, there were at least 6 I could identify. More important, however, was the transcription activity -- which is the point of my crowdsourcing project, after all. Within 3 hours of posting the link on Facebook, one very-occasional user had contributed a transcription, and I'd gotten two personal emails requesting reminders of login credentials from other people who wanted to help with the letter.

What accounts for this difference? One possibility is that the archivists and humanists who comprise my Twitter followership are less likely to get excited about a previously-unpublished Civil War letter -- after all, many of them have their own stacks of unpublished material to transcribe. Another possibility is that the envelope link I posted on Facebook increased people's anticipation and engagement. However, I suspect that the most important difference is that the Facebook link itself was more compelling due to the inclusion of an image of the manuscript page. Images are just more compelling than dry text, and Facebook's thumbnail service draws potential volunteers in.

My conclusion is that it's worth the effort to build an easy Facebook sharing mechanism into any document crowdsourcing tool, especially if that mechanism provides the scanned document as the image thumbnail.
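
If you're building that sharing mechanism into a Rails app, a minimal sketch might be a helper that emits the hints Facebook's link scraper reads -- the image_src link and the newer og:image property. The helper and URL method names below are assumptions, not FromThePage's actual code:

    # app/helpers/sharing_helper.rb (hypothetical)
    module SharingHelper
      # Emit the thumbnail hints Facebook's link scraper reads.
      def facebook_share_tags(page)
        url = page_image_url(page)   # assumed helper returning the scan's URL
        %(<link rel="image_src" href="#{url}" />\n) +
          %(<meta property="og:image" content="#{url}" />)
      end
    end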

Tuesday, March 2, 2010

Feature plan for 2010

Three years ago, I laid out a plan for getting FromThePage to general availability. Since that time, I completed most of the features I thought necessary, gained some dedicated users, and saw the software used to transcribe and annotate over a thousand pages of Julia Brumfield's diaries. However, most of the second half of 2009 was spent using the product in my editorial capacity and developing features to support that effort, rather than moving FromThePage out of its status as a limited beta.

Here's what my top priorities are for 2010:

Release FromThePage as Free/Open Source Software.
I've written before about my position as a hobbyist-developer and my desire not to see my code become abandonware if I am no longer able to maintain it. Open Source seems to be the obvious solution, and despite my concerns about Open Access use, I am taking some good advice and publishing the source code under the GNU Affero GPL. I've taken some steps to do this, including migrating towards a GitHub repository, but hope to wait until I've made a few test drives and developed some installation instructions before I announce the release.

Complete PDF Generation and Publish-on-Demand Integration.
While I think that releasing FromThePage is the best way forward for the software itself, it doesn't get me any closer to accomplishing my original objective for the project -- sharing Julia Brumfield's diaries with her family. It's hard to balance these goals, but at this point I think that my efforts after the F/OSS release should be directed towards printable and print-on-demand formats for the diary transcriptions. I've built one proof-of-concept and I've settled on an output technology (RTeX), so all that remains is writing the code to do it.

I'll still work on other features as they occur to me, I'm still editing Julia's 1921 diary, and I'm still looking for other transcription projects to host, but these two goals will be the main focus of my development this year.

Monday, December 21, 2009

Feature: Related Pages

I've been thinking a lot about page-to-subject links lately as I edit and annotate Julia Brumfield's 1921 diary. While I've been able to exploit the links data structure in editing, printing, analyzing and displaying the texts, I really haven't viewed it as a way to navigate from one manuscript page to another. In fact, the linkages I've made between pages have been pretty boring -- next/previous page links and a table of contents are the limit. I'm using the page-to-subject links to connect subjects to each other, so why not pages?

The obvious answer is that the subjects which page A would have most in common with page B are the same ones it would have in common with nearly every other page in the collection. In the corpus I'm working with, the diarist mentions her son and daughter-in-law in 95% of pages, for the simple reason that she lives with them. If I choose two pages at random, I find that March 12, 1921 and August 12, 1919 both contain Ben and Jim doing agricultural work, Josie doing domestic work, and Julia's near-daily visit to Marvin's. The two pages are connected through those four subjects (as well as this similarly-disappointing "dinner"), but not in a way that is at all meaningful. So I decided that a page-to-page relatedness tool couldn't be built from the page-to-subject link data.

All that changed two weeks ago, when I was editing the 1921 diary and came across the mention of a "musick box". In trying to figure out whether or not Julia was referring to a phonograph by the term, I discovered that the string "musick box" occurred only two times: when the phonograph was ordered and the first time Julia heard it played. Each of these mentions shed so much light on the other that I was forced to re-evaluate how pages are connected through subjects. In particular, I was reminded of the "you and one other" recommendations that LibraryThing offers. This is a feature that finds other users with whom you share an obscure book. In this case, obscurity is defined as the book occurring only twice in the system: once in your library, and once in the other user's.

This would be a relatively easy feature to implement in FromThePage. When displaying a page, perform this algorithm:
  • For each subject link in the page, calculate how many times it is referenced within the collection, then
  • Sort those subjects by reference count, and
  • Take the 3 or 4 subject links with the lowest reference counts, and
  • Display the pages which link to those subjects.
For a really useful experience, I'd want to display keyword-in-context, showing a few words to explain the context in which that other occurrence of "musick box" appears.
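
Setting the keyword-in-context display aside, here's a rough sketch of the core lookup in Rails terms; the Page and Article names and their associations are simplifications of my actual schema:

    # Return pages related to this one through its rarest subjects.
    def related_pages(page, subject_limit = 3)
      # Rank this page's subjects by how often each is referenced in the collection.
      rare_subjects = page.articles.sort_by { |article| article.pages.count }
      # Keep the most obscure few, then gather the other pages that cite them.
      related = rare_subjects.first(subject_limit).map { |article| article.pages }
      related.flatten.uniq - [page]
    end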

Friday, June 26, 2009

Connecting With Readers

While editing and annotating Julia Brumfield's 1919 diary, I've tried to do research on the people who appear there. Who was Josie Carr's sister? Sites like FindAGrave.com can help, but the results may still be ambiguous: there are two Alice Woodings buried in the area, and either could be a match.

These questions could be resolved pretty easily through oral interviews -- most of the families mentioned in the diaries are still in the area, and a month spent knocking on doors could probably flesh out the networks of kinship I need for a complete annotation. However, that's really not time I have, and I can't imagine cold-calling strangers to ask nosy questions about their families -- I'm a computer programmer, after all.

It turns out that there might be an easier way. After Sara installed Google Analytics on FromThePage, I've been looking at referral log reports that show how people got to the site. Here's the keyword report for June, showing what keywords people were searching for when they found FromThePage:

Keyword                      Visits   Pages/Visit   Avg. Time on Site
"tup walker"                 21       12.47619      992.8571
"letcher craddock"           7        12.42857      890.1429
julia craddock brumfield     3        28            624.3333
juliacraddockbrumfield       3        74.33333      1385
"edwin mayhew"               2        7             117.5
"eva mae smith"              2        4             76.5
"josie carr" virginia        2        6.5           117
1918 candy                   2        4             40
clack stone hubbard          2        55.5          1146

These website visitors are fellow researchers, trying to track down the same people that I am. I've got them on my website, they're engaged -- sometimes deeply so -- with the texts and the subjects, but I don't know who they are, and they haven't contacted me. Here are a couple of ideas that might help:
  1. Add an introductory HTML block to the collection homepage. This would allow collection editors to explain their project, solicit help, and provide contact information.
  2. Add a 'contact us' footer to be displayed on every page of the collection, whether the user is viewing a subject article, reading a work, or viewing a manuscript page. Since people are finding the site via search engines, they're navigating directly to pages deep within a collection. We need to display 'about this project', 'contact us', or 'please help' messages on those pages.
One idea I think would not work is to build comment boxes or a 'contact us' form. I'm trying to establish a personal connection to these researchers, since in addition to asking "who is Alice Wooding", I'd like to locate other diaries or hunt down other information about local history. This is really best handled through email, where the barriers to participation are low.

Tuesday, June 2, 2009

Interview with Hugh Cayless

One of the neatest things to happen in the world of transcription technology this year was the award of an NEH ODH Digital Humanities Start-Up Grant to "Image to XML", a project exploring image-based transcription at the line and word level. According to a press release from UNC, this will fund development of "a product that will allow librarians to digitally trace handwriting in an original document, encode the tracings in a language known as Scalable Vector Graphics, and then link the tracings at the line or even word level to files containing transcribed texts and annotations." This is based on the work of Hugh Cayless in developing Img2XML, which he has described in a presentation to Balisage, demonstrated at this static demo, and shared at this github repository.

Hugh was kind enough to answer my questions about the Img2XML project and has allowed me to publish his responses here in interview form:


First, let me congratulate you on img2xml's award of a Digital Humanities Start-Up Grant. What was that experience like?

Thanks! I've been involved in writing grant proposals before, and sat on an NEH review panel a couple of years ago. But this was the first time I've been the primary writer of a grant. Start-Up grants (understandably) are less work than the larger programs, but it was still a pretty intensive process. My colleague at UNC, Natasha Smith, and I worked right down to the wire on it. At research institutions like UNC, the hard part is not the writing of the proposal, but working through the submission and budgeting process with the sponsored research office. That's the part I really couldn't have done in time without help.

The writing part was relatively straightforward. I sent a draft to Jason Rhody, who's one of the ODH program officers, and he gave us some very helpful feedback. NEH does tell you this, but it is absolutely vital that you talk to a program officer before submitting. They are a great resource because they know the process from the inside. Jason gave us great feedback, which helped me refine and focus the narrative.

What's the relationship between img2xml and the other e-text projects you've worked on in the past? How did the idea come about?

At Docsouth, they've been publishing page images and transcriptions for years, so mechanisms for doing that had been on my mind. I did some research on generating structural visualizations of documents using SVG a few years ago, and presented a paper on it at the ACH conference in Victoria, so I'd had some experience with it. There was also a project I worked on while I was at Lulu where I used Inkscape to produce a vector version of a bitmap image for calendars, so I knew it was possible. When I first had the idea, I went looking for tools that could create an SVG tracing of text on a page, and found potrace (which is embedded in Inkscape, in fact). I found that you can produce really nice tracings of text, especially if you do some pre-processing to make sure the text is distinct.

What kind of pre-processing was necessary? Was it all manual, or do you think the tracing step could be automated?

It varies. The big issue so far has been sorting out how to distinguish text from background (since potrace converts the image to black and white before running its tracing algorithm), particularly with materials like papyrus, which is quite dark. If you can eliminate the background color by subtracting it from the image, then you don't have to worry so much about picking a white/black cutover point--the defaults will work. So far it's been manual. One of the agendas of the grant is to figure out how much of this can be automated, or at least streamlined. For example, if you have a book with pages of similar background color, and you wanted to eliminate that background as part of pre-processing, it should be possible to figure out the color range you want to get rid of once, and do it for every page image.

I've read your Balisage presentation and played around with the viewer demonstration. It looks like img2xml was in the proof-of-concept stage back in mid-2008. Where does the software stand now, and how far do you hope to take it?

It hasn't progressed much beyond that stage yet. The whole point of the grant was to open up some bandwidth to develop the tooling further, and implement it on a real-world project. We'll be using it to develop a web presentation of the diary of a 19th century Carolina student, James Dusenbery, some excerpts from which can be found on Documenting the American South at http://docsouth.unc.edu/true/mss04-04/mss04-04.html.

This has all been complicated a bit by the fact that I left UNC for NYU in February, so we have to sort out how I'm going to work on it, but it sounds like we'll be able to work something out.

It seems to me that you can automate generating the SVGs pretty easily. In the Dusenbery project, you're working with a pretty small set of pages and a traditional (i.e. institutionally-backed) structure for managing transcription. How well suited do you think img2xml is to larger, bulk digitization projects like the FamilySearch Indexer efforts to digitize US census records? Would the format require substantial software to manipulate the transcription/image links?

It might. Dusenbery gives us a very constrained playground, in which we're pretty sure we can be successful. So one prong of attack in the project is to do something end-to-end and figure out what that takes. The other part of the project will be much more open-ended and will involve experimenting with a wide range of materials. I'd like to figure out what it would take to work with lots of different types of manuscripts, with different workflows. If the method looks useful, then I hope we'll be able to do follow-on work to address some of these issues.

I'm fascinated by the way you've cross-linked lines of text in a transcription to lines of handwritten text in an SVG image. One of the features I've wanted for my own project was the ability to embed a piece of an image as an attribute for the transcribed text -- perhaps illustrating an unclear tag with the unclear handwriting itself. How would SVG make this kind of linking easier?

This is exactly the kind of functionality I want to enable. If you can get close to the actual written text in a referenceable way then all kinds of manipulations like this become feasible. The NEH grant will give us the chance to experiment with this kind of thing in various ways.

Will you be blogging your explorations? What is the best way for those interested in following its development to stay informed?

Absolutely. I'm trying to work out the best way to do this, but I'd like to have as much of the project happen out in the open as possible. Certainly the code will be regularly pushed to the github repo, and I'll either write about it there, or on my blog (http://philomousos.blogspot.com), or both. I'll probably twitter about it too (@hcayless). I expect to start work this week...


Many thanks to Hugh Cayless for spending the time on this interview. We're all wishing him and img2xml the best of luck!

Sunday, May 17, 2009

Review: USGS North American Bird Phenology Program

Who knew you could track climate change through crowdsourced transcription? The smart folks at the U. S. Geological Survey, that's who!

The USGS North American Bird Phenology Program collected bird sightings from volunteer observers across North America from the 1880s through the 1970s, recorded on index cards. Those cards are now being transcribed into a database for analysis of changes in migratory patterns and what they imply about climate change.

There's a really nice DesertUSA NewsBlog article that covers the background of the project:
The cards record more than a century of information about bird migration, a veritable treasure trove for climate-change researchers because they will help them unravel the effects of climate change on bird behavior, said Jessica Zelt, coordinator of the North American Bird Phenology Program at the USGS Patuxent Wildlife Research Center.

That is — once the cards are transcribed and put into a scientific database.

And that’s where citizens across the country come in - the program needs help from birders and others across the nation to transcribe those cards into usable scientific information.

CNN also interviewed a few of the volunteers:
Bird enthusiast and star volunteer Stella Walsh, a 62-year-old retiree, pecks away at her keyboard for about four hours each day. She has already transcribed more than 2,000 entries from her apartment in Yarmouth, Maine.

"It's a lot more fun fondling feathers, but, the whole point is to learn about the data and be able to do something with it that is going to have an impact," Walsh said.

Let's talk about the software behind this effort.

The NABPP is fortunate to have a limited problem domain. A great deal of standardization was imposed on the manuscript sources themselves by the original organizers, so that, for example, each card describes only a single species and location. In addition, the questions the modern researchers are asking of the corpus also limit the problem domain: nobody's going to be doing analysis of spelling variations between the cards. It's important to point out that this narrow scope exists in spite of wide variation in format between the index cards. Some are handwritten on pre-printed cards, some are type-written on blank cards, and some are entirely freeform. Nevertheless, they all describe species sightings in a regular format.

Because of this limited scope, the developers were (probably) able to build a traditional database and data-entry form, with specialized attributes for species, location, date, and other common fields that could be generalized from the corpus and the needs of the project. That meant custom-building an application specifically for the NABPP, which seems like a lot of work, but it avoids building the kind of Swiss Army knife that medieval manuscript transcription requires. This presents an interesting parallel with other semi-standardized, hand-written historical documents like military muster rolls or pension applications.

One of the really neat possibilities of subject-specific transcription software is that you can combine training users on the software with training them on difficult handwriting, or variations in the text. NABPP has put together a screencast for this, which walks users through transcribing a few cards from different periods, written in different formats. This screencast explains how to use the software, but it also explains more traditional editorial issues like what the transcription conventions are, or how to process different formats of manuscript material.

This is only one of the innovative ways the NABPP deals with its volunteers. I received a newsletter by email shortly after volunteering, announcing their progress to date (70K cards transcribed) and some changes in the most recent version of the software. This included some potentially-embarrassing details that a less confident organization might not have mentioned, but which really do a lot of good. Users may get used to working around annoying bugs, but in my experience they still remember them and are thrilled when those bugs are finally fixed. So when the newsletter announces that "The Backspace key no longer causes the previous page to be loaded", I know that they're making some of their volunteers very happy.

In addition to the newsletter, the project also posts statistics on the transcription project, broken down both by volunteer and by bird. The top-ten list gives the game-like feedback you'd want in a project like this, although I'd be hesitant to foster competition in a less individually-oriented project. They're also posting preliminary analyses of the data, including the phenology of barn swallows, mapped by location and date of first sighting, and broken down by decade.

Congratulations to the North American Bird Phenology Program for making crowdsourced transcription a reality!

Saturday, May 2, 2009

Open Source vs. Open Access

I've reached a point in my development project at which I'd like to go ahead and release FromThePage as Open Source. There are now only two things holding me back. I'd really like to find a project willing to work together with me to fix any deployment problems, rather than posting my source code on GitHub and leaving users to fend for themselves. The other problem is a more serious issue that highlights what I think is a conflict between Open Access and Open Source Software.

Open Source/Free Software and Rights of Use

Most of the attention paid to Open Source software focuses on the user's right to modify the software to suit their needs and to redistribute that (or derivative) code. However, there is a different, more basic right conferred by Free and Open Source licenses: the user's right to use the software for whatever purpose they wish. The Free Software Definition lists "Freedom 0" as:
  • The freedom to run the program, for any purpose.
    Placing restrictions on the use of Free Software, such as time ("30 days trial period", "license expires January 1st, 2004") purpose ("permission granted for research and non-commercial use", "may not be used for benchmarking") or geographic area ("must not be used in country X") makes a program non-free.
Meanwhile, the Open Source Definition's sixth criterion is:
6. No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
Traditionally this has not been a problem for non-commercial software developers like me. Once you decide not to charge for the editor, game, or compiler you've written, who cares how it's used?

However, if your motivation in writing software is to encourage people to share their data, as mine certainly is, then restrictions on use start to sound pretty attractive. I'd love for someone to run FromThePage as a commercial service, hosting the software and guiding users through posting their manuscripts online. It's a valuable service, and is worth paying for. However, I want the resulting transcriptions to be freely accessible on the web, so that we all get to read the documents that have been sitting in the basements and file folders of family archivists around the world.

Unfortunately, if you investigate the current big commercial repositories of this sort of data, you'll find that their pricing/access model is the opposite of what I describe. Both Footnote.com and Ancestry.com allow free hosting of member data, but both lock browsing of that data behind a registration wall. Even if registration is free, that hurdle may doom the user-created content to be inaccessible, unfindable or irrelevant to the general public.

Open Access

The open access movement has defined this problem with regards to scholarly literature, and I see no reason why their call should not be applied to historical primary sources like the 19th/20th century manuscripts FromThePage is designed to host. Here's the Budapest Open Access Initiative's definition:
By "open access" to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.
Both the Budapest and Berlin definitions go on to talk about copyright quite a bit; however, since the documents I'm hosting are already out of copyright, I don't think those provisions are relevant here. What I do have control over is my own copyright interest in the FromThePage software, and the ability to specify whatever kind of copyleft license I want.

My quandary is this: none of the existing Free or Open Source licenses allow me to require that FromThePage be used in conformance with Open Access. Obviously, that's because adding such a restriction -- requiring users of FromThePage not to charge for people reading the documents hosted on or produced through the software -- violates the basic principles of Free Software and Open Source. So where do I find such a license?

Have other Open Access developers run into such a problem? Should I hire a lawyer to write me a sui generis license for FromThePage? Or should I just get over the fear that someone, somewhere will be making money off my software by charging people to read the documents I want them to share?

Sunday, April 5, 2009

Feature: Full Text Search/Article Link integration

In the last couple of weeks, I've implemented most of the features in the editorial toolkit. Scribes can identify unannotated pages from the table of contents, readers can peruse all pages in a collection linked to a subject, and users can perform a full text search.

I'd like to describe the full text search in some detail, since there are some really interesting things you can do with the interplay between searching and linking. I also have a few unresolved questions to explore.

Basic Search

There are a lot of technologies for searching, so my first task was research. I decided on the simple MySQL fulltext search over SOLR, Sphinx, and acts_as_ferret because all the data I wanted to search was located within the PAGES table. As a result, this only required a migration script, a text input, and a new controller action to implement. You can see the result on the right hand side of the collection homepage.
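
For the curious, the whole thing boils down to something like the sketch below; the table, column, and action names are simplified from my actual schema:

    # Migration: add a MySQL fulltext index on the transcribed page text.
    class AddFulltextIndexToPages < ActiveRecord::Migration
      def self.up
        execute "ALTER TABLE pages ADD FULLTEXT INDEX pages_text_index (search_text)"
      end

      def self.down
        execute "ALTER TABLE pages DROP INDEX pages_text_index"
      end
    end

    # Controller action: run the search with MATCH ... AGAINST.
    def search
      @pages = Page.find(:all,
        :conditions => ["MATCH(search_text) AGAINST(? IN BOOLEAN MODE)",
                        params[:search_string]])
    end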

Article-based Search

Once basic search was working, I could start integrating the search capability with subject indexes. Since each subject link contains the wording in the original text that was used to link to a subject, that wording can be used to seed a text search. This allows an editor to double-check pages in a collection to see if any references to a subject have been missed.

For example, Evelyn Brumfield is a grandchild who is mentioned fairly often in Julia's diaries. Julia spells her name variously as "Evylin", "Evelyn", and "Evylin Brumfield". So a link from the article page performs a full text search for "Evylin Brumfield" OR Evelyn OR Evylin.

While this is interesting, it doesn't directly address the editor's need to find references they might have missed. Since we're able to see all the fulltext matches for Evelyn Brumfield, and since we can see all pages that link to the Evelyn Brumfield subject, why not subtract the second set from the first? An additional link on the subject page searches for precisely this set: all references to Evelyn Brumfield within the text that are not on pages linked to the Evelyn Brumfield subject.
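
In Rails terms, the subtraction is just a set difference between two queries. Here's a sketch, again with model names and the search column simplified from my actual schema:

    # All pages whose text matches any phrase ever linked to this subject,
    # minus the pages already linked to the subject itself.
    def unlinked_mentions(article)
      phrases = article.page_article_links.map { |link| link.display_text }.uniq
      query = phrases.map { |phrase| %("#{phrase}") }.join(" ")
      matches = Page.find(:all,
        :conditions => ["MATCH(search_text) AGAINST(? IN BOOLEAN MODE)", query])
      matches - article.pages
    end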

As I write this blog post, the results of such a search are pretty interesting. The first two pages in the results matched the first name in "Evylin Edmons", on pages that are already linked to the Evelyn Edmonds subject. Matches 4 through 7 appear to be references to Evelyn Brumfield on pages that have not been annotated at all. But we hit pay dirt with the third result: it's a page that was transcribed and annotated very early in the transcription project, containing a reference to Evelyn Brumfield that should be linked to that subject but is not.

Questions

I originally intended to add links to search for each individual phrase linked to a subject. However, I'm still not sure this would be useful -- what value would separate, pre-populated searches for "Evelyn", "Evylin", and "Evylin Brumfield" add?

A more serious question is what exactly I should be searching on. I adopted a simple approach of searching the annotated XML text for each page. However, this means that subject name expansions will match a search, even if the words don't appear in the text. A search for "Brumfield" will return pages in which Julia never wrote Brumfield, merely because they link to "John", which is expanded to "John Brumfield". This is not a literal text search, and might astonish users. On the other hand, would a user searching for "Evelyn" expect to see the Evelyns in the text, even though they had been spelled "Evylin"?

Monday, March 23, 2009

Feature: Mechanical Turk Integration

At last week's Austin On Rails SXSW party, my friend and compatriot Steve Odom gave me a really neat feature idea. "Why don't you integrate with Amazon's Mechanical Turk?" he asked. This is an intriguing notion, and while it's not on my own road map, it would be pretty easy to modify FromThePage to support that. Here's what I'd do to use FromThePage on a more traditional transcription project, with an experienced documentary editor at the head and funding for transcription work:

Page Forks: I assume that an editor using Mechanical Turk would want double-keyed transcriptions to maintain quality, so the application needs to present the same untranscribed page to multiple people. In the software world, when a project splits, we call this forking, and I think the analogy applies here. This feature needs to be able to track an entirely separate edit history for the different forks of a page. This means a new attribute on the master page record describing whether more than one fork exists, and a separate edit history for each fork of a page that's created. There's no reason to limit these transcriptions to only two forks, even if that's the most common use case, so I'd want to provide a URL that will automatically create a new fork for a new transcriber to work in. The Amazon HIT (Human Intelligence Task) would have a link to that URL, so the transcriber need never track which fork they're working in, or even be aware of the double keying.
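
To make that concrete, here's a purely hypothetical sketch of how the forking might be modeled -- none of this exists in FromThePage today, and all of the names are invented:

    # A page may have several forks, each with its own edit history.
    class Page < ActiveRecord::Base
      has_many :page_forks
    end

    class PageFork < ActiveRecord::Base
      belongs_to :page
      belongs_to :transcriber, :class_name => 'User'
      has_many :page_versions            # separate edit history per fork
    end

    # Controller action behind the URL embedded in each HIT: create a fresh
    # fork and drop the transcriber straight into it.
    def fork
      page = Page.find(params[:id])
      new_fork = page.page_forks.create(:transcriber => current_user)
      redirect_to :action => 'transcribe', :id => page.id, :fork_id => new_fork.id
    end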

Reconciling Page Forks: After a page has been transcribed more than once, the application needs to allow the editor to reconcile the transcriptions. This would involve a screen displaying the most recent version of each transcription alongside the scanned page image. There's likely a decent Rails plugin already for displaying code diffs, so I could leverage that to highlight differences between the two transcriptions. A fourth pane would allow the editor to paste the reconciled transcription into the master page object.

Publishing MTurk HITs: Since each page is an independent work unit, it should be possible to automatically convert an untranscribed work into MTurk HITs, with a work item for each page. I don't know enough about how MTurk works, but I assume that the editor would need to enter their Amazon account credentials to have the application create and post the HITs. The app also needs to prevent the same user from re-transcribing the same page in multiple forks.

In all, it doesn't sound like more than a month or two worth of work, even performed part-time. This isn't a need I have for the Julia Brumfield diaries, so I don't anticipate building this any time soon. Nevertheless, it's fun to speculate. Thanks, Steve!

Wednesday, March 18, 2009

Progress Report: Page Thumbnails and Sensitive Tags

As anyone reading this blog through the Blogspot website knows, visual design is not one of my strengths. One of the challenges that users have with FromThePage is navigation. It's not apparent from the single-page screen that clicking on a work title will show you a list of pages. It's even less obvious from the multi-page work reading screen that the page images are accessible at all on the website.

Last week, I implemented a suggestion I'd received from my friend Dave McClinton. The work reading screen now includes a thumbnail image of each manuscript page beside the transcription of that page. The thumbnail is a clickable link to the full screen view of the page and its transcription. This should certainly improve the site's navigability, and I think it also increases FromThePage's visual appeal.

I tried a different approach to processing the images from the one I'd used before. For transcribable page images, I modified the images offline through a batch process, then transferred them to the application, which serves them statically. The only dynamic image processing FromThePage did for end users was for zooming. This time, I added a hook to the image link code, so that if a thumbnail is requested by the browser, the application generates it on the fly. This turned out to be no harder to code than a batch process, and the deployment was far easier. I haven't seen a single broken thumbnail image yet, so it looks like it's fairly robust, too.
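
The on-the-fly generation amounts to very little code. Here's a rough sketch using RMagick; the paths, sizes, and method names are illustrative rather than the exact code in FromThePage:

    require 'RMagick'

    # Generate the thumbnail the first time it's requested, then reuse the file.
    def thumbnail_path(image_path, width = 150)
      thumb = image_path.sub(/(\.\w+)$/, '_thumb\1')
      unless File.exist?(thumb)
        image = Magick::Image.read(image_path).first
        image.resize_to_fit(width).write(thumb)
      end
      thumb
    end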

The other new feature I added last week was support for sensitive tags. The support is still fairly primitive -- enclose text in sensitive tags and it will only be displayed to users authorized to transcribe the work -- but it gets the job done and solves some issues that had come up with Julia Brumfield's 1919 diary. Happily, this took less than 10 minutes to implement.
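
The display-time filtering is about as simple as it sounds -- here's a sketch, with the tag name and method chosen for illustration rather than taken from FromThePage's actual markup:

    # Strip sensitive passages from the displayed text unless the viewer is
    # allowed to transcribe (and therefore see) the work.
    def filter_sensitive(xml_text, viewer_can_transcribe)
      return xml_text if viewer_can_transcribe
      xml_text.gsub(/<sensitive>.*?<\/sensitive>/m, '[restricted]')
    end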

Sunday, March 15, 2009

Feature: Editorial Toolkit

I'm pleased to report that my cousin Linda Tucker has finished transcribing the 1919 diary. I've been trying my best to keep up with her speed, but she's able to transcribe two pages in the amount of time it takes me to edit and annotate a single, simple page. If the editing work requires more extensive research, or (worse) reveals the need to re-do several previous pages, there is really no contest. In the course of this intensive editing, I've come up with a few ideas for new features, as well as a few observations on existing features.

Show All Pages Mentioning a Subject

Currently, the article page for each subject shows a list of the pages on which the subject is mentioned. This is pretty useful, but it really doesn't serve the purposes of the reader or editor who wants to read every mention of that subject, in context. In particular, after adding links to 300 diary pages, I realized that "Paul" might be either Paul Bennett, Julia's 20-year-old grandson who is making a crop on the farm, or Paul Smith, Julia's 7-year-old grandson who lives a mile away from her and visits frequently. Determining which Paul was which was pretty easy from the context, but navigating the application to each of those 100-odd pages took several hours.

Based on this experience, I intend to add a new way of filtering the multi-page view that displays the transcriptions of every page mentioning a given subject. I've already partially developed this as a way to filter the pages within a work, but I really need to 1) see mentions across works, and 2) make this accessible from the subject article page. I am embarrassed to admit that the existing work-filtering feature is so hard to find that I'd forgotten it even existed.
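
In Rails terms, the new view probably boils down to something like this sketch; the model names, column names, and template are illustrative rather than FromThePage's actual schema:

    # Every transcribed page that links to a subject, across works,
    # ordered so the transcriptions read in sequence.
    def all_mentions
      @article = Article.find(params[:article_id])
      @pages   = @article.pages.find(:all, :order => 'work_id, position')
      render :template => 'display/read_filtered'
    end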

Autolink

The Autolink feature has proven invaluable. I originally developed it to save myself the bother of typing [[Benjamin Franklin Brumfield, Sr.|Ben]] every time Julia mentioned "Ben". However, it's proven especially useful as a way of maintaining editorial consistency. If I decided that "bathing babies" was worth an index entry on one page, I might not remember that decision 100 pages later. But if Autolink suggests [[bathing babies]] when it sees the string "bathed the baby", I'll be reminded of it. It doesn't catch every instance, but for subjects that tend to cluster (like occurrences of newborns), it really helps out.

Full Text Search

Currently there is no text search feature. Implementing one would be pretty straightforward, and beyond plain search I'd like to hook in the Autolink suggester: in particular, I'd like to scan through pages I've already edited to see whether I missed mentions of indexed subjects. This would be especially helpful when I decide that a subject is noteworthy halfway through editing a work.
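
The missed-mentions scan could reuse the same phrase-to-subject data that drives Autolink. A rough sketch, with hypothetical names throughout -- link_phrases stands in for a hash built from previous links:

    # Return the phrase/subject pairs that appear in a page's text
    # but aren't yet linked on that page.
    def missed_mentions(page, link_phrases)
      text = page.transcription_text.downcase
      link_phrases.select do |phrase, subject|
        text.include?(phrase.downcase) && !page.subjects.include?(subject)
      end
    end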

Unannotated Page List

This is more a matter of workflow management, but I really don't have a good way to find out which pages have been transcribed but not yet edited or linked, so it's hard to figure out where to resume my editing.

[Update: While this blog post was in draft, I added a status indicator to the table of contents screen to flag pages with transcriptions but no subject links.]

Dual Subject Graphs/Searches

Identifying names is especially difficult when the only evidence is the text itself. In some cases I've been able to use subject graphs to search for relationships between unknown and identified people. This might be much easier if I could filter either my subject graphs or the page display to see all occurrences of subjects X and Y on the same page.
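
Under the hood, the dual-subject filter is just a set intersection. A quick sketch, again with illustrative model names:

    # Pages that mention both subjects: intersect the two page-id lists.
    def pages_mentioning_both(subject_a, subject_b)
      shared_ids = subject_a.pages.map(&:id) & subject_b.pages.map(&:id)
      Page.find(shared_ids)
    end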

Research Credits

Now that the Julia Brumfield Diaries are public, suggestions, corrections, and research are pouring in. My aunt has telephoned old-timers to ask what "rebulking tobacco" refers to. A great-uncle has emailed with definitions of more terms, and I've had other conversations via email and telephone identifying some of the people mentioned in the text. To my horror, I find that I've got no way to attribute any of this information to those sources. At a minimum, I need a large, HTML acknowledgments field at the collection level. Ideally, I'd figure out an easy-to-use way to attribute article comments to individual sources.

Monday, February 9, 2009

GoogleFight Resolves Unclear Handwriting

I've spent the last couple of weeks as a FromThePage user working seriously on annotation. This mainly involves identifying the people and events mentioned in Julia Brumfield's 1918 diary and writing short articles to appear as hyperlinked pages within the website, or be printed as footnotes following the first mention of the subject. Although my primary resource is a descendant chart in a book of family history, I've also found Google to be surprisingly helpful for people who are neighbors or acquaintances.

Here's a problem I ran into in the entry for June 30, 1918:

In this case, I was trying to identify the name in the middle of the photo: Bo__d Dews. The surname is a bit irregular for Julia's hand, but Dews is a common surname and occurs on the line above. In fact, this name is in the same list as another Mr. Dews, so I felt certain about the surname.

But what to make of the first name? The first two and final letters are clear and consistent: BO and D. The third letter is either an A or a U, and the fourth is either an N or an R. We can eliminate "Bourd" and "Boand" as unlikely phonetic spellings of any English name, leaving "Bound" and "Board". Neither of these is a very likely name... or is it?

I thought I might have some luck by comparing the number and quality of Google search results for each of "Board Dews" and "Bound Dews". This is a pretty common practice used by Wikipedia editors to determine the most common title of a subject, and is sometimes known as a "Google fight". Let's look at the results:

"Bound Dews" yields four search results. The first two are archived titles from FromThePage itself, in which I'd retained a transcription of "Bound(?) Dews" in the text. The next two are randomly-generated strings on a spammer's site. We can't really rule out "Bound Dews" as a name based on this, however.

"Board Dews" yields 104 search results. The first page of results contains one person named Board Dews, who is listed on a genealogist's site as living from 1901 to 1957, and residing in nearby Campbell County. Perhaps more intriguing is the other surnames on the site, all from the area 10 miles east of Julia's home. The second page of results contains three links to a Board Dews, born in 1901 in Pittsylvania County.

At this point, I'm certain that the Bo__d Dews in the diary must be the Board Dews who would have been a seventeen-year-old neighbor. But I'm still astonished that I can resolve a legibility problem in a local diary with a Google search.

Thursday, February 5, 2009

Progress Report: Eight Months after THATCamp

It's been more than half a year since I updated this blog. During that period, due to some events in my personal life, I was only able to spend a month or so on sustained development, but I nevertheless made some real progress.

The big news is that I announced the project to some interested family members and have acquired one serious user. My cousin-in-law, Linda Tucker, has transcribed more than 60 pages of Julia Brumfield's 1919 diary since Christmas. In addition to her amazing productivity transcribing, she's explored a number of features of the software, reading most of the previously-transcribed 1918 diary, making notes and asking questions, and fighting with my zoom feature. Her enthusiasm is contagious, and her feedback -- not to mention her actual contributions -- has been invaluable.

During this period of little development, I spent a lot of time as a user. Fewer than 50 pages remain to transcribe in the 1918 diary, and I've started seriously researching the people mentioned in the diary for elaboration in footnotes. It's much easier to sustain work as a user than as a developer, since I don't need an hour or so of uninterrupted concentration to add a few links to a page.

I've also made some strides on printing. I jettisoned DocBook after too many problems and switched over to using Bruce Williamson's RTeX plugin. After some limited success, I will almost certainly craft my own set of ERb templates that generate LaTeX source for PDF generation. RTeX excels in serving up inline PDF files, which is somewhat antithetical to my versioned approach. Nevertheless, without RTeX, I might have never ventured away from DocBook. Thanks go to THATCamper Adam Solove for his willingness to share some of his hard-won LaTeX expertise in this matter.

Although I'm new to LaTeX, I've got footnotes working better than they were in DocBook. I still have many of the logical issues I addressed in the printing post to deal with, but am pretty confident I've found the right technology for printing.
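
To give a flavor of the direction, here's a stripped-down sketch of what one of those ERb templates might look like. The latex_escape and annotated_text helpers are hypothetical; the real templates will need to handle much more:

    \documentclass{book}
    \begin{document}
    <% @work.pages.each do |page| %>
      \section*{<%= latex_escape(page.title) %>}  % latex_escape is a hypothetical helper
      <%= annotated_text(page) %>  % hypothetical: the transcription with \footnote{} inserted at first mentions
    <% end %>
    \end{document}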

I'm also working on re-implementing zoom using GSIV, rather than my cobbled-together solution. The ability to pan a zoomed image has been consistently requested by my alpha testers and the participants at THATCamp, and its absence is a real pain point for Linda, my first Real User. I really like the static-server approach GSIV takes, and I'll post when the first mock-up is done.

Saturday, June 21, 2008

Workflow: flags, tags, or ratings?

Over the past couple of months, I've gotten a lot of user feedback relating to workflow. Paraphrased, the requests include:
  • How do I mark a page "unfinished"? I've typed up half of it and want to return later.
  • How do I see all the pages that need transcription? I don't know where to start!
  • I'm not sure about the names or handwriting in this page. How do I ask someone else to review it?
  • Displaying whether a page has transcription text or not isn't good enough -- how do we know when something is really finished?
  • How do we ask for a proofreader, a tech savvy person to review the links, or someone familiar with the material to double-check names?
In a traditional, institutional setting, this is handled both through formal workflows (transcription assignments, designated reviewers, and researchers) and through informal face-to-face communication. None of these methods are available to volunteer-driven online projects.

The folks at THATCamp recommended I get around this limitation by implementing user-driven ratings, similar to those found at online bookstores. Readers could flag pages as needing review, scribes could flag pages in which they need help, and volunteers could browse pages by quality to look for ways to help out. An additional benefit would be the low barrier to user engagement, as just about anyone can click a button when they spot an error.

The next question is what this system should look like. Possible options are:
  1. Rating scale: Add a one-to-five scale of "quality" to each page.
    • Pros: Incredibly simple.
    • Cons: "Quality" is ambiguous. There's no way to differentiate a page needing content review (i.e. "what is this placename?") from a page needing technical help (i.e. "I messed up the subject links"). Low quality ratings also have an almost accusatory tone, which can lead to lots of problems in social software.
  2. Flags: Define a set of attributes ("needs review", "unfinished", "inappropriate") for pages and allow users to set or un-set them independently of each other.
    • Pros: Also simple.
    • Cons: Too precise. The flags I can think of wanting may be very different from those another user wants. If I set up a flag-based data model, it's going to be limited by my preconceptions.
  3. Tags: Allow users to create their own labels for a page.
    • Pros: Most flexible, easy to implement via acts_as_taggable or similar Rails plugins.
    • Cons: Difficult to use. Tech-savvy users are comfortable with tags, but they may be a small proportion of my user base. Another problem may be the use of tags unrelated to workflow: if a page mentions a dramatic episode, why not tag it with that? (Admittedly, this may actually be a feature.)
I'm currently leaning towards a combination of tags and flags: implement tags under the hood, but promote a predefined subset of tags to be accessible via a simple checkbox UI. Users could tag pages however they like, and if I see patterns emerge that suggest common use-cases, I could promote those tags as well.
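
As a sketch, the combination might look something like this, assuming one of the acts_as_taggable plugins (the exact API varies by flavor); the promoted tag list and parameter names are just illustrations:

    # Pages accept free-form tags; a few "promoted" tags get checkboxes in the UI.
    PROMOTED_TAGS = ['unfinished', 'needs review', 'needs technical help']

    class Page < ActiveRecord::Base
      acts_as_taggable
    end

    # Controller action: checked boxes and free-form tags land in the same tag list.
    def update_workflow_tags
      page = Page.find(params[:id])
      tags = (params[:promoted_tags] || []) + (params[:free_tags] || [])
      page.tag_list = tags.join(', ')
      page.save
    end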

Sunday, June 8, 2008

THATCamp Takeaways

I just got back from THATCamp, and man, what a ride! I've never been to a conference with this level of collaboration before -- neither academic nor technical. Literally nobody was "audience" -- I don't think a single person emerged from the conference without having presented in at least one session and pitched in their ideas on half a dozen more.

To my astonishment, I ended up demoing FromThePage in two different sessions, and presented a custom how-to on GraphViz in a third. I was really surprised by the technical savvy of the participants -- just about everyone at the sessions I took part in had done development of one form or another. The feedback on FromThePage was far more concrete than I was expecting, and it got me past several roadblocks. And since this is a product development blog, here are the specifics:
  • Zoom: I've looked at Zoomify a few times in the past, but have never been able to get around the fact that their image-preparation software is incompatible with Unix-based server-side processing. Two different people suggested workarounds for this, which may just solve my zoom problems nicely.

  • WYSIWYG: I'd never heard of the Yahoo WYSIWYG before, but a couple of people recommended it as being especially extensible, and appropriate for linking text to subjects. I've looked over it a bit now, and am really impressed.
  • Analysis: One of the big problems I've had with my graphing algorithm is the noise that comes from members of Julia's household. Because they appear in 99 of 100 entries, they're more related to everything, and (worse) they show up on relatedness graphs for other subjects as more related than the subjects I'm actually looking for. Over the course of the weekend -- preparing my DorkShorts presentation, discussing it, and explaining the noise quandary in FromThePage -- both the problem and a solution became clear.
    The noise is due to the unit of analysis being a single diary entry. The solution is to reduce the unit of analysis. Many of the THATCampers suggested alternatives: look for related subjects within the same paragraph, within N words, or even (using natural language toolkits) within the same sentence. (There's a rough sketch of the N-words approach at the end of this post.)
    It might even be possible to do this without requiring markup of each mention of a subject. One possibility is to search the entry text for likely mentions of subjects that have already occurred. This search could be informed by previous matches -- the same data I'm using for the autolink feature. (Inspiration for this comes from Andrea Eastman-Mullins' description of how Alexander Street Press is using well-encoded texts to inform searches of unencoded texts.)
  • Autolink: Travis Brown, whose background is in computational linguistics, suggested a few basic tools for making autolink smarter -- namely, permuting the morphology of a word before the autolink feature looks for matches. This would allow me to clean up the matching algorithm, which currently does some gross things with regular expressions to approach the same goal.

  • Workflow: The participants at the Crowdsourcing Transcription and Annotation session were deeply sympathetic to the idea that volunteer-driven projects can't use the same kind of double-keyed, centrally organized workflows that institutional transcription projects use. They suggested a number of ways to use flagging and ratings to accomplish the same goals. Rather than assigning transcription to A, identification and markup to B, and proofreading to C, they suggested a user-driven rating system. This would allow scribes or viewers to indicate the quality level of a transcribed entry, marking it with ratings like "unfinished", "needs review", "pretty good", or "excellent". I'd add tools to the page list interface to show entries needing review, or ones that were nearly done, to allow volunteers to target the kind of contributions they were making.
    Ratings would also provide a non-threatening way for novice users to contribute.

  • Mapping: Before the map session, I was manually clicking on Google's MyMaps, then embedding a link within subject articles. Now I expect to attach latitude/longitude coordinates to subjects, then generate maps via KML files. I'm still just exploring this functionality, but I feel like I've got a clue now.

  • Presentation: The Crowdsourcing session started brainstorming presentation tools for transcriptions. I'd seen a couple of these before, but never really considered them for FromThePage. Since one of my challenges is making the reader experience more visually appealing, it looks like it might be time to explore some of these.
These are all features I'd considered either out-of-reach, dead-ends, or (in one case) entirely impossible.
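
As promised above, here's a rough sketch of the within-N-words idea from the Analysis item: count two subjects as related only when their mentions fall within N words of each other in an entry. The subjects_with_positions helper is hypothetical -- it stands in for whatever extracts each subject mention and its word offset:

    WINDOW = 25   # an arbitrary window size for the sketch

    def related_counts(entries)
      counts = Hash.new(0)
      entries.each do |entry|
        mentions = subjects_with_positions(entry)   # e.g. [['Paul Bennett', 3], ['tobacco', 17]]
        mentions.each do |subject_a, pos_a|
          mentions.each do |subject_b, pos_b|
            next unless subject_a < subject_b       # count each unordered pair once
            counts[[subject_a, subject_b]] += 1 if (pos_a - pos_b).abs <= WINDOW
          end
        end
      end
      counts
    end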

Thanks to my fellow THATCampers for all the suggestions, corrections, and enthusiasm. Thanks also to THATCamp for letting an uncredentialed amateur working out of his garage attend. I only hope I gave half as much as I got.