Saturday, June 21, 2008

Workflow: flags, tags, or ratings?

Over the past couple of months, I've gotten a lot of user feedback relating to workflow. Paraphrased, the questions include:
  • How do I mark a page "unfinished"? I've typed up half of it and want to return later.
  • How do I see all the pages that need transcription? I don't know where to start!
  • I'm not sure about the names or handwriting in this page. How do I ask someone else to review it?
  • Displaying whether a page has transcription text or not isn't good enough -- how do we know when something is really finished?
  • How do we ask for a proofreader, a tech-savvy person to review the links, or someone familiar with the material to double-check names?
In a traditional, institutional setting, this is handled both through formal workflows (transcription assignments, designated reviewers, and researchers) and through informal face-to-face communication. None of these methods are available to volunteer-driven online projects.

The folks at THATCamp recommended I get around this limitation by implementing user-driven ratings, similar to those found at online bookstores. Readers could flag pages as needing review, scribes could flag pages in which they need help, and volunteers could browse pages by quality to look for ways to help out. An additional benefit would be the low barrier to user engagement, as just about anyone can click a button when they spot an error.

The next question is what this system should look like. Possible options are:
  1. Rating scale: Add a one-to-five scale of "quality" to each page.
    • Pros: Incredibly simple.
    • Cons: "Quality" is ambiguous. There's no way to differentiate a page needing content review (i.e. "what is this placename?") from a page needing technical help (i.e. "I messed up the subject links"). Low quality ratings also have an almost accusatory tone, which can lead to lots of problems in social software.
  2. Flags: Define a set of attributes ("needs review", "unfinished", "inappropriate") for pages and allow users to set or un-set them independently of each other.
    • Pros: Also simple.
    • Cons: Too precise. The flags I can think of wanting may be very different from those a different user wants. If I set up a flag-based data-model, it's going to be limited by my preconceptions.
  3. Tags: Allow users to create their own labels for a page.
    • Pros: Most flexible, easy to implement via acts_as_taggable or similar Rails plugins.
    • Cons: Difficult to use. Tech-savvy users are comfortable with tags, but that may be a small proportion of my user base. An additional problem may be use of non-workflow based tags. If a page mentions a dramatic episode, why not tag it with that? (Admittedly this may actually be a feature.)
I'm currently leaning towards a combination of tags and flags: implement tags under the hood, but promote a predefined subset of tags to be accessible via a simple checkbox UI. Users could tag pages however they like, and if I see patterns emerge that suggest common use-cases, I could promote those tags as well.
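
Here's a minimal sketch of the hybrid, assuming the acts_as_taggable plugin mentioned above; the promoted tag names and the view code are illustrative, not settled design:

class Page < ActiveRecord::Base
  acts_as_taggable  # free-form tags under the hood

  # The predefined subset promoted to a checkbox UI. These names are
  # guesses; real ones would be promoted from observed usage patterns.
  PROMOTED_TAGS = ['unfinished', 'needs review', 'needs technical help']
end

# In the page view, render one checkbox per promoted tag:
# <% Page::PROMOTED_TAGS.each do |tag| %>
#   <%= check_box_tag "tags[#{tag}]", 1, @page.tag_list.include?(tag) %> <%= tag %>
# <% end %>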

Sunday, June 8, 2008

THATCamp Takeaways

I just got back from THATCamp, and man, what a ride! I've never been to a conference with this level of collaboration before -- neither academic nor technical. Literally nobody was "audience" -- I don't think a single person emerged from the conference without having presented in at least one session and pitched in ideas in half a dozen more.

To my astonishment, I ended up demoing FromThePage in two different sessions, and presented a custom how-to on GraphViz in a third. I was really surprised by the technical savvy of the participants -- just about everyone at the sessions I took part in had done development of one form or another. The feedback on FromThePage was far more concrete than I was expecting, and got me past several roadblocks. And since this is a product development blog, here's what they were:
  • Zoom: I've looked at Zoomify a few times in the past, but have never been able to get around the fact that their image-preparation software is incompatible with Unix-based server-side processing. Two different people suggested workarounds for this, which may just solve my zoom problems nicely.

  • WYSIWYG: I'd never heard of the Yahoo WYSIWYG before, but a couple of people recommended it as being especially extensible, and appropriate for linking text to subjects. I've looked over it a bit now, and am really impressed.
  • Analysis: One of the big problems I've had with my graphing algorithm is the noise that comes from members of Julia's household. Because they appear in 99 of 100 entries, they're more related to everything, and (worse) show up on relatedness graphs for other subjects as more related than the subjects I'm actually looking for. Over the course of the weekend -- preparing my DorkShorts presentation, discussing it, and explaining the noise quandary in FromThePage -- both the problem and its solution became clear.
    The noise is due to the unit of analysis being a single diary entry. The solution is to reduce the unit of analysis. Many of the THATCampers suggested alternatives: look for related subjects within the same paragraph, or within N words, or even (using natural language toolkits) within the same sentence. (A rough sketch of the within-N-words idea appears after this list.)
    It might even be possible to do this without requiring markup of each mention of a subject. One possibility is to automate this by searching the entry text for likely mentions of the same subject that has occurred already. This search could be informed by previous matches -- the same data I'm using for the autolink feature. (Inspiration for this comes from Andrea Eastman-Mullins' description of how Alexander Street Press is using well-encoded texts to inform searches of unencoded texts.)
  • Autolink: Travis Brown, whose background is in computational linguistics, suggested a few basic tools for making autolink smarter -- namely, permuting the morphology of a word before the autolink feature looks for matches. This would allow me to clean up the matching algorithm, which currently does some gross things with regular expressions to approach the same goal.

  • Workflow: The participants at the Crowdsourcing Transcription and Annotation session were deeply sympathetic to the idea that volunteer-driven projects can't use the same kind of double-keyed, centrally organized workflows that institutional transcription projects use. They suggested a number of ways to use flagging and ratings to accomplish the same goals. Rather than assigning transcription to A, identification and markup to B, and proofreading to C, they suggested a user-driven rating system. This would allow scribes or viewers to indicate the quality level of a transcribed entry, marking it with ratings like "unfinished", "needs review", "pretty good", or "excellent". I'd add tools to the page list interface to show entries needing review, or ones that were nearly done, to allow volunteers to target the kind of contributions they were making.
    Ratings would also provide a non-threatening way for novice users to contribute.

  • Mapping: Before the map session, I was manually clicking on Google's MyMaps, then embedding a link within subject articles. Now I expect to attach latitude/longitude coordinates to subjects, then generate maps via KML files. I'm still just exploring this functionality, but I feel like I've got a clue now.

  • Presentation: The Crowdsourcing session started brainstorming presentation tools for transcriptions. I'd seen a couple of these before, but never really considered them for FromThePage. Since one of my challenges is making the reader experience more visually appealing, it looks like it might be time to explore some of these.
These are all features I'd considered either out-of-reach, dead-ends, or (in one case) entirely impossible.
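
To make the Analysis item above concrete, here's a rough sketch of the within-N-words idea. It assumes each subject mention records its word offset within an entry -- something FromThePage doesn't track today -- and the window size is an arbitrary choice:

WINDOW = 25  # "N words"

# mentions is an array of [subject_id, word_offset] pairs for one entry
def related_pairs(mentions)
  pairs = Hash.new(0)
  mentions.each_with_index do |(subject_a, offset_a), i|
    mentions[(i + 1)..-1].each do |subject_b, offset_b|
      next if subject_a == subject_b
      # only count a co-occurrence when the mentions are close together
      pairs[[subject_a, subject_b].sort] += 1 if (offset_a - offset_b).abs <= WINDOW
    end
  end
  pairs
end

Members of Julia's household would still co-occur with subjects mentioned in the same breath, but they'd no longer co-occur with every name in the entry.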

Thanks to my fellow THATCampers for all the suggestions, corrections, and enthusiasm. Thanks also to THATCamp for letting an uncredentialed amateur working out of his garage attend. I only hope I gave half as much as I got.

Saturday, May 17, 2008

Progress Report: De-duping catastrophe and a host change

After a very difficult ten days of coding, I'm almost where I was at the beginning of May. The story:

Early in the month, I got a duplicate identifier feature coded. The UI was based on LibraryThing's, which is the best de-duping interface I've ever seen. Mine still falls short, but it's able to pair "Ren Worsham" up with "Wren Worsham", so it'll probably do for now. With that completed, I built a tool to combine subjects: if you see a possible duplicate of the subject you're viewing, you click "combine", and it updates all textual references to the duplicate to point to the main article, then deletes the duplicate. Pretty simple, right?
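
In sketch form, the combine action looks something like this; the model and association names are illustrative rather than my actual schema, and -- as the rest of this post makes painfully clear -- the whole thing needs to run inside a transaction:

def combine
  subject   = Subject.find(params[:id])
  duplicate = Subject.find(params[:duplicate_id])
  # a transaction keeps an interrupted process from leaving the data
  # half-updated -- read on for why this matters
  Subject.transaction do
    # repoint every reference from the duplicate to the main subject
    duplicate.page_references.each do |reference|
      reference.subject_id = subject.id
      reference.save!
    end
    duplicate.destroy
  end
  redirect_to :action => 'show', :id => subject.id
end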

Enter DreamHost's process killer. Now I love DreamHost, and I want to love them more, but I really don't think their cheap shared hosting plan is appropriate for a computationally intensive web-app. In order to insulate users from other, potentially clueless users, they have a daemon that monitors running processes and kills off any that look "bad". I'm not sure what criteria constitute "bad", but I should have realized the heuristic might be over-aggressive when I wasn't able to run basic database migrations without running afoul of it. Nevertheless, it didn't seem to be causing anything beyond the occasional "Rails Application failed to start" message that could be solved with a browser reload.

However. Killing a de-duping process in the middle of reference updates is altogether different from killing a relatedness graph display. Unfortunately, I wasn't quite aware of the problem before I'd tried to de-dup several records, sometimes multiple times. My app assumes its data will be internally consistent, so my attempts to clean up the carnage resulted in hundreds more duplicates being created.

So I've moved FromThePage from DreamHost to HostingRails, which I completed this morning. There remains a lot of back-end work to clean up the data, but I'm pretty sure I'll get there before THATCamp.

Sunday, April 27, 2008

The Trouble with Names

The trouble with people as subjects is that they have names, and that personal names are hard.
  • Names in the text may be illegible or incomplete, so that Reese _____ and Mr ____ Edmonds require special handling.
  • Names need to be remembered by the scribe during transcription. I discovered this the hard way.

    After doing some research in secondary documents, I was able to "improve" the entries for Julia's children. Thus Kate Harvey became Julia Katherine Brumfield Harvey and Mollie Reynolds became Mary Susan Brumfield Reynolds.

    The problem is that while I'm transcribing the diaries, I can't remember that "Mary Susan" == Mollie. The diaries consistently refer to her as Mollie Reynolds, and the family refers to her as Mollie Reynolds. No other person working on the diaries is likely to have better luck than I've had remembering this. After fighting with the improved names for a while, I gave up and changed all the full names back to the common names, leaving the full names in the articles for each subject.

  • Names are odd ducks when it comes to strings. "Mr. Zach Abel" should be sorted before "Dr. Anne Zweig", which requires either human intervention to break the string into component parts or some serious parsing effort. At this point my subject list has become unwieldy enough to require sorting, and the index generation code for PDFs is completely dependent on this kind of separation.
I'm afraid I'll have to solve all of these problems at the same time, as they're all interdependent. My initial inclination is to let a subject article for a person specify a full name in all its component parts. If none is specified, I'll populate the parts via a series of regular expressions. This will probably also require a hard look at how both TEI and DocBook represent names.
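
A first stab at the regular-expression fallback might look like this; the patterns and field names are illustrative, and real diary names will surely break them:

def parse_name(name)
  parts = {}
  if name =~ /^(Mrs?|Dr|Rev)\.?\s+/i
    parts[:prefix] = $1
    name = $'.strip  # keep everything after the honorific
  end
  words = name.split
  parts[:last_name]   = words.pop
  parts[:first_name]  = words.shift
  parts[:middle_name] = words.join(' ') unless words.empty?
  parts
end

# sorting an index of parsed names by surname, then given name:
# parsed.sort_by { |p| [p[:last_name].to_s.downcase, p[:first_name].to_s.downcase] }

With that, "Mr. Zach Abel" sorts under Abel and "Dr. Anne Zweig" under Zweig, which is what the PDF index generation needs.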

Friday, April 25, 2008

Feature: Data Maintenance Tools

With only two collections of documents, fewer than a hundred transcriptions, and only a half-dozen users who could be charitably described as "active", FromThePage is starting to strain under the weight of its data.

All of this has to do with subjects. These are the indexable words that provide navigation, analysis, and context to readers. They're working out pretty well, but frequency of use has highlighted some urgent features to be developed and intolerable bugs to be fixed:
  • We need a tool to combine subjects. Early in the transcription process, it was unclear to me whether "the Island" referred to Long Island, Virginia -- a nearby town with a post office and railroad station -- or someplace else. Increasing familiarity with the texts, however, has shown "the Island" to be definitely the same as "Long Island".

    The best interface for doing this sort of deduping is implemented by LibraryThing, which is so convenient that it has inspired the ad-hoc creation of a group of "combining" enthusiasts -- an astonishing development, since this is usually the worst of dull chores. A similar interface would invite the user viewing "the Island" to consider combining that subject with "Long Island". This requires an algorithm to suggest matches for combination, which is itself no trivial task. (A naive sketch follows this list.)

  • We need another tool to review and delete orphans. As identification improves, we've been creating new subjects like "Reese Smith" and linking previous references to "Reese" to that improved subject. This leaves the old, incomplete subject without any referents -- and us without any way to prune it.
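
For the suggestion algorithm, a naive sketch based on edit distance would at least catch near-identical spellings; the threshold is a guess, and a real version would need to weigh shared words too:

def edit_distance(a, b)
  d = Array.new(a.length + 1) { Array.new(b.length + 1, 0) }
  (0..a.length).each { |i| d[i][0] = i }
  (0..b.length).each { |j| d[0][j] = j }
  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      d[i][j] = [d[i - 1][j] + 1,        # deletion
                 d[i][j - 1] + 1,        # insertion
                 d[i - 1][j - 1] + cost  # substitution
                ].min
    end
  end
  d[a.length][b.length]
end

def suggest_duplicates(subject, candidates)
  candidates.select do |candidate|
    candidate != subject &&
      edit_distance(subject.title.downcase, candidate.title.downcase) <= 2
  end
end

Note that this would never pair "the Island" with "Long Island" -- which is exactly why the task is nontrivial.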

Autolink has become utterly essential to transcriptions, since it allows the scribe to identify the appropriate subject as well as the syntax to use for its link. Unfortunately, it has a few serious problems:

  • Autolink looks for substrings within the transcription without paying attention to word boundaries. This led to some odd autolink suggestions before this week, but since the addition of "ink" and "hen" to the subjects' link text, autolink has started behaving egregiously. The word "then" is unhelpfully expanded to "t[[chickens|hen]]". I'll try to retain matches for display text regardless of inflectional markings, but it's time to move the matching algorithm to a more conservative heuristic.
  • Autolink also suggests matches from subjects that reside within different collections. It's wholly unhelpful for autolink to suggest a tenant farmer in rural Virginia within a context of UK soldiers in a WWI prison camp. This is a classic cross-talk bug, and it needs fixing.
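
A sketch of the more conservative matcher, addressing both bullets; the method and association names are illustrative:

def autolink_candidates(page)
  text = page.transcription_text
  # restrict suggestions to this page's collection -- no more Virginia
  # tenant farmers showing up in WWI prison camps
  page.collection.subjects.select do |subject|
    # \b anchors the match at word boundaries, so "hen" can no longer
    # match inside "then"
    text =~ /\b#{Regexp.escape(subject.link_text)}\b/i
  end
end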

Monday, April 14, 2008

Collaborative transcription, the hard way

Archivalia has kept us informed of lots of manuscript projects going online in Europe last week, offering commentary along the way. Perhaps my favorite exchange was about the Chronicle of Sweder von Schele on the Internet:
The initial aim of the project is to supplement and improve the transcription. To this end, new transcriptions can be sent by email to the institutions participating in the project. After editorial review, the pages are swapped in.

My goodness, how antediluvian. Has nobody ever heard of a wiki?
That's right -- the online community may send in corrections and additions to the transcription by mail.

Friday, April 11, 2008

Who do I build for?

Over at ClioWeb, Jeremy Boggs is starting a promising series of posts on the development process for digital humanities projects. He's splitting the process up into five steps, which may happen at the same time, but still follow a rough sort of order. Step one, obviously, is "Figure out what exactly you're building."

But is that really the first step? I'm finding that what I build is dependent on who I'm building for. Having launched an alpha version of the product and shaken out many of the bugs in the base functionality, I'm going to have to make some tough decisions about what I concentrate my time on next. These all revolve around who I'm developing the product for:

My Family: Remember that I developed the transcription software to help accomplish a specific goal: transcribe Julia Brumfield's diaries to share with family members. The features I should concentrate on for this audience are:
  • Finish porting my father's 1992 transcription of the 1918 diary to FromThePage.
  • Fix zoom.
  • Improve the collaborative tools for discussing the works and figuring out which pages need review.
  • Build out the printing feature, so that the diaries can be shared with people who have limited computer access.
THATCamp attendees: Probably the best feedback I'll be able to receive on the analytical tools like subject graphs will come from THATCamp. This means that any analytical features I'd like to demo need to be presentable by the middle of May.

Institutional Users: This blog has drawn interest from a couple of people looking for software for their institutions to use. For the past month, I've corresponded extensively with John Rumm, editor of the Digital Buffalo Bill Project, based at McCracken Research Library, at the Buffalo Bill Historical Center. Though our conversations are continuing, it seems like the main features his project would need are:
  • Full support for manuscript transcription tags used to describe normalization, different hands, corrections/revisions to the text, illegible text and other descriptors used in low-level transcription work. (More on this in a separate post)
  • Integration with other systems the project may be using, like Omeka, Pachyderm, MediaWiki, and such.
My Volunteers: A few of the people who've been trying out the software have expressed interest in using it to host their own projects. More than any other audience, their needs would push FromThePage towards my vision of unlocking the filing cabinets in family archivists' basements and making these handwritten sources accessible online. We're in the very early stages of this, so I don't yet know what requirements will arise.

The problem is that there's very little overlap between the features these groups need. I will likely concentrate on family and volunteers, while doing the basics for THATCamp. I realize that's not a very tight focus, but it's much clearer to me now than it was last week.

Monday, April 7, 2008

Progress Report: One Month of Alpha Testing

FromThePage has been on a production server for about a month now, and the results have been fascinating. The first few days' testing revealed some shocking usability problems. In some places (transcription and login most notoriously) the code was swallowing error messages instead of displaying them to the user. Zoom didn't work in Internet Explorer. And there were no guardrails that kept the user from losing a transcription-in-progress.

After fixing these problems, the family and friends who'd volunteered to try out the software started making more progress. The next requests that came in were for transcription conventions. After about three requests for these, I started displaying the conventions on the transcription screen itself. This seems to have been very successful, and is something I'd never have come up with on my own.

The past couple of weeks have been exciting. My old college roommate started transcribing entries from the 1919 diary, and entered about 15 days in January -- all in two days' work. In addition to his technical feedback, two things I'd hoped for happened:
  • We started collaborating. My roommate had transcribed the entries on a day when he'd had lots of time. I reviewed his transcriptions and edited the linking syntax. Then my father edited the resulting pages, correcting names of people, events, and animals based on his knowledge of the diaries' context.
  • My roommate got engaged. His technical suggestions were peppered with comments on the entries themselves and the lives of the people in the diaries. I've seen this phenomenon at Pepys Diary Online, but it's really heartening to get a glimpse of that kind of engagement with a manuscript.
I've also had some disappointments. It looks like I'll have to discard my zoom feature and replace it with something using the Google Maps API. People now expect to pan by dragging the image, and my home-rolled system simply can't support that.

I had really planned on developing printing and analytical tools next, but we're finding that the social software features are becoming essential. The bare-bones "Recent Activity" panel I slapped together one night has become the most popular feature. We need to know who edited what, what pages remain to be transcribed, and which transcriptions need review. I've resuscitated this winter's comment feature, polished the "Recent Activity" panel, and am working on a system for displaying a page's transcription status in the work's table of contents.

All of these developments are the result of several hours of careful, thoughtful review by volunteers like my father and my roommate. There is simply no way I could have invited the public into the site as it stood a month ago -- something I did not know at the time. There's still a lot to be done before FromThePage is "ready for company", but I think it's on track.

If you'd like to try the software out, leave me a comment or send mail to alpha.info@fromthepage.com.

Friday, April 4, 2008

Review: IATH Manuscript Transcription Database

This is the second of two reviews of similar transcription projects I wrote in correspondence with Brian Cafferelli, an undergraduate working on the WPI Manuscript Transcription Assistant. In this correspondence, I reviewed systems by their support for collaboration, automation, and analysis.

The IATH Manuscript Transcription Database was a system for producing transcriptions developed for the Salem Witch Trials court records. It allowed full collaboration within an institutional setting. An administrator assigned transcription work to a transcriber, who then pulled that work from the queue and edited and submitted a transcription. Presumably there was some sort of review performed by the admin, or a proofreading step done by comparison with another user's transcription. Near as I can tell from the manual, no dual-screen transcription editor was provided. Rather, transcription files were edited outside the system, and passed back and forth using MTD.

I'm a bit hazy on all this because after reviewing the docs and downloading the source code, I sent an inquiry to IATH about the legal status of the code. It turns out that while the author intended to release the system to the public, this was never formally done. The copyright status of the code is murky, and I view it as tainted IP for my purpose of producing a similar product. I'm afraid I deleted the files from my own system, unread.

For those interested in reading more, here is the announcement of the MTD on the Humanist list. The full manual, possibly with archived source code, is accessible via the Wayback Machine. They've pulled this from the IATH site, presumably because of the IP problems.

So the MTD was another pure-production system. Automation and collaboration were fully supported, though the collaboration was designed for a purely institutional setting. Systems of assignment and review are inappropriate for a volunteer-driven system like my own.

My correspondent at IATH did pass along a piece of advice from someone who had worked on the project: "Use CVS instead". What I gather they meant by this was that the system for passing files between distributed transcribers, collating those files, and recording edits is a task that source code repositories already perform very well. This does nothing to replace transcription production tools akin to mine or the TEI editors, but the whole system of editorial oversight and coordination provided by the MTD is a subset of what a source code repository can do. A combination of Subversion and Trac would be a fantastic way to organize a distributed transcription effort, absent a pure-hosted tool.

This post contains a lot more speculation and supposition than usual, and I apologize in advance to my readers and the IATH for anything I've gotten wrong. If anyone associated with the MTD project would like to set me straight, please comment or send me email.

Thursday, March 27, 2008

Rails: Logging User Activity for Usability

At the beginning of the month, I started usability testing for FromThePage. Due to my limited resources, I'm not able to perform usability testing in control rooms, or (better yet) hire a disinterested expert with a background in the natural sciences to conduct usability tests for me. I'm pretty much limited to sending people the URL for the app with a pleading e-mail, then waiting with fingers crossed for a reply.

For anyone who finds themselves in the same situation, I recommend adding some logging code to your app. We tried this last year with Sara's project, discovering that only 5% of site visitors were even getting to the features we'd spent most of our time on. It was also invaluable for resolving bug reports: when a user complained about getting logged off the system, we could track their clicks and see exactly what they were doing that killed their session.

Here's how I've done this for FromThePage in Rails:

First, you need a place to store each user action. You'll want to store information about who was performing the action, and what they were doing. I was willing to violate my sense of data model aesthetics for performance reasons, and abandon third normal form by combining these two distinct concepts into the same table.

# who's doing the clicking?
:browser
:session_id
:ip_address
:user_id # null if they're not logged in

Tracking the browser lets you figure out whether your code doesn't work in IE (it doesn't) and whether Google is scraping your site before it's ready (it is). The session ID is the key used to aggregate all these actions -- one of these corresponds to the several clicks that make up a user session. Finally, the IP address gives you a bit of a clue as to where the user is coming from.

Next, you need to store what's actually being done, and on what objects in your system. Again, this goes within the same table.

# what happened on this click?
:action
:params
:collection_id #null if inapplicable
:work_id #null if inapplicable
:page_id #null if inapplicable

In this case, every click will record the action and the associated HTTP parameters. If one of those parameters was collection_id, work_id, or page_id (the three most important objects within FromThePage), we'll store that too. Put all this in a migration script and create a model that refers to it. In this case, we'll call that model "activity".
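
Rolled into Rails, the migration and model might look like this; the column types (and the status column used for the extra credit below) are my assumptions:

class CreateActivities < ActiveRecord::Migration
  def self.up
    create_table :activities do |t|
      # who's doing the clicking?
      t.string  :browser, :session_id, :ip_address
      t.integer :user_id
      # what happened on this click?
      t.string  :action, :status
      t.text    :params
      t.integer :collection_id, :work_id, :page_id
      t.datetime :created_at
    end
  end

  def self.down
    drop_table :activities
  end
end

# app/models/activity.rb
class Activity < ActiveRecord::Base
end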

Now we need to actually record the action. This is a good job for a before_filter. Since I've got a before_filter in ApplicationController that sets up important variables like the page, work, or collection, I'll place my before_filter in the same spot and call it after that one.

before_filter :record_activity

But what does it do?

def record_activity
  @activity = Activity.new
  # who is doing the activity?
  @activity.session_id = session.session_id # record the session
  @activity.browser = request.env['HTTP_USER_AGENT']
  @activity.ip_address = request.env['REMOTE_ADDR']
  # what are they doing?
  @activity.action = action_name # grab this from the controller
  @activity.params = params.inspect # wrap this in an unless block if it might contain a password
  if @collection
    @activity.collection_id = @collection.id
  end
  # ditto for work, page, and user IDs
  @activity.save # without this, nothing gets recorded
end

For extra credit, add a status field set to 'incomplete' in your record_activity method, then update it to 'complete' in an after_filter. This is a great way to catch activity that throws exceptions for users and presents error pages you might not know about otherwise.
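
One way to wire that up -- the method name and status values here are just the ones I'd pick:

# in record_activity, just before the save:
#   @activity.status = 'incomplete'

after_filter :complete_activity

def complete_activity
  if @activity && !@activity.new_record?
    @activity.update_attribute(:status, 'complete')
  end
end

Any request that dies mid-action leaves an 'incomplete' row behind, which makes a handy query for finding the error pages your users never tell you about.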

P.S. Let me know if you'd like to try out the software.

Wednesday, March 26, 2008

THATCamp 2008

I'm going to THATCamp at the end of May to talk about FromThePage and a few dozen other cool projects that are going on in the digital humanities. If anybody can offer advice on what to expect from an "unconference", I'd sure appreciate it.

This may be the thing that finally drives me to use Twitter.

Monday, March 17, 2008

Rails 2.0 Gotchas

The deprecation tools for Rails 2.0 are grand, but they really don't tell you everything you need to know. The things that have bitten me so far are:

  • The built-in pagination has been removed from the core framework. Unlike tools like acts_as_list and acts_as_tree, however, there's no obvious plugin that makes the old code work. This is because the old pagination code was really awful: it performed poorly and hid your content from search engines. Fortunately, Sara was able to convert my paginate calls to use the will_paginate plugin pretty easily (a before/after sketch follows this list).
  • Rails Engines, or at least the restful_comments plugin built on top of them, don't seem to work at all. So I've had to disable the comments and proofreading request system I spent November through January building.
  • Rails 2.0 adds some spiffy automated code to prevent cross-site request forgery security holes. For some reason this breaks my cross-controller AJAX calls, so I've had to add
    protect_from_forgery :except => [my old actions]
    to those controllers after getting InvalidAuthenticityToken exceptions.
  • The default session store has been changed from a filesystem-based engine to one that shoves session data into the browser cookie. So if you're persisting large-ish objects across requests in the session, this will fail. Sadly, basic tests may pass while serious work breaks: I found my bulk page transformation code worked fine for 20 pages, but broke for 180. The solution for this is to add
    config.action_controller.session_store = :p_store
    in your environment.rb file.
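
For the pagination bullet above, the conversion looked roughly like this; the old call is from memory, and the per_page value is arbitrary:

# Rails 1.x built-in pagination (removed in 2.0):
@page_pages, @pages = paginate :pages, :per_page => 25

# will_paginate equivalent:
@pages = Page.paginate :page => params[:page], :per_page => 25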

Sunday, March 9, 2008

Collaborative Transcription as Crowdsourcing

Yesterday morning I saw Derek Powazek present on crowdsourcing -- user-generated content and collaborative communities. While he covered a lot of material (users will do unexpected things, don't exploit people, design for the "selfish user"), there was one anecdote I thought especially relevant to FromThePage.

A publishing house had polled a targeted group of people to figure out whether they'd be interested in contributing magazine articles. The response was overwhelmingly positive. The appropriate studies were conducted, and the site was launched: a blank page, ready for article contributions.

The response from those previously enthusiastic users was silence. Crickets. Tumbleweeds. The editors quickly changed tack and posted a list of ten subjects who'd agreed to be interviewed by the site's contributors, asking for volunteers to conduct and write up the interviews. This time, people responded with the same enthusiasm they'd shown in the original survey.

The lesson was that successful editors of collaborative content endeavors have less in common with traditional magazine/project editors than they do with community managers. Absent the command-and-control organizational structure, a volunteer community still needs to have its efforts directed. However, this must be done through guidance and persuasion: concrete suggestions, goal-setting, and feedback. In future releases, I need to add features to help work owners communicate suggestions and rewards* to scribes.

(* Powazek suggests attaboys here, not complex replacements for currency.)

Wednesday, March 5, 2008

Meet me at SXSWi 2008

I'll be at South by Southwest Interactive this weekend. If any of my readers are also attending, please drop me a message or leave a comment. I'd love to meet up.

Thursday, February 7, 2008

Google Reads Fraktur

Yesterday, the German blog Archivalia reported that the quality of Fraktur OCR at Google Books has improved. There are still some problems, but they're on the same order as those found in books printed in Antiqua. Compare the text-only and page-image versions of Geschichte der teutschen Landwirthschaft (1800) with the text and image versions of the Antiqua-printed Altnordisches Leben (1856).

This is a big deal, since previous OCR efforts produced results that were not only unreadable, but un-searchable as well. This example from the University of Michigan's MBooks website (digitized in partnership with Google) gives a flavor of the prior quality: "Ueber den Ursprung des Uebels." ("On the Origin of Evil") results in "Us-Wv ben Uvfprun@ - bed Its-beEd."

It's thrilling that these improvements are being made to the big digitization efforts -- my guess is that they've added new blackletter typefaces to the OCR algorithm and reprocessed the previously-scanned images -- but this highlights the dependency OCR technology has on well-known typefaces. Occasionally, when I tell friends about my software and the diaries I'm transcribing, I'm asked, "Why don't you just OCR the diaries?" Unfortunately, until someone comes up with an OCR plugin for Julia Brumfield (age 72) and another for Julia Brumfield (age 88), we'll be stuck transcribing the diaries by hand.

Monday, February 4, 2008

Progress Report: Four (make that N) steps to deployment

I've completed one of the four steps I outlined below: my Rails app is now living in a Subversion repository hosted somewhere further than 4 feet from where I'm typing this.

However, I've had to add a few more steps to the deployment process. These included:

  • Attempting to install Trac
  • Installing MySQL on DreamHost
  • Installing Subversion on DreamHost
  • Successfully installing BugZilla on DreamHost
None of these were included in my original estimate.

Name Update: FromThePage.com

I've finally picked a name. Despite its attractiveness, "Renan" proved unpronounceable. No wonder my ancestors said "REE-nan": it's at least four phonemes away from a native English word, and nobody who was shown the software was able to pronounce its title.

FromThePage is the new name. It's not as lovely as some of the ones that came out of a brainstorming session (like "HandWritten"), but at least there are no existing software products that use it. I went ahead and registered fromthepage.com and fromthepage.org under the assumption that I'd be able to pull off the WordPress model of open-source software that's also hosted for a fee.

Monday, January 21, 2008

Four steps to deployment

Here are the things I need to do to deploy the 0.5 app on my shared hosting provider:
  • Install Capistrano and get it working
  • Upgrade my application stack to Rails 2.0
  • Switch my app from a subdirectory deep within another CVS module to its own Subversion module
  • Move the app to DreamHost
But what order should I tackle these in? My temptation is to try deploying to DreamHost via Capistrano, since I'm eager to get the app on a production server. Fortunately for my sanity, however, I read part of Cal Henderson's Building Scalable Websites this weekend. Henderson recommends using a staging site. While he probably had something different in mind, this seems like a perfect way to isolate these variables: get the Capistrano scripts working on a staging location within my development box, and then, once I really understand how deployment automation works, point the scripts at the production server.
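
For the staging-first approach, a minimal config/deploy.rb sketch might look like this; every host name and path here is hypothetical:

set :application, "fromthepage"
set :repository,  "http://svn.example.com/fromthepage/trunk"
set :deploy_to,   "/home/someuser/apps/fromthepage"

# point everything at the staging box until the recipes are trusted,
# then swap in the production server
role :app, "staging.example.com"
role :web, "staging.example.com"
role :db,  "staging.example.com", :primary => true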

As for the rest, I'm not really sure when to do them. But I will try to tackle them one at a time.

Friday, January 18, 2008

What's next?

Now that I'm done with development driven only by my sense of what would be a good feature, it's time to move to step #2 in my year-old feature plan: deploying an alpha site.

I'm no longer certain about the second half of that plan -- other projects have presented themselves as opportunities that might have a more technically sophisticated user base, and thus might present more incremental enhancement requests. But getting the app to a server where I can follow Matt Mullenweg's advice and "become [my] most passionate user" seems more sensible now than ever.

Chris Wehner's SoldierStudies.org

This is the first of two reviews of similar transcription projects I wrote in correspondence with Brian Cafferelli, an undergraduate working on the WPI Manuscript Transcription Assistant. In this correspondence, I reviewed systems by their support for collaboration, automation, and analysis.

SoldierStudies.org is a non-academic/non-commercial effort like my own. It's a combined production-presentation system with simple but effective analysis tools. If you sign up for the site, you can transcribe letters you possess, entering metadata (name and unit of the soldier involved) and the transcription of the letter's text. You may also flag a letter's contents as belonging to N of about 30 subjects using a simple checkbox mechanism. The UI is a bit clunky in my opinion, but it actually has users (unlike my own program), so perhaps I shouldn't cast stones.

Nevertheless, SoldierStudies has some limitations. Most surprisingly, they are doing no image-based transcription whatsoever, even though they allow uploads of scans. Apparently those uploaded photos of letters are merely to authenticate that the user hasn't just created the letter out of nothing, and only a single page of a letter may be uploaded. Other problems seem inherent to the site's broad focus. SoldierStudies hosts some WebQuest modules intended for K-12 pedagogy. It also keeps copies of some letters transcribed in other projects, like the letters from James Booker digitized as part of the Booker Letters Project at the University of Virginia. Neither of these seems part of the site's core goal "to rescue Civil War letters before they are lost to future generations".

Unlike pure-production systems like the IATH MTD or the WPI MTA, SoldierStudies presents its transcriptions dynamically. This allows full-text searching and browsing the database by metadata. Very cool.

So they've got automation mostly down (save the requirement that a scribe be in the same room as a text), analysis is pretty good, and there's a stab at collaboration, although texts cannot be revised by anybody but the original editor. Most importantly, they're online, actively engaged in preserving primary sources and making them accessible to the public via the web.

Thursday, January 17, 2008

Feature Triage for v1.0

I've been using this blog to brainstorm features since its inception. Partly this is to share my designs with people who may find them useful, but mainly it's been a way to flush this data out of my brain so that I don't have to worry about losing it.

Last night, I declared my app feature-complete for the first version. But what features are actually in it?

Let's do some data-flow analysis on the collaborative transcription process. Somebody out in the real world has a physical artifact containing handwritten text. I can't automate the process of converting that text into an image — plenty of other technologies do that already — so I have to start with a set of images capturing those handwritten pages. Starting with the images, the user must organize them for transcription, then begin transcription, then view the transcriptions. Only after that may they analyze or print those transcriptions.

Following the bulk of that flow, I've cut support for much of the beginning and end of the process, and pared away the ancillary features of page transcription. The resulting feature set is below, with features I've postponed till later releases struck out.
  1. Image Preparation
  2. Page Transcription
  3. Transcription Display
    • Table of Contents for a work
    • Page display
    • Multi-page transcription display
    • Page annotations
  4. Printable Transcription
    • PDF generation
    • Typesetting
    • Table of contents page
    • Title/Author pages
    • Expansion footnotes
    • Subject article footnotes
    • Index
  5. Text Analysis

Wednesday, January 16, 2008

Whew

I've just finished coding review requests, the last feature slated for release 1.0.

I figure I've completed 90% of the work to get to 1.0, so I'll dub this check-in Version 0.5.

The next step is to deploy to a production machine for alpha testing, which I expect will involve at least a few more code changes. But from here on out, every modification is a bug fix.

I'll try to post a bit more on my feature triage, my experience mis-applying the Evil Twin plugin design pattern, and my plans for the future.