A mass-etymological breakdown tool that I'd create if I could, but I can't.

As a writer, what I would love most of all, more than freshly baked raisin bread prepared each morning and served to me on a bedside table with real butter from well-loved hill-dwelling tinkling-bell cows, is a mass etymological breakdown tool. You could feed such a tool something you've written, a few pages' worth, and it would tell you the origins of all your words - how many Anglo-Saxon-rooted words, how many Latinate terms, how much French and how much Spanish - until you had a clear sense of the historical patterns of speech you'd been picking up on your linguistic antennae. You'd learn about your personal set of language influences. Hemingway, for instance, eschewed Latinate words as much as possible; a huge proportion of the words in his works are raw, hard Anglo-Saxon. It would be great, I think, to compare Hemingway's percentage of Anglo-Saxon to Fitzgerald's.

Let's say you're working on an advertisement for a power drill. You feed it into the Etymologizer and find that you're using, say, 60% Latinate verbs and nouns. You'd know you had a problem, since the main power-drill market isn't all that Latinate. That's a little forced, but suppose you were writing a speech for a doctor in a screenplay, or for a scholar from Samuel Johnson's era, and historical accuracy mattered: you might want to inflect his speech with Latin, and if the tool were connected to an etymologically informed semantic thesaurus, it could suggest other, more era-and-culture-appropriate terms, including euphemisms. Other users might be Civil War re-enactors and fantasy gamers of a historical bent who want to fit their language exactly to the period of the game.

A smart, smart system could analyze all the works of Jane Austen, put them into a randomly accessible "lookup hash," and go through your text and find individual words and replace them with Austenite synonyms. Thus, the Austenizer. If you could write sentences in the rhythm of Austen, the system could suggest replacement words that Austen actually used, and you end up with truly Austenish prose. It would be fun to run Austen's Emma through the Eliotizer and see what the process of Middlemarchization does to Emma's romantic travails. It wouldn't work, exactly, but it would do something interesting, like annoy literary types.

I'm just playing (although I did spend four long days trying to build a prototype of such a tool a month ago, with no success), and there are many problems with the ideas above, but I keep wondering when we're going to begin using computers to truly process the amazing wealth of knowledge we have, to truly synthesize. When will we go native, and begin to use all this processing power? The problem is that data and information, like source code, are extremely proprietary and difficult to encode.

Now, the entire text of the unabridged Oxford English Dictionary is encoded in SGML. This makes it, in essence, a giant database, ready for searching, manipulating, sorting, and otherwise fiddling with. But people - libraries, consumers, you, and until a few weeks ago, me (but not necessarily the OED people themselves, who are exceptionally forward-thinking) - still perceive it as a book, or a set of books, even in its electronic form. It's not.

It would be great to get my hands on that data; it would make the Etymologizer much simpler to create, or at least feasible. The problem is that the value of the OED database is too great to just allow people to fool with it as they willed, and licenses cost hundreds or thousands of dollars just to look at the thing, without any access to the "source code" of the dictionary. Plus, as far as I know, they won't sell "slices" of the dictionary; you can't buy the Etymology slice of all 9 trillion words. Some information-must-be-free types might argue that the OED, and other extremely valuable cultural documents, should go "open source" and be freely available, so that their usefulness can be magnified amongst the peoples of the English-speaking earth. But why? What have you and I done for the OED?

(I often wonder what could happen if the community around Open Source - think Slashdot - advocated decent health care for the poor instead of complaining about Microsoft. Pushed for campaign finance reform. Tried to make direct change in international foreign policy. Went after the FCC for its absolute sell-out to corporate interests. What a difference they could make.)

So let's write off using the OED as a pipe dream; the OED's sacred trove of SGML-encoded word-ideas is as far from my hands as [insert metaphor about something far-away here]. The closest "open-source" equivalent of the OED is the DICT Development Group's DICT project, an RFC-developing, free-software effort that defines a protocol for retrieving dictionary-style definitions along a variety of linguistic axes, and uses a variety of freely available texts to create a sort of meta-dictionary. But nothing has the wealth of the OED.

As has been relentlessly beaten into our heads by role-playing-games Libertarian types, the advantage of Open-Source software, typified by Linux circa 1995, before the Linux penguin put on make-up and a cheap nylon dress and went down to the docks of Corporationville and whored itself, is that you can go in and muck with things; you don't have to conform to the standards of software, if you're willing to learn a huge mess of arcane nonsense. I can make my application windows look any way I want; they can look like cubes, or Fiat automobiles; I can have my computer greet me by saying "You fuckwit! Get to work!" and build custom applications to make things function in an interesting, engaging fashion according to my own principles and beliefs, as long as I can find a manual.

Most software is still proprietary, but a significant amount isn't, because of the open source movement - enough to put together a working system, enough to build Ftrain. However, nearly all current formalized data, or information, or knowledge, what have you, comes pre-packaged with a set interface, even on the Web, because that data is proprietary and the companies feel they have more to gain by setting up walls than by sharing. Often, they're right. Thus, electronic dictionaries and encyclopedias, e-books, and so forth all have their own encoding and database formats and secret methods of access, and if you want to re-use the data, you have to pay a large licensing fee, if they'll let you use it at all. Usually, they will, if you don't compete with them directly; licensing is free money. However, for the freely-available-on-the-Web Etymologizer, that's impossible. There's no money to spend past, say, the $100-$200 I could dig up in quarters from under the bed. So I'm caught trying to massage an etymological dictionary out of a poorly encoded source.

If I'm running into this bind, as an amateur programmer and half-assed Web writer, then others will too. The answer is to create new ways to share information, which has been the goal of the XML "community," but what they're doing is closed off to the commonfolk because it's confusing, and no one has come down off the mountaintop at the W3C to make it clear what they're actually up to over there, yet:

PEF: "I don't understand how all this XML/XHTML/XLink/XPointer/XPath/XSL/SVG/FO stuff is going to work together, what the goals are, where the vision is. I mean, it's all great, don't get me wrong. I use it to build Ftrain.com."

W3C: "Just look at the standard and all will be manifest."

PEF: "But it's 9000 pages, and is filled with Backus-Naur grammar statements. I'm a human, not a computer! What are you guys really trying to do? What vision are you trying to promote?"

W3C: "We're trying to build <bigbrightlights>The Semantic Web</bigbrightlights>"

PEF: "But what is it? Can Ftrain.com be part of <bigbrightlights>The Semantic Web</bigbrightlights>?"

W3C: "Whether you wish to or not, all must belong to <bigbrightlights>The Semantic Web</bigbrightlights>."

PEF: "You're transforming into a giant terrifying aluminum robot!"

W3C: "<loud>Must...<louder> have... <loudest>corporate...<loudest-yet> funding... </loudest-yet> </loudest> </louder> </loud>"

The W3C wants to connect all data through semantic pathways. It's not enough, they feel, to put a site up on the Web with proprietary content; you need to find ways to make that content into objects that can fit into other people's objects, and vice versa. Good luck to 'em. I think you'd need to change the culture, first; do Americans really want to share? Does anyone want to share with the Americans?

So, in any case, back to the Etymologizer: an etymological analyzer with a complete database of words and their histories, like the Oxford English Dictionary's, connected to a large semantic WordNet that could pluck out synonyms by traversing a variety of linguistic trees, could tell you whether your speakers were anachronistic or not. It could catch a speaker in England describing trunks instead of boots, and other tiny, narrow things that keep writers in horror before the screen.

Here's how it could work:

First, take text and break it into sentences, using familiar routines. The Perl computer language, for instance, makes this feasible.
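The author would presumably do this in Perl; as a rough illustration of the kind of routine involved, here is a deliberately naive sentence splitter in Python (a sketch, not a production tokenizer - abbreviations and quotations need far more care):

```python
import re

def split_sentences(text):
    """Naively split text into sentences: break after ., !, or ?
    when followed by whitespace and a capital letter or quote mark."""
    pieces = re.split(r'(?<=[.!?])\s+(?=[A-Z"])', text)
    return [p.strip() for p in pieces if p.strip()]

split_sentences("The owl blinked. Was it angry? Relentlessly, it stared.")
```

This returns the three sentences as a list, which the next stage can walk through one at a time.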

Then, scan the sentences with a link grammar parser or similar technology. This identifies the parts of speech of the sentences -- nouns, verbs, adjectives, etc.
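A real link grammar parser builds a full syntactic analysis; purely to illustrate the noun/verb distinction the Etymologizer needs from this stage, here is a toy suffix-heuristic guesser in Python (the suffix lists are invented for the example and would fail on plenty of real words):

```python
# Hypothetical suffix lists standing in for a real parser's grammar.
NOUN_SUFFIXES = ("tion", "ment", "ness", "ity")
VERB_SUFFIXES = ("ate", "ize", "ify", "ed", "ing")

def guess_pos(word):
    """Crudely guess a word's part of speech from its ending."""
    w = word.lower()
    if w.endswith(NOUN_SUFFIXES):
        return "noun"
    if w.endswith(VERB_SUFFIXES):
        return "verb"
    return "unknown"

guess_pos("lubrication")  # noun
guess_pos("lubricated")   # verb
```

Even this crude rule separates "lubrication" from "lubricated"; the link parser does the same job properly, using the structure of the whole sentence.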

Now, use an etymological dictionary to look up each word and code it according to its origins. There is a wide range of problems here that need to be solved.

First, as I've stated above, there's the lack of a manipulable electronic dictionary. The best candidate is the Webster's dictionary from 1913, from Project Gutenberg, which has been encoded into a somewhat clean HTML form by the GNU Dictionary project. The problem here is that only root words include etymology, so "anger" might include etymological records, while "angry" won't. The best approach I can come up with is to use some sort of word-stemming technology - I think there's a Perl module for this - and, when that doesn't work, simply keep cutting the word back until a match turns up and I can guess. So when I come across "relentless," which has no etymological information, I check to see if I have information for "relentles" (no), "relentle" (no), "relentl" (no), and "relent" (yes), and cross-reference the definition for "relent" to the word "relentless." Okay, but I'm screwed for fury/furious; it'll think that the root of "furious" is the same as the root of "fur."
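The cut-the-word-back fallback is simple to sketch. Here it is in Python against a tiny, entirely hypothetical slice of an etymological dictionary; note that it finds "relent" inside "relentless" correctly and then, exactly as feared, mistakes "fur" for the root of "furious":

```python
# Hypothetical etymology entries; a real table would come from the
# dictionary data discussed above.
ETYMOLOGIES = {
    "relent": "Latin",
    "fur": "Old French",
}

def lookup_etymology(word):
    """Look the word up; on a miss, cut one letter off the end and
    retry until a root with an entry appears, or the word runs out."""
    w = word.lower()
    while w:
        if w in ETYMOLOGIES:
            return w, ETYMOLOGIES[w]
        w = w[:-1]  # relentless -> relentles -> ... -> relent
    return None

lookup_etymology("relentless")  # matches the root "relent"
lookup_etymology("furious")     # wrongly matches "fur" - the fury/furious problem
```

A stemmer run first would catch most of the fury/furious cases; the truncation loop is only the backstop.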

Now, of course, the link grammar parser isn't perfect, but it does a fairly good job of guessing which words are which. Notice how, in the two sentences "Is the owl's cloaca properly lubricated?" and "I applied proper lubrication to the owl's cloaca," the link parser can tell the difference in part of speech between "lubricated" (a verb) and "lubrication" (a noun).


    |       +----------------------Pv----------------------+      |
    |       +----------SIs---------+                       |      |
    +---Qd--+   +-Ds-+--YS-+--D*u--+                       |      |
    |       |   |    |     |       |                       |      |
LEFT-WALL is.v the owl.n 's.p cloaca[?].n [properly] lubricated.v ? 

    |                +-------------MVp-------------+                            | 
    |                +---------Os---------+        +----------Jp---------+      | 
    +---Wd---+---Ss--+         +-----A----+        |  +-Ds-+--YS-+--D*u--+      | 
    |        |       |         |          |        |  |    |     |       |      | 
LEFT-WALL i[?].n applied.v proper.a lubrication.n to the owl.n 's.p cloaca[?].n . 



Pretty neat, eh? Now, assuming (big assumption) that all works, we have to put it together. And here's the real problem - etymology, word history, is a range of values. Is a word with a Latin root but French inflection French or Latin? Words have traveled through millennia to get to us, through some awkward paths; at the root is a sort of Indo-European metalanguage from which all water flows. There's almost no way to look at a term and point out a definitive year and place for its usage.

My take, since we're analyzing texts, tracing back the habits of the writer, would be a large corpus analysis across all the languages that influenced English. You find out when certain words were used most and create a kind of frequency table; you show word chains and common roots in a writer's prose. Then you know when words were most common, and you can track the language of your text back to certain eras. Run Shakespeare through and find out how much Latin and Greek really affected him (quite a bit, we already know). Run translations of Horace or Ovid or Homer through to find out how much the translators use native-language-inflected words. Hours of fun - beats the hell out of, say, cribbage.
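Rolled together, the pipeline's end product is a percentage breakdown like the one imagined for the power-drill ad. A minimal sketch in Python, with a hypothetical five-word origin lexicon standing in for the real dictionary:

```python
import re
from collections import Counter

# Hypothetical word -> origin tags; a real lexicon would come from the
# etymology lookup stage.
ORIGINS = {"raw": "Anglo-Saxon", "hard": "Anglo-Saxon", "stone": "Anglo-Saxon",
           "elegant": "Latinate", "proportion": "Latinate"}

def origin_profile(text):
    """Percentage breakdown of word origins, over the words we can classify."""
    words = re.findall(r"[a-z']+", text.lower())
    tags = [ORIGINS[w] for w in words if w in ORIGINS]
    total = len(tags) or 1  # avoid dividing by zero on unclassifiable text
    return {origin: round(100 * n / total) for origin, n in Counter(tags).items()}

origin_profile("The raw hard stone was an elegant proportion of the wall.")
# {'Anglo-Saxon': 60, 'Latinate': 40}
```

Feed it Hemingway and Fitzgerald and compare the two dictionaries that come back; that's the whole Etymologizer, minus the years of lexicography.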

My overall point, even though I didn't set out with this as my overall point, isn't that the world needs an Etymologizer, although it desperately does. It's that computers can provide tools to do things we never thought we needed, not just things that we did in the "real world," with other technologies. Too much computing is like Photoshop, based originally on the real-world process of photolab work and bound into that metaphor forever after, making an awful lot of magazine illustrations less interesting in the process, or MSWord, which confuses printing with writing. The metaphor of the desktop, the file folder, the garbage can, the relentless insistence that things be familiar and intuitive in ways that make the most immediate sense to bored administrative assistants, these all ultimately bound us to some of the blandest aspects of the "real world" - aspects of sorting, manipulating, processing, key-stroking drudgery - that computers are supposed to eliminate; such interfaces lack soul just as a filing cabinet lacks soul, and if there is a place where people are changing this - not just putting a new face on it, but looking to build a whole new set of abstract tools for dealing with knowledge and ideas - I wish I could find it, and go there, and sit and drink coffee.




Ftrain.com is the website of Paul Ford and his pseudonyms. It is showing its age. I'm rewriting the code but it's taking some time.



About the author: I've been running this website since 1997. For a living I write stories and essays, program computers, edit things, and help people launch online publications. (LinkedIn). I wrote a novel. I was an editor at Harper's Magazine for five years; then I was a Contributing Editor; now I am a free agent. I was also on NPR's All Things Considered for a while. I still write for The Morning News, and some other places.

If you have any questions for me, I am very accessible by email. You can email me at ford@ftrain.com and ask me things and I will try to answer. Especially if you want to clarify something or write something critical. I am glad to clarify things so that you can disagree more effectively.




© 1974-2011 Paul Ford

