Running the Machine

Under the hood of Ftrain.com

This is the code for Ftrain with demo XML, except for the journal-manipulation stuff: ftrain_code.tar.gz (55K).

This is boring and technical, but I promised about 10 people I'd do it. If you're not interested in web site structure and layered digital narratives and XSLT and XML and so forth, don't even bother reading further. Sometimes I say that in a joking manner, but writing this up was about as much fun as pulling off my own toes, because at this point there are about 9 billion little decisions embedded in the code and I can't take the time to explain each one right now, even though I'd like to. No fun follows.

.  .  .  .  .  

Anyone who wants to work on this code with me should drop a line. What's here is an enormously slimmed-down micro-version of the original 9 billion lines of shit-code I wrote, and now that it's fast, I want to expand the features, create a core suite of small (PHP?) functions that would auto-execute on each page to let people add content to the pages on an ad-hoc basis, and build the 30-40 different little tools it would take to make a real, proper Web site publishing framework - tools that for some reason no one else seems to be bothering with, like creating an interactive fiction layer over the narrative, or tokenizing the narrative to let people move it around as they read. Allowing people to create "guided tours" of this content for their own sites, where their own comments and ideas about a page are published at the top of the page when they link someone to it. Finding ways for people to take narrative tokens with them to other Web sites, creating a linked space between different Web sites and a contiguous narrative experience between entirely different content zones; editorially and contextually adaptive pages; multiple story pathways; varying forms from individual content (plays from stories, stories from novels); versioning; serialized cross-linked content; and so forth. Blah blah blah.

Anyone who seriously wants to make a Web site with this stuff should let me know, too.

Someone please give me a grant and an office for a few months, while you're at it? I promise to behave and post to the site every day and avoid prison sex jokes for the duration. The grant should be in "narrative technologies" and there should be a stipend and a tiny, clean apartment within walking distance of an office and a grocery. I'll deliver open-sourced code and a public thesis or presentation at the end and scrub floors and give seminars and ask questions and actually write a continuous multiaxial narrative (spatial/ digital/ character/ chronological/ emotional/ rhetorical) and read the right books and toe the line and I give up.

Sometimes having grand plans is totally isolating. The bottleneck is my brain.

.  .  .  .  .  

Right now, the system only works with the xsltproc command-line processor, which is part of libxslt, which in turn requires libxml. Both are at http://www.xmlsoft.org.

Version: Using libxml 20310 and libxslt 1100

Later versions may not work! I'm not sure why. I used version 12 and no page text appeared; version 13 may have fixed the problem, but I haven't tested it. Version 11 of libxslt does work, however.

SAXON currently doesn't work because of a difference in the handling of the document output function between libxml and libxslt. If you'd like to know how to make it work with SAXON, let me know and I'll write it up and send you an email. This will probably change in the future, since they both claim to support the standard and one behavior or another for document output stuff must be right. I'm assuming right now that LibXSLT does it right, and perhaps the new version of SAXON does, too.

No other XSLT processors have been tested, but assume they're not going to work unless they're really up-to-date. Sablotron probably won't; it's missing some important functions. If you're using Xalan, stop.

Once you've installed the libraries, you need to edit the file work/scripts/ftrain_vars.xsl in the installed directory and change the first variable to the full path on your own machine. So:

<xsl:variable name="dir_root">/home/ford/ftrainDEMO/</xsl:variable>

becomes something like:

<xsl:variable name="dir_root">/my/home/dir/ftrainDEMO/</xsl:variable>

Then go into the work directory and run the shell script ./buildsite.sh. If you have xsltproc available, it should spit out a whole bunch of HTML files at the top level, along with one RDF file (like the one at http://ftrain.com/ftrain.rdf). View the index.html file with your browser and you should be ready to go.

Basically, if you don't know XSLT it's going to seem like a big pot of nonsense. My stuff is probably not a good way to learn XSLT, as I learned everything wrong and now I just use a lot of tricks.

When you run the buildsite.sh script, it runs three XSLT scripts. The first, ftrain_map.xsl, slurps in demo.xml, the top-level XML file describing the "site," and writes out a file called map.xml. That file contains an exact map of all the content - all the titles, sections, descriptions, and dates. It skips any sections I've flagged as not-to-be-released.
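I'm not going to walk through the real stylesheet here, but the shape of the map pass is roughly this - note that the element and attribute names (site, section, release) are stand-ins, not Ftrain's actual DTD:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:template match="/site">
    <map>
      <xsl:apply-templates select="section"/>
    </map>
  </xsl:template>

  <!-- keep only the structural metadata for each released section -->
  <xsl:template match="section[not(@release = 'no')]">
    <section id="{@id}" title="{@title}" date="{@date}">
      <xsl:apply-templates select="section"/>
    </section>
  </xsl:template>

  <!-- flagged sections (and everything inside them) vanish from the map -->
  <xsl:template match="section"/>

</xsl:stylesheet>
```

The point is that the map keeps the hierarchy but throws away the body text, which is where all the bulk is.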

The second script, ftrain_toc.xsl, slurps in map.xml, the file we just created. For me, this is much more efficient than using the original Ftrain XML source, which is 2.5 megs; map.xml is only 131K. (Remember that things can be 10-12x bigger in memory as DOM trees, so it's really the difference between 30 megs and 1.2 megs.) This script spits out a bunch more XML files: a table of contents, a reverse-by-date table of contents, a forward-by-date table of contents, a last-10-entries listing, and an RDF representation of the site, for people who want to include that information on their own sites.
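Producing several output files from a single run is the "document output function" mentioned above; libxslt does it through the EXSLT exsl:document extension. A sketch of the idea, again with made-up element names:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:exsl="http://exslt.org/common"
    extension-element-prefixes="exsl">

  <xsl:template match="/map">
    <!-- one run, several output files -->
    <exsl:document href="toc.xml">
      <toc>
        <xsl:for-each select="//section">
          <entry id="{@id}" title="{@title}" date="{@date}"/>
        </xsl:for-each>
      </toc>
    </exsl:document>
    <exsl:document href="toc_by_date.xml">
      <toc>
        <!-- assumes dates sort sensibly as strings, e.g. ISO format -->
        <xsl:for-each select="//section">
          <xsl:sort select="@date" order="descending"/>
          <entry id="{@id}" title="{@title}"/>
        </xsl:for-each>
      </toc>
    </exsl:document>
  </xsl:template>

</xsl:stylesheet>
```

This extension behavior is exactly the sort of thing that varies between processors, which is why SAXON chokes.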

Now we run a third script, ftrain_main.xsl. This reads in demo.xml again. demo.xml has entities that point to the various table-of-contents listings we just produced - so, essentially, we've made the file contain maps of itself. Instead of parsing the big file by itself, we load the map created in the first step using XSLT's document() function and step through that. For each section in the map, we look up the corresponding section in the XML document using a key, and spit out a document. This original/map approach is much faster than dealing with the big XML document alone - 15 seconds vs. 20 minutes.
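The original/map trick in miniature looks something like this (the key name and element names are illustrative, not the real stylesheet's). The one subtlety is that key() only searches the current document, so you have to step back into the source tree before the lookup:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:key name="sec" match="section" use="@id"/>

  <xsl:template match="/">
    <!-- drive the whole run off the small map file -->
    <xsl:for-each select="document('map.xml')/map//section">
      <xsl:variable name="id" select="@id"/>
      <!-- change context back into the big source document
           so key() searches the right tree -->
      <xsl:for-each select="document('demo.xml')">
        <xsl:apply-templates select="key('sec', $id)"/>
      </xsl:for-each>
    </xsl:for-each>
  </xsl:template>

</xsl:stylesheet>
```

Keys are indexed, so each lookup is cheap; walking the 131K map and dipping into the big tree only when needed is where the 20-minutes-to-15-seconds win comes from.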

The key to ftrain_main.xsl is that if the system doesn't have a rule for an element, it just passes it through. So essentially my DTD is HTML - <p> tags, <img src="whatever.gif"/> tags and the like, plus structural information. Each gathering of HTML is a sort of document, and gathered around it is the information that tells the document where it belongs. I've added a few things, like xrefs, which point to another section of the site.
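The pass-through behavior is the standard XSLT identity template: any element or attribute without a more specific rule gets copied to the output untouched, so plain HTML flows straight through while things like xrefs get their own templates.

```xml
<!-- copy anything we don't have a more specific rule for -->
<xsl:template match="@*|node()">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()"/>
  </xsl:copy>
</xsl:template>
```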

Should be enough to get you started if you're a Linux geek. Everyone else will have to wait - I've got things fairly abstract at this point but I don't know how other XSLT vendors implement their stuff and there are a bunch of little changes. Still, the basic functioning is all pretty standard and one script feeds into another. If there is any desire for a more generalized Ftrain system I'll try to meet it. But I know you're all weak, all talk, and that losing your e-commerce stocks took your fire away and you won't actually be joining me in uncovering the possibilities of new narrative connections via the global Interweb because the Web isn't cool anymore. Assholes. Me, I have a 20% stake in an e-commerce telecom startup and gave it hundreds of grievous painful hours of programming and consultancy time. 100,000 shares of nothing. That's the title of my success story. But I refuse to forsake the Web I love.

That's it. All documents linked together, all in harmony, all with full knowledge of their place in the hierarchy, but so many possibilities for each to transcend its place.




© 1974-2011 Paul Ford

