Tuesday, November 11, 2008

Facebook Haiku/Doggerel #1

The other night, bored, I half watched TV and half watched my wife on Facebook. It inspired me to take up pen and produce some dreadful haikus, and in the best tradition of the Internet and blogs, I'm inflicting them on you. To draw out your pain and my evil glee, I'll deliver them one at a time.

Here's number one.

Heh. Enjoy.

Friends write on my wall.
Messages of cheer and joy
Why do they hassle me?

Friday, October 24, 2008

Concordances, Part 3: Positioning Tokens

Today, we're going to add to the processing information about where a token appears in the original document.

Tokens Today

Currently, a token is just the string containing the token data.

Easy enough.

Tokens Tomorrow

To hold more information about the token, we'll need a richer data type. To accommodate that, here's a struct for tokens:

(defstruct token :text :raw :line :start :end :filename)

This gives us slots to hold the token's text; its original text before case normalization, stemming, or whatever; the line it occurred on; the start and end indices where it can be found on that line; and the name of the file the token was read from.

Again, pretty simple.
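To make the struct concrete, here's a quick sketch of creating and reading one. The slot values are hypothetical:

```clojure
;; A quick look at the token struct in action. The values here
;; (word, positions, file name) are made up for illustration.
(defstruct token :text :raw :line :start :end :filename)

(def tkn (struct token "cat" "Cat" 3 10 13 "pets.txt"))

(:text tkn)      ; the normalized text, "cat"
(:raw tkn)       ; the original text, "Cat"
(:line tkn)      ; the line it appeared on, 3
```

Struct slots read just like map keys, which is all the rest of the code will need.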

Updating the Tokenization

The big changes happen in the tokenization procedure. Currently, it doesn't take lines into account.

Let's start with the highest-level functions and drill down to the lowest. First, these functions tokenize either a file or a string.

(defn split-lines [input-string]
  (.split #"\r\n|\r|\n" input-string))

(defn tokenize-str
  ([input-string]
   (tokenize-str-seq (split-lines input-string)))
  ([input-string stop-word?]
   (filter (comp (complement stop-word?) :text)
           (tokenize-str input-string))))

(defn tokenize
  ([filename]
   (with-open in (BufferedReader. (FileReader. filename))
     (doall
       (map (fn [tkn] (assoc tkn :filename filename))
            (tokenize-str-seq (line-seq in))))))
  ([filename stop-word?]
   (with-open in (BufferedReader. (FileReader. filename))
     (doall
       (map (fn [tkn] (assoc tkn :filename filename))
            (filter (comp (complement stop-word?) :text)
                    (tokenize-str-seq (line-seq in))))))))

split-lines breaks a string into lines based on a regex of line endings.
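One detail worth noting about that regex: the two-character `\r\n` alternative has to be tried before the single characters, or a Windows CRLF would be split twice and yield an empty extra line. A quick sanity check:

```clojure
;; Splitting across Unix (\n), Windows (\r\n), and old-Mac (\r) endings.
;; #"\r\n|\r|\n" tries CRLF first, so "two\r\nthree" splits cleanly.
(vec (.split #"\r\n|\r|\n" "one\ntwo\r\nthree"))
; => ["one" "two" "three"]
```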

tokenize-str uses split-lines to break its input into lines, and it calls tokenize-str-seq with them. The second overload for this function then filters the tokens with a stop list.
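Since Clojure sets act as functions, a plain set of words can serve as the stop-word? predicate. Here's a minimal sketch of the filtering (the stop list and tokens are made up):

```clojure
;; A set returns its element (truthy) for members and nil otherwise,
;; so it works directly as a stop-word predicate.
(def stop-word? #{"the" "a" "of"})

(filter (comp (complement stop-word?) :text)
        [{:text "the"} {:text "cat"} {:text "sat"}])
; => ({:text "cat"} {:text "sat"})
```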

tokenize opens a file with a java.io.BufferedReader and calls tokenize-str-seq with the file's lines. It also sets the :filename key on each token structure.

doall is thrown in there because map is lazy, but with-open isn't. doall forces map to evaluate everything. Without it, with-open would close the file before its contents could be read. This is a common mistake, and it will probably bite you regularly. It does me.
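To see the trap in isolation, here's a sketch contrasting a broken version with a doall-forced one, using a reader we control directly (and the bracketed with-open form of current Clojure):

```clojure
;; Without doall, the lazy seq escapes with-open and is realized
;; only after the reader has been closed.
(defn lines-broken [rdr]
  (with-open [r (java.io.BufferedReader. rdr)]
    (line-seq r)))        ; lazy: blows up later with "Stream closed"

(defn lines-safe [rdr]
  (with-open [r (java.io.BufferedReader. rdr)]
    (doall (line-seq r)))) ; forced while r is still open

(lines-safe (java.io.StringReader. "one\ntwo"))
; => ("one" "two")

;; (lines-broken (java.io.StringReader. "one\ntwo"))
;; throws java.io.IOException once the sequence is actually realized.
```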

We haven't seen tokenize-str-seq yet. What does it do?

(def token-regex #"\w+")

(defn- tokenize-str-seq
  "This tokenizes a sequence of strings."
  ([strings]
   (tokenize-str-seq strings 0))
  ([strings line-no]
   (when-first line strings
     (lazy-cat (tokenize-line line-no (re-matcher token-regex line) 0)
               (tokenize-str-seq (rest strings) (inc line-no))))))

This function tokenizes a sequence of strings. It walks through the sequence, numbering each line (line-no). For each input line, it constructs a lazy sequence by concatenating the tokens for that line (tokenize-line) with the tokens for the rest of the lines.

when-first is new. It is exactly equivalent to when plus let:

user=> (macroexpand-1 '(when-first line strings (println line)))
(clojure/when (clojure/seq strings)
  (clojure/let [line (clojure/first strings)]
    (println line)))

tokenize-line constructs a lazy sequence of the tokens in that line.

(defn- tokenize-line
  "This tokenizes a single line into a lazy sequence of tokens."
  ([line-no matcher]
   (tokenize-line line-no matcher 0))
  ([line-no matcher start]
   (when (.find matcher start)
     (lazy-cons (mk-token line-no matcher)
                (tokenize-line line-no matcher (.end matcher))))))

mk-token constructs a token struct from a regex and line number.

(defn- mk-token
  "This creates a token given a line number and regex matcher."
  [line-no matcher]
  (let [raw (.group matcher)]
    (struct token
            (.toLowerCase raw) raw
            line-no (.start matcher) (.end matcher))))
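mk-token leans on the java.util.regex.Matcher API: .find advances to the next match, and .group, .start, and .end report the matched text and its indices. In isolation:

```clojure
;; The Matcher calls mk-token relies on, by themselves.
(let [m (re-matcher #"\w+" "Hello world")]
  (.find m)                        ; advance to the first match
  [(.group m) (.start m) (.end m)])
; => ["Hello" 0 5]
```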

That's it. tokenize and tokenize-str create a sequence of strings of input data. Each item in the sequence is a line in the input.

tokenize-str-seq takes that input sequence and creates a lazy sequence of the tokens from the first line and the tokens from the rest of the input sequence.

tokenize-line takes a line and constructs a lazy sequence of the tokens in it, as defined by the regex held in token-regex.

Finally, mk-token constructs the token from the regex Matcher and the line number.

If you've made it this far, you've probably got Clojure up and running, but if not, Bill Clementson has a great post on how to set up Clojure+Emacs+SLIME. In the future, he'll be exploring Clojure in more detail. He's got a lot of good posts on Common Lisp and Scheme, and I'm looking forward to seeing what he does with Clojure.

I haven't really explained about Clojure's laziness. Next, I'll talk about that.

Friday, October 17, 2008

Work in Progress, Draft 2....

Is done!

This is a SF/F novel I've been working on all year.

Is it any good? Who knows? It's certainly better than my last two efforts.

For draft three, I'm just going to decide what point of view to use. I wrote it in first person, but I rewrote the first half of draft two in third-person. I'm going to read the whole thing and decide which half has more potential, and then I'll beat the other half into submission.

Then I will take a break to get some perspective on it, just in time for NaNoWriMo.

Wednesday, October 15, 2008

Blog Action Day

Lately, I've donated some time writing for a non-profit that promotes education in India to diminish poverty, hunger, child labor, illness, and other concerns. So when I saw that the topic for Blog Action Day 2008 was poverty, I thought I'd lend my voice.

When I sat down to write this entry, I looked on the Internet for some facts to throw at you.

Then I changed my mind.

In all those numbers and percent signs, it's easy to forget that poverty has a face.

It is the man who doesn't have the livestock and knowledge to get the most from his farm, and who thus cannot provide enough food for his family.

It is the single mother who doesn't have the education and resources to support her children.

It is the child who has to work in a factory to help her family survive.

The current world economic crisis makes poverty an even more pressing concern. Bill Gates and Warren Buffett will be fine. As worried as the stock brokers and bankers are, they won't bear the full impact of the crisis. They'll be fine. I'm no Bill Gates, but I'll be fine, too, thanks for your concern.

But the economic downturn will hit first and hardest on those who already have trouble getting enough to eat each day.

I would say, "Remember them," but I don't want you to remember them. I want you to find a reputable charity, such as one of these, and give.

Now. Go. Do it.

Concordances, Part 2: Making a Plan

In the last entry, I showed what a concordance looks like. Today, I'm going to talk about the changes that I'll need to make to what we already have and about what I'll need to add to the system that's here now.


The biggest underlying change is that tokens will need to know their position in the input document. The system that we're building can only handle plain text documents, so I get to pretend that position is a simple concept. But it's really not. For example, in an XML document, the position could be an XPath expression.

Even for a text document, position isn't entirely clear. Is the position of a token the byte it started on in the original raw file? Is it the after-Unicode-decoding character of the token in the text of the file?

For our case, I'm going to say that the position of a token is the line number and beginning and ending character in the Unicode data of the document. This is what my example yesterday had.

Input Text

To keep track of that, instead of slurping in a file's contents all at once, I'll now need to read in each line separately and keep track of the line numbers. That shouldn't be difficult, though.


Tokens will also need to be more than just plain strings. They'll now need to be structures that keep track of their location: file names, line numbers, and starting and ending character indices.

As an added bonus, tokens will also be able to keep track of the original form of the word, as well as a normalized or a stemmed form.


To display the documents in alphabetical order and to pull all the occurrences of each word together easily, I'll need to index the tokens by word. The index won't be industrial-strength, really. It will probably bog down with too large of a document. Other options would be to use Lucene or store the index in a database of some form. For now, though, I'll just keep things simple.
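One simple way to build such an index is a map from each word to a vector of its tokens. This is only a sketch of the plan, not the final code, and index-tokens is a name I'm making up here:

```clojure
;; A minimal sketch: index tokens (maps with a :text slot) by word.
(defn index-tokens [tokens]
  (reduce (fn [index tkn]
            (update-in index [(:text tkn)] (fnil conj []) tkn))
          {}
          tokens))

(index-tokens [{:text "a" :line 0} {:text "b" :line 0} {:text "a" :line 1}])
; => {"a" [{:text "a", :line 0} {:text "a", :line 1}],
;     "b" [{:text "b", :line 0}]}
```

Because the whole index lives in one in-memory map, this is exactly the part that would bog down on a large document.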


I can imagine displaying a concordance a number of different ways: text files, HTML, a GUI form. To keep the system flexible, I'm going to defer that decision until later, and the core concordance generator will just return some basic Clojure data types. I'll also add a function that prints them to the screen.

That's it. This should be a lot simpler than the Stemmer we just worked on, but it should give us a good idea of how various words are being used in the documents.

Thursday, October 9, 2008

Concordances, Part 1: What's that?

OK, that's enough of a break, don't you think?

Now we've got some processing tools under our belts: tokenizing and stemming. We'll tweak those, and we'll add more later. But for now let's take a look at our data.

One of the earliest ways of analyzing texts is the concordance. Simply put, it's an alphabetical list of the words used in a document, followed by every occurrence of that word, a little context around it (typically just a few words), and the location of that occurrence.

The first concordance was done in the thirteenth century for the Vulgate translation of the Christian Bible. However, because complete concordances are labor-intensive, they really weren't practical as a widespread tool. Because of this, concordances were only made for works deemed sufficiently important, such as religious texts or the works of Shakespeare or Chaucer.

But computers changed that. Now, given a text file, a computer can spit out a concordance in little time.

For example, here are the first few entries of a concordance of the first few paragraphs of this blog entry (although this may reflect an earlier draft). The numbers at the beginning are the line in a text file I copied the entry into and the start and end indices of the word in that line. This should give you a picture of what a concordance is, although a better one would pull context from surrounding lines. This one doesn't. Yet.

  1: 23: 24 K, that's enough of *a* break.
  5:  7:  8                take *a* look at our data.
 10: 35: 36 eren't practical as *a* wide-spread tool. B

  4: 17: 20      We'll probably *add* more later, and we'

  7: 30: 39 he earliest ways of *analyzing* texts is the [conco

  3: 66: 69 r belts: tokenizing *and* stemming.
  4: 33: 36 bly add more later, *and* we'll definitely tw

  9: 58: 61 mplete concordances *are* labor-intensive,

Next we'll analyze what we'll need to build the concordance and plan the next steps.

Friday, September 12, 2008

Stemming, Part 19: Debugging

Before I leave the Porter Stemmer behind, I want to show you some of the tools I used to debug the code as I went along.

There are some more modern options for debugging Clojure than what I'm presenting here. (Search the mailing list for details.) Personally, I generally use print statements for debugging. It's primitive, but effective. In some languages, it can also be painful. Fortunately, lisp languages take much of the pain out of print-debugging.


One common way to debug programs is to follow when a function is called and returns. This is called tracing, and this function and macro handle that.

(defn trace-call
  [f tag]
  (fn [& input]
    (print tag ":" input "-> ") (flush)
    (let [result (apply f input)]
      (println result) (flush)
      result)))

trace-call returns a new function that prints the input arguments to a function, calls the function, prints the result, and returns it. It takes the function and a tag to identify what is being traced.

(defmacro trace
  [fn-name]
  `(def ~fn-name (trace-call ~fn-name '~fn-name)))

The trace macro is syntactic sugar on trace-call. It replaces the function with a traced version of it that uses its own name as a tag. For example, this creates and traces a function that upper-cases strings:

user=> (defn upper-case [string] (.toUpperCase string))
#'user/upper-case
user=> (upper-case "name")
"NAME"
user=> (trace upper-case)
#'user/upper-case
user=> (upper-case "name")
upper-case : (name) -> NAME
"NAME"

The debug Macro

Another common trick in print-debugging is to print the value of an expression. The macro below evaluates an expression, prints both the expression and the result, and returns the result.

(defmacro debug
  [expr]
  `(let [value# ~expr]
     (println '~expr "=>" value#)
     value#))

For example:

user=> (debug (+ 1 2))
(+ 1 2) => 3
3

Lisp macros are especially helpful here, because they allow you to treat the expression both as data to print and as code to evaluate.

The debug-stem Function

This function is a debugging version of stem. It uses binding to replace all the major functions of the stemmer with traced versions of them.

(We'll talk more about binding later, when we deal with concurrency. Right now, just understand that binding temporarily replaces the value of a top-level variable, such as a function name, with a new value. The variable only has that value for the duration of the binding; afterward, it returns to its former value.)

(defn debug-stem
  [word]
  (binding [stem (trace-call stem 'stem),
            make-stemmer (trace-call make-stemmer 'make-stemmer),
            step-1ab (trace-call step-1ab 'step-1ab),
            step-1c (trace-call step-1c 'step-1c),
            step-2 (trace-call step-2 'step-2),
            step-3 (trace-call step-3 'step-3),
            step-4 (trace-call step-4 'step-4),
            step-5 (trace-call step-5 'step-5)]
    (stem word)))
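To see binding on its own, here's a small example. (Note that current Clojure requires the var to be marked ^:dynamic to be rebindable; in the Clojure of this post's era, every var was.)

```clojure
;; binding temporarily replaces a var's value on the current thread.
(def ^:dynamic shout (fn [s] (.toUpperCase s)))

(binding [shout (fn [s] (str "<<" (.toUpperCase s) ">>"))]
  (shout "hi"))
; => "<<HI>>"

(shout "hi")   ; outside the binding, the original value is back
; => "HI"
```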

That's it. These were the main functions I used in debugging the stemmer as I ported it from C and made it more Clojure-native.

Next up, we'll create a concordance and look at other ways of presenting the texts that we're analyzing.

By the way, I've also finally updated the repository for sample code.

Thursday, September 11, 2008

I'm Ba-a-ack

Boy, but doesn't life just get in the way sometimes?

On the other hand, I have also let it keep me from posting. I was getting so bored with the Clojure tutorial series. I hate to think how tired you must have been with it.

But I'm back, and I'm going to make some changes.

  1. I've put a link to the table of contents for the Clojure tutorial in the sidebar. No matter how tired I am of it, it's still the main content on here.

  2. I'm going to continue the Clojure tutorial, but the pace won't be quite as relentless as it was. Hopefully, I'll be able to inject a little more energy into it, and it won't be quite as boring.

  3. I'm going to intersperse the tutorial with some other postings. I'll catch you up on what I've been doing, as well as talk about some other things that have caught my interest.

That's it. The moral: When you're getting tired, take a break and retool.

Thursday, August 7, 2008

Stemming, Part 18: Processing and Testing

All the pieces are in place; now here is the final one. I’ll also describe how I tested this to make sure it was working correctly.

The stem Function

Everything that we’ve written so far happens under the hood. This function is finally the one function that will be called in other code. Without further ado, here it is.

(defn stem
  [word]
  (if (<= (count word) 2)
    word
    (apply str (-> word make-stemmer
                   step-1ab step-1c step-2 step-3 step-4 step-5
                   :word))))

If the word has one or two letters, just return it. If it is longer, use the -> macro to thread the word through make-stemmer and the steps, and extract the stem vector.

The word vector gets passed to the apply function. This is a special higher-order function that takes a function and its arguments as a sequence. It calls the function with those arguments and returns the result. Let’s look at how it works.

user=> (+ 1 2 3)
6
user=> (apply + '(1 2 3))
6
user=> (apply + 1 '(2 3))
6

You can see that only the last argument to apply has to be a sequence of arguments to pass to the function. The other arguments can be listed individually before the final sequence, and they are put before the sequence. The last argument, though, must always be a sequence, so you can’t do this:

user=> (apply + 1 2 3)   
java.lang.IllegalArgumentException: Don't know how to create ISeq from: Integer

Of course, if you’re doing that, you already know how many arguments you’re calling the function with, and in that case, you should just call it as is (that is, just call (+ 1 2 3)).

So in stem, we take a word vector and pass all of the characters in it to the str function. str converts all of its arguments to strings and concatenates them.

user=> (str \w \o \r \d)
"word"
user=> (apply str [\w \o \r \d])
"word"

Well, we have a new toy now, so let’s play with it:

porter=> (stem "porter")
"porter"
porter=> (stem "porting")
"port"
porter=> (stem "ports")
"port"
porter=> (stem "ported")
"port"
porter=> (stem "stemming")
"stem"


I’ve been presenting the code here as a finished product, perfect (I guess) as written. But it didn’t begin that way. In fact, I originally wrote something very close to the C version of the algorithm and made sure that worked right. Then I gradually changed it to make it more lispy. This is the result I have presented here.

To make sure it worked correctly, I downloaded the test input data and expected output from the Porter Stemmer web site. The first file contains 23,531 words for a test set. The second contains those same words after they’ve been run through the stemmer.

Next, I wrote a function that reads from both files, stems the input, and compares it to the output. I don’t always need to test every item in the test set. Sometimes I can get by with only testing the first so many words, so I’ve included a parameter to limit how many words to test. Also, sometimes I may want to see the output from every word in the test set, but most of the time, I really only want to see the errors. Finally, this returns the total number of words tested, the number the stemmer got right, and the number it got wrong.

(defn read-lines
  [filename]
  (with-open reader (new java.io.BufferedReader (new java.io.FileReader filename))
    (doall (line-seq reader))))

(defn test-porter
  ([]
   (test-porter (.MAX_VALUE Integer) false))
  ([n output-all?]
   (loop [input (take n (read-lines "porter-test/voc.txt")),
          expected (take n (read-lines "porter-test/output.txt")),
          total 0, correct 0, error 0]
     (if (and input expected)
       (let [i (first input), e (first expected), a (stem i)]
         (if (= a e)
           (do
             (when output-all?
               (println "OK:" (pr-str i)))
             (recur (rest input)
                    (rest expected)
                    (inc total)
                    (inc correct)
                    error))
           (do
             (println "ERROR:" (pr-str i)
                      "=> (" (pr-str a) "!=" (pr-str e) ")")
             (recur (rest input)
                    (rest expected)
                    (inc total)
                    correct
                    (inc error)))))
       [total correct error]))))

The highlights of this are:

  • read-lines is a utility that opens a file using a Java BufferedReader and assigns that to reader. with-open always calls (. reader close) when it exits. line-seq takes a reader and returns a lazy sequence on the lines in the reader, and doall forces Clojure to read all the items in a lazy sequence. Basically, read-lines reads all the lines in a file and returns them in a sequence.

  • As we’ve seen before, take pulls the first n items from a list, which limits the number of words to be tested.

  • The loop continues while both input and expected still have items.

  • The input is stemmed and stored as the variable a (short for actual).

  • If the actual is the same as the expected, optionally output that, and loop, incrementing the number of total words tested and the number of words stemmed correctly.

  • If the actual and expected are not the same, always write this out and loop, incrementing the number of total words tested and the number of errors.

Tomorrow, I’ll talk about how I tracked down bugs that cropped up during testing.

Wednesday, August 6, 2008

Stemming, Part 17: The Final Step

Today, we pick apart step five. This will involve removing the final -e and -l from words. But each of these cases will be handled in a different function.

Final E

Silent -e is one of the more, um, endearing quirks of English spelling. Invariably, whenever someone suggests spelling reform, it is the first letter on the chopping block. Stemming isn’t spelling reform, but this step does get rid of silent and final -e in all words with more than one internal consonant cluster.

(defn rule-e
  "This removes the final -e from a word if
    - there is more than one internal consonant cluster; or
    - there is exactly one final consonant cluster and
      it is not preceded by a CVC sequence."
  [stemmer]
  (if (= (last (:word stemmer)) \e)
    (let [a (m stemmer)]
      (if (or (> a 1)
              (and (= a 1)
                   (not (cvc? stemmer (dec (:index stemmer))))))
        (pop-word stemmer)
        stemmer))
    stemmer))

Final L

Handling final -l just changes -ll to -l if there is more than one consonant cluster in the word. This cleans up any double-l’s that may be left around.

(defn rule-l
  "This changes -ll to -l if (> (m) 1)."
  [stemmer]
  (if (and (= (last (:word stemmer)) \l)
           (double-c? stemmer (dec (count (:word stemmer))))
           (> (m stemmer) 1))
    (pop-word stemmer)
    stemmer))

Step 5

Once again, the function for step five is pretty simple:

  1. It pulls the word from the input stemmer and creates a new one with an index pointing to the end of the word;
  2. It runs that through both of the functions listed above.

Again, notice that we used -> to make it easier to read.

(defn step-5
  "Removes a final -e and changes -ll to -l if (> (m) 1)."
  [stemmer]
  (-> stemmer :word reset-index rule-e rule-l))


One quirk with this algorithm is that it sometimes removes letters from standard English words. For example, locate becomes locat. The output it produces isn’t standard, but it is correct. That is, all forms of locate collapse down to one stem: locat. So if you see a word that isn’t a word, don’t worry, it’s still correct. Remember, we’re identifying stems, not words.

Next, we’ll look at the stem function and at how to test a system like this.

Tuesday, August 5, 2008

Stemming, Part 16: More Suffixes

Today, we’ll look at step four of the Porter stemmer. This is a little different than previous and later steps, so we’ll just focus on it today.


Before we outline the step itself, however, we need to define another utility function. This tests whether the stemmer’s word has more than one internal consonant cluster. If it does, it strips off the ending; otherwise, it returns the original stemmer.

(defn chop
  "If there is more than one internal
  consonant cluster in the stem, this chops
  the ending (as identified by the index)."
  [stemmer]
  (if (> (m stemmer) 1)
    (assoc stemmer :word (subword stemmer))
    stemmer))

Step Four

Once chop is defined, the rest of step four pretty much defines itself. Like steps two and three, it is a cond-ends? that tests for a variety of endings and strips them off.

There is one special case: if a word ends in -ion preceded by an -s- or -t-, the -ion is removed, but the -s- or -t- is left. You can see how that is handled about halfway through step-4.
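The test for the preceding letter works because Clojure sets are functions: a set returns the element when it's a member and nil otherwise, so it doubles as a predicate.

```clojure
;; A set as a membership predicate, as used in the -ion clause.
(#{\s \t} \s)  ; => \s  (truthy: \s is in the set)
(#{\s \t} \n)  ; => nil (falsey: the character n is not)
```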

(defn step-4
  "takes off -ant, -ence, etc., in context <c>vcvc<v>."
  [stemmer]
  (cond-ends? st stemmer
              "al" (chop st)
              "ance" (chop st)
              "ence" (chop st)
              "er" (chop st)
              "ic" (chop st)
              "able" (chop st)
              "ible" (chop st)
              "ant" (chop st)
              "ement" (chop st)
              "ment" (chop st)
              "ent" (chop st)
              "ion" (if (#{\s \t} (index-char st))
                      (chop st)
                      st)
              "ou" (chop st)
              "ism" (chop st)
              "ate" (chop st)
              "iti" (chop st)
              "ous" (chop st)
              "ive" (chop st)
              "ize" (chop st)))

That’s it for step four. In the next posting, I’ll outline step five, which is, if anything, more like step one than steps two to four.

Friday, August 1, 2008

Stemming, Part 15: Morphological Suffix

Now on to steps two and three. Here we strip off a bunch of morphological suffixes.

What Is a Morphological Suffix?

A morphological suffix changes a word from one part of speech to another. These can be joined together almost infinitely:

  • sense (verb)
  • sense + -ate = sensate (adjective)
  • sensate + -tion = sensation (noun)
  • sensation + -al = sensational (adjective)
  • sensational + -ly = sensationally (adverb)
  • sensational + -ize = sensationalize (verb)

You get the idea. We could play this all day.

(I should drop in a warning here. The derivations above are purely morphological. I’m not making any statement about how a word developed historically or how it came into the language. There, I feel better. Thanks for putting up with my moment of pedantry.)

There is a set of rules for how morphological suffixes can be combined. You can’t just stick sensation and -ize together to make a verb. Also, different morphological suffixes change the stem’s root in different ways. Sense + -ate (sensate) is very different than sense + -ible (sensible), even though both -ate and -ible turn sense into an adjective.

The Porter stemmer leverages these rules to test for two different sets of endings in two different steps. The two steps are structured almost identically as two large cond-ends? expressions. In each case, they test for an ending, and if it is found, they replace it with another ending using r. The r function only makes the change if the word has an internal consonant cluster inside the stem. If it doesn’t have an internal consonant cluster, the ending is assumed to be part of the word and left alone.

For example, step 1c changes sensationally to sensationalli. Step 2 changes sensationalli to sensational.

On the other hand, the name calli, which also ends in -alli, should not be truncated to cal. Because the stemmer only truncates the ending if the stem has an internal consonant cluster, which calli does not, calli is left the way it is.

Our Functions for Today

Today, we’ll look at the functions for steps 2 and 3. As I said before, they are almost structurally identical, so I’ll show them both and comment on them together.

(defn step-2
  [stemmer]
  (cond-ends? st stemmer
              "ational" (r st stemmer "ate")
              "tional" (r st stemmer "tion")
              "enci" (r st stemmer "ence")
              "anci" (r st stemmer "ance")
              "izer" (r st stemmer "ize")
              "bli" (r st stemmer "ble")
              "alli" (r st stemmer "al")
              "entli" (r st stemmer "ent")
              "eli" (r st stemmer "e")
              "ousli" (r st stemmer "ous")
              "ization" (r st stemmer "ize")
              "ation" (r st stemmer "ate")
              "ator" (r st stemmer "ate")
              "alism" (r st stemmer "al")
              "iveness" (r st stemmer "ive")
              "fulness" (r st stemmer "ful")
              "ousness" (r st stemmer "ous")
              "aliti" (r st stemmer "al")
              "iviti" (r st stemmer "ive")
              "biliti" (r st stemmer "ble")
              "logi" (r st stemmer "log")))

(defn step-3
  "deals with -ic-, -full, -ness, etc., using
  a similar strategy to step-2."
  [stemmer]
  (cond-ends? st stemmer
              "icate" (r st stemmer "ic")
              "ative" (r st stemmer "")
              "alize" (r st stemmer "al")
              "iciti" (r st stemmer "ic")
              "ical" (r st stemmer "ic")
              "ful" (r st stemmer "")
              "ness" (r st stemmer "")))

These are nothing but cond-ends?. Each tests the input stemmer for a series of endings, and on the first ending found, it tests for an internal consonant cluster and changes the ending. If either of those conditions is false, the original stemmer is returned.

Possible Improvements

One obvious improvement would be to make a macro that takes an input stemmer and a sequence of ending/replacement pairs. It would expand into the cond-ends? above. It might look something like:

(replace-ending stemmer
                "icate" "ic"
                "ative" ""
                "alize" "al"
                "iciti" "ic"
                "ful" ""
                "ness" "")

I’ll leave that as an exercise to the reader.
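For the curious, here is one possible shape for that macro. This is only an untested sketch: it assumes cond-ends? keeps the calling convention used above (its first argument is the symbol it binds) and that r is available as in step-2 and step-3.

```clojure
;; An untested sketch of replace-ending. It expands into the same
;; cond-ends? form written by hand in step-2 and step-3 above.
(defmacro replace-ending
  [stemmer & ending-replacement-pairs]
  (let [st (gensym "st")]
    `(cond-ends? ~st ~stemmer
                 ~@(mapcat (fn [[ending replacement]]
                             [ending `(r ~st ~stemmer ~replacement)])
                           (partition 2 ending-replacement-pairs)))))
```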

In the next posting, we’ll look at step 4, which is slightly different from steps 2 and 3.