How I messed up a medium-scale refactoring

Suggie is a back-end Clojure app responsible for maintaining eight collections of stutters. The collections live in Redis and are consumed by two front-end apps. What goes in which collection, and how, is governed by a number of business rules. For example, one kind of new stutter produces two entries in one of the collections: the first being the new stutter and the second being an old one selected by a Lucene search.

The collections and business rules were added one by one. I wasn’t vigilant enough about keeping the code clean as I added them. At a point where I had a little bit of slack, I decided to spend up to two ideal days cleaning up the code (and adding one small new feature). I failed and ended up reverting about 70% of my changes.

What have I learned (or, mostly, relearned)?

Let’s start before I started:

  • It’s clear that I let the code get too messy before reacting. I should have made a smaller effort, earlier, to clean it up.

  • In general, I find that switching out of the “coding register” into the “explaining register” (talking vs. typing) helps me realize I’m going into the weeds. Because we’re a two-programmer shop, with only one of us (not me) competent at the front end, and we’re under time pressure (first big release of product 3, going for series A funding), I worked on Suggie too much without discussing my changes with Colin.

  • Relatedly, pairing would have helped. Unfortunately, Colin and I are of different editor religions - he’s vim, I’m emacs - and that has a surprisingly negative effect on pairing. We need to figure out how to do better.

As I did the refactoring, I’d say I had two major failures.

Execution.

I read over the code carefully and made diagrams with circles and arrows and a paragraph on the back of each one explaining what each one was. That was useful. But what I under-thought was the trajectory of the refactorings: which ones should come first, which next, so as to provide the most “You’re going askew!” information soonest. (Alternately: get the most innocuous and obvious changes out of the way first, so that they wouldn’t distract/tempt me as I was doing the more challenging ones.)

Intent.

I realized there were four design issues with this code.

  1. The terminology was out of date. (Bad names.)

  2. There was the oh-so-common problem that all the communication with Redis had gotten lumped into a single namespace (think “class”). The same code that put stutters into Redis hashes put ordered sequences of references-to-stutters into Redis sorted sets - and also put references-to-stutters into Redis plain sets. The code cried out to be separated into four different namespaces. Alone, that would have been a straightforward refactoring. But…

  3. But there was also the problem that the existing code was inefficient, in that it didn’t make good use of Redis’s pipelining. I want to be clear here: our initial move to Redis was motivated by real, measurable latency problems. And the switch to Redis was successful. But now that we were committed to Redis, I fooled myself in a particular way: “Efficiency’s good, all else being equal. We don’t know that we need pipelining here, but I see a pretty clear path toward just dropping it in during the refactoring that I’m doing anyway. So why not do it along the way?”

    Why not? Because, as it turned out, I’d have gone a lot faster if I’d first solved either problem 1 or 2 and then made the changes required to add pipelining. (That’s what I’m doing now.)

  4. Much of our Redis code is not atomic, which needs to be fixed. I decided I’d also fix that (for this app) at the same time I did everything else. As I write, that seems so obviously stupid that maybe I should find another profession. However, I convinced myself that this new refactoring would fall easily out of the pipeline refactoring (which would fall out of the rearrangement refactoring). In retrospect, I needed to think more carefully about atomicity instead of assuming I understood how it works in Redis. But, again, I assumed I could learn that as I went. (A sketch of what pipelining plus MULTI/EXEC might look like follows this list.)
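To make items 2-4 concrete, here’s a minimal sketch of the direction I mean. It uses the Carmine client, and every name in it is invented for illustration; it is not Suggie’s actual code. Carmine pipelines all the commands issued inside one `wcar` block, and wrapping them in MULTI/EXEC makes the group atomic as well:

    (ns suggie.redis-sketch
      (:require [taoensso.carmine :as car :refer [wcar]]))

    ;; Hypothetical connection map, Carmine-style.
    (def conn {:pool {} :spec {:host "127.0.0.1" :port 6379}})

    (defn add-stutter-pair!
      "Store a new stutter plus its Lucene-chosen companion in one
       pipelined round trip; MULTI/EXEC makes the group atomic too."
      [id stutter old-id]
      (wcar conn
        (car/multi)
        (car/hset "stutters" id (pr-str stutter))                   ; the hash
        (car/zadd "recent-stutters" (System/currentTimeMillis) id)  ; a sorted set
        (car/zadd "recent-stutters" (System/currentTimeMillis) old-id)
        (car/exec)))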

So I mushed up many different things: renaming, moving code to the right place, introducing more pipelining, and keeping an eye out for atomicity. My brain proved too small to keep track of them. I should have sequenced them.

In addition to all that, I noticed some other things.

  • I would have done better to spend an hour a day over many days, rather than devoting full days to the refactoring. Because I have a compulsive personality, I must be forced to take time away from a problem to realize exactly how far down a rathole I’ve gone. (Alternately, I need a pair to rein me in.)

  • I kept all the tests passing, and I kept the system working, but I made a crucial mistake. There was a function called add-personal-X-plus-possible-Y. (The name alone is a clue that something’s gone wrong.) It was 16 lines of if-madness. Instead of modifying it (while keeping the tests passing and keeping the system working), I kept the system working by not changing it. I added a new function that was intended to be a drop-in replacement for it - come the glorious future when everything worked. So there was no connection between “system working” and “tests passing” while I was doing the replacement. The new function could have been completely broken, but the system would keep working, because the new function wasn’t used anywhere outside the tests.

    This seems to me a rookie mistake, a variant of “throw it away and rewrite it”. But somehow I allowed myself to gradually slip into that trap.

  • I suffered a bit from relative inexperience with the full panoply of immutable/functional programming styles. What I’d written was C-style imperative code. Transforming it into object-oriented code would have been straightforward, given my familiarity with various design patterns. Figuring out how to do the equivalent transformation idiomatically in Clojure, given all the constraints I’d placed on myself, took me too long. I only really figured out how to do it after I’d pulled the Eject lever.
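For what it’s worth, here’s the shape of the transformation I finally saw, with invented names standing in for the real ones. Instead of nested `if`s that issue Redis commands as they go, a pure function builds a description of the writes, and a separate, boring executor performs them:

    (defn personal? [stutter] (:personal? stutter))   ; stand-in predicate

    ;; Pure: decide *what* to write. `echo` is the Lucene-selected
    ;; old stutter, or nil if there isn't one.
    (defn planned-writes [stutter echo]
      (cond-> [[:hash-insert :stutters stutter]]
        (personal? stutter) (conj [:zset-insert :personal stutter])
        echo                (conj [:zset-insert :personal echo])))

    ;; Impure: one dumb function turns each tuple into a Redis command.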

Here’s something that’s interesting to me. I spent many years as an independent process consultant. In my spare time, I wrote code. Because that was a part-time thing, I had a lot of leisure to put the code aside and listen to that small, still voice telling me I was going astray.

Things are different now. This real-world job has only strengthened my belief in what I preached as a consultant. In particular, I believe that teams must have the discipline to go slow to get fast. And yet: I keep going too fast. These days, it’s markedly harder for me to attend to the small, still voice.

It’s an interesting problem.

Looking to hire a programmer for a Chicago startup

Part of the reason I haven’t been posting is that Twitter has taken over my mind, but the main reason is that I’ve had a full-time job working for an education startup in Chicago (early-stage, but already with paying customers).

We’re looking for an experienced programmer interested in working in at least two of the following:

1. An ambitious web frontend (JavaScript).
2. A growing number of backend Clojure services.
3. A smaller number of Rails apps that mediate between the two.

Although this isn’t exactly a devops or architecture job, experience with service-oriented architectures (their design, implementation, and tuning) is a plus. Right now, frontend skills are our bottleneck.

This is an agile shop, so we expect people to be dedicated to both keeping the code clean and deploying new features frequently.

“Plays well with others” is very important. So is valuing ease and smoothness of work (as in this essay). Everyone coming in to interview will get a poster:

Work with Ease

We currently have two programmers, a UX person, a product owner, a relationship manager, a salesperson, and a CEO. The product owner is that rare-as-a-unicorn Onsite Customer Who Sits With the Programmers.

Contact me if you’re interested.

Does mocking privates hurt testing as a design tool?

No.

.

.

.

.

.

Oh, OK.

@ctford asks:

@marick Have you blogged anywhere about with-redefs [a Clojure function used for mocking] and how it relates to mocking private fields in OO? I’d be interested in your thoughts.

@marick My concern is whether by using a test mechanism that prod code can’t exploit, am I reducing some of the design benefits of TDD?

——–

In order to design, the designer must find words (abstractions, if you wish) that give her leverage when thinking about the code.

In order for that leverage to carry over to writing and later changing the code, those words have to appear in it. Most importantly for a language like Clojure, verbs have to be reified as functions.

Testing makes design rigorous by forcing it to be concrete along a different dimension than coding (dynamic rather than static). Tests should therefore have executable (mockable) access to reified verbs.

A programmer using a function may need a conceptual understanding of those verbs, but does not need direct access.

Therefore the reified verbs may be marked private - but only if the tests have a way to evade that. (That is, the need to do the right thing with testing trumps the desire to prevent a future programmer from doing the wrong thing.)
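In Clojure, that evasion comes for free: the var special form, and therefore `with-redefs`, ignores privacy. A minimal sketch, with invented names:

    (ns suggie.selection)

    (defn- lucene-match            ; private: a detail callers shouldn't touch...
      "Pick an old stutter to accompany a new one (imagine a Lucene query)."
      [stutter]
      :some-old-stutter)

    (defn additions-for            ; ...they call this reified verb instead
      [stutter]
      [stutter (lucene-match stutter)])

    (ns suggie.selection-test
      (:require [clojure.test :refer [deftest is]]
                [suggie.selection :as sel]))

    (deftest pairs-new-stutter-with-lucene-choice
      ;; with-redefs resolves the name with #', so privacy is no obstacle:
      (with-redefs [sel/lucene-match (constantly :old-stutter)]
        (is (= [:new-stutter :old-stutter]
               (sel/additions-for :new-stutter)))))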

In practice, I’ve found it mildly helpful to add a “testable” access level to public/private. That divides a chunk o’ code into (1) functions of interest to normal users of the code, (2) functions those users don’t need to care about, but that are important for understanding the idea behind the code, and (3) functions that are just a coding convenience.
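Nothing in the language enforces that middle level; metadata is enough to mark it. A hypothetical sketch of the three levels:

    (ns suggie.levels)

    ;; 3. Just a coding convenience:
    (defn- stutter-key [id] (str "stutter:" id))

    ;; 2. Not part of the API, but central to the idea behind the code.
    ;;    Tests reach it through its var: #'suggie.levels/selection-rule.
    (defn ^:testable selection-rule [candidates] (first candidates))

    ;; 1. What normal users of the namespace call:
    (defn entries-for [candidates stutter]
      [(stutter-key (:id stutter)) (selection-rule candidates)])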

Looking for young programmers to tell me about age-defying older programmers

I’ll be giving a keynote at ACCU 2013 on this topic: “Cheating Decline: Acting now to let you program well for a really long time”.

(Any resemblance to my own looming decrepitude is entirely non-coincidental.)

My premise is, roughly, this:

  • We know that age causes cognitive decline. There’s some evidence that it begins well before a typical person’s working life is over.

  • Programming, like mathematics, seems to be a field where that cognitive decline hits hard and relatively early. (There are all sorts of caveats around that statement, but let’s leave them for the talk.) Old programmers are thought to be unable to keep up with young programmers, to be wedded to old solutions to new problems, to be [fill in your stereotype here].

  • Nevertheless, some old programmers notably defy this trend. Some are superstars who (unexpectedly) didn’t burn out young. Some weren’t superstars, just good, solid, eminently employable programmers when they were young, who (unexpectedly) turned out to still be good, solid, eminently employable programmers today (once they get past the prejudice).

  • What’s special about those who smash the stereotype vs. those who reinforce it?

To answer the last question, I want to reach out to self-identified young programmers. If you believe you know someone who’s a stereotype smasher, I’d like to:

  1. … interview you about what’s special about that older programmer (with attention focused on behaviors and habits rather than innate qualities).

  2. … interview the older programmer and invite him or her to talk about the same topic.

If you are willing to participate, mail me.

How to install the Postgres pg gem on OSX Mountain Lion 10.8.2 (Ruby)

I have Mountain Lion on my new machine. (What a horrible release.) I innocently decided to upgrade my Postgres installation to 9.2.2. I had a lot of trouble getting the Ruby pg gem to build. I found no single source that solved the problem, so I thought I’d post the solution.

The symptom is this spewage:

ERROR:  Error installing pg:
	ERROR: Failed to build gem native extension.

        /Users/marick/.rvm/rubies/ruby-1.9.2-p320/bin/ruby extconf.rb
checking for pg_config... yes
Using config values from /usr/local/bin/pg_config
checking for libpq-fe.h... *** extconf.rb failed ***

Note: if the “checking for pg_config” line fails, you have a different problem. Either you need to set PGDATA correctly, or make sure that the `pg_config` command is in your $PATH. A number of pages out there on the web explain what’s going on.

  1. If you look at the `mkmf.log` file, you’ll see that (for some reason) the installer is trying to use `/usr/bin/gcc-4.2`. That doesn’t exist, so:

         sudo ln -s /Developer/usr/bin/gcc /usr/bin/gcc-4.2
    

    Thanks to Stackoverflow user dfrankow for that bit of the solution.

  2. That still fails because the program you’re compiling innocently tries to include `<stdio.h>` (etc.). Unlike Unix systems since the beginning of time, there is no `stdio.h` in `/usr/include`. To populate `/usr/include`, you have to start Xcode (version 4.X), go to Preferences, pick the “Downloads” tab, and install the Command Line Tools.

    Thanks to Tim Burks on Twitter for this piece of the puzzle.

Three proposed workshops in Europe

I’ll definitely be in Europe between 9 April (Bristol) and 20 April (Kiev), and likely either before or after those dates as well. I have some ideas for workshops that we might hold while I’m there, provided we can scare up someone to do the local arrangements (venue, mainly). I can work on finding participants, though help would be greatly appreciated.

Here are the ideas:

“Shell and Guts” combinations of object-oriented and functional programming

There’s been much talk about a strategy of having functional cores, nuggets, or guts embedded within an object-oriented shell. (There’s a picture of the idea in my recent book.)

At SCNA, Gary Bernhardt described a different-but-similar approach. (Very neat - incorporating actor-ish communication between shells.)

This workshop would be about exploring the practicalities of such an approach. (Credit to Angela Harms, who proposed a workshop like this to me and Mike Feathers.)
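As a seed for discussion, here’s about the smallest Clojure rendering of the idea I can think of (my names, nobody’s canon): the pure “guts” makes every decision, and a thin shell owns the state and the side effects:

    ;; Guts: pure, and therefore trivially testable.
    (defn plan-deposit [balance amount]
      (if (neg? amount)
        {:error :negative-deposit}
        {:write (+ balance amount)}))

    ;; Shell: all the side effects live here, and nowhere else.
    (defn deposit! [store account amount]
      (let [{:keys [error write]} (plan-deposit (get @store account 0) amount)]
        (if error
          error
          (do (swap! store assoc account write)
              write))))

    (def accounts (atom {}))
    (deposit! accounts :alice 100)   ;=> 100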

Next steps in unit testing tools

This would bring together tool implementors, power users, and iconoclastic outsiders to make and critique proposals for moving beyond the state of the practice in unit testing tools. Bold proposals might include using QuickCheck-style generative/random-ish testing instead of (or in addition to) conventional TDD, or claiming that the next step for TDD is to incorporate or replace the sort of bottom-up tinkering in the interpreter that’s common in (for example) the Lisp tradition.
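To give one of those proposals some flesh, here’s the canonical sort of generative test, written against clojure.test.check (any QuickCheck descendant would read much the same). We state only the invariant; the tool invents, runs, and shrinks the examples:

    (require '[clojure.test.check :as tc]
             '[clojure.test.check.generators :as gen]
             '[clojure.test.check.properties :as prop])

    ;; Property: sorting is idempotent over any vector of ints.
    (def sort-idempotent
      (prop/for-all [v (gen/vector gen/int)]
        (= (sort v) (sort (sort v)))))

    (tc/quick-check 100 sort-idempotent)
    ;; => {:result true, :num-tests 100, ...}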

Community building for language and tool implementors

Although I released my first open-source tool in 1992, I’ve always been lousy at building communities around those tools. Other people have succeeded. This workshop would be a collection of people who’ve succeeded and who’ve failed, sharing and refining tips, tricks, viewpoints, and styles.


Format

Of the various meeting formats I’ve experienced, one of the most productive has been the LAWST format that I learned from Cem Kaner and Brian Lawrence. Some of the LAWST workshops produced concise, group-approved lists of statements that had substantial influence going forward. These three workshops seem like the kind that should produce such statements.

The LAWST format works best if it has a facilitator who has no stake in the topic at hand. (Such facilitators might well follow the style of How to Make Meetings Work or Facilitator’s Guide to Participatory Decision-Making.) I’d hope the local arrangements person could help find one.


If you’re interested in making one of these workshops happen, please contact me.

Europe trip?

I’ve been invited to speak at the ACCU conference 9-13 April, 2013. Because I hate long plane trips, I usually only want to go to Europe if I can get something like a total of three weeks of paid work there. That can be a single engagement at one company, typically doing hands-on programming coaching or just plain programming, or it can be several smaller gigs.

Given Functional Programming for the Object-Oriented Programmer, I can also do Clojure or functional programming coaching and training. I have a two-day Clojure course on the topic, though it’ll need substantial revision to match the book.

If you think you have possibilities for me, contact me. Thanks!

SCNA talk blurb

I’ll be speaking at Software Craftsmanship North America, November 9th or 10th in Chicago. Here is the abstract of my talk:

Primum: logic programming computes values for variables based on relationships between known facts. In the main, introductions to it are either based on logic puzzles (cannibals and boats!) with an at best unclear relationship to the problems we write programs to solve, or on laboriously reimplementing things (arithmetic!) that we can already do perfectly well, thank you very much.

Secundum: generating complex test data is a hard problem, in part because constraints (relationships) amongst bits of the data have to be obeyed. The sadly common result is fragile tests that know too much about the details of their data.

Ergo: I will explain logic programming by showing how it can be used to generate test data from minimal descriptions of what’s needed.

Logic programming and test data generation

I wanted to learn Clojure’s implementation of logic programming (core.logic). So I wrote up a proof-of-concept of using it for test data generation. I think it turned out rather well, so I took the time to document it. There’s a github repository that includes wiki documentation and tests that are supposed to be explanatory.
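To give just the flavor (this sketch uses bare core.logic, not the repository’s macros): describe the relationships the data must obey, and let the logic engine enumerate conforming examples:

    (require '[clojure.core.logic :as l])

    ;; Five [start end] pairs obeying the rule "start precedes end":
    (l/run 5 [q]
      (l/fresh [start end]
        (l/membero start [1 2 3])
        (l/membero end   [2 3 4])
        (l/project [start end]         ; drop into ordinary Clojure to compare
          (l/== true (< start end)))
        (l/== q [start end])))
    ;; => ([1 2] [1 3] [2 3] [1 4] [2 4])   (order may vary)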

It makes involved use of macros (in the sense of having macros that write macros). Those interested in macros might find what I’ve done interesting. That’s not documented, though.

Logic programming will be my talk topic at SCNA.

Functional Programming for the Object-Oriented Programmer

I’m somewhere between 1/3 and 1/2 of the way through a new book, Functional Programming for the Object-Oriented Programmer, based on the tutorial I taught a couple of times last year.

You can find a description and sample chapters at Leanpub.