Archive for the 'programming' Category

Possible SCNA topic

I’ve been thinking about what talk I might give at Software Craftsmanship North America this year. I have a reputation for “Big Think” talks that pull together ideas from not-software fields and try to apply them to software. I’ve been trying to cut back on that, as part of my shift away from consulting toward contract programming. Also, I’ve heard rumblings that SCNA is too aspirational, not actionable enough.

However, old habits die hard, and I’m tempted to give a talk like the following. What do you think? Should I instead talk about functional programming in Ruby? Since this antique WordPress installation makes commenting annoying, feel free to send me mail or reply on Twitter (@marick).

Persuasion, Scientists, and Software People

Science, as practiced, is a craft. Even more, it’s a high-stakes craft: scientists have to make big bets by joining in on what Imre Lakatos called “research programmes”. (A research programme is a Big Theory like Newtonian physics, quantum mechanics, Darwinian evolution, Freudian psychology, the viral theory of cancer, and so on.) It’s interesting to learn how scientists are persuaded to make those big bets.

It may also be useful. There are similarities between scientists and software people that go beyond how software people tend to like science and to be scientifically and mathematically inclined. Scientists build theories that let them build more theories. They build theories that let them build experiments that help them build more experiments. Software people build programs that help them build programs—and help them build theories about programming. So ways scientists are persuaded might also be ways software people can be persuaded.

If so, knowing how scientists persuade each other might help you persuade people around you to risk joining in on your craftsmanship “programming programme”.

In this talk, I’ll cover what the students of science Imre Lakatos, Joan Fujimura, and Ian Hacking say about how science progresses. I’ll talk about viruses, JUnit, comets, Cucumber, Mercury’s orbit, Scrum, and positrons.

My story about cyclomatic complexity

In the 1980s, I was interested in metrics, maybe even a fan, and cyclomatic complexity was trendy. I was never entirely comfortable with it: all the justification in terms of “basis paths” seemed hand-wavily disconnected from the reality of code, and the literature seemed to go out of its way to obscure the fact that the cyclomatic complexity of a routine is just the number of branches + 1 (including the implicit branches in boolean expressions like a && b). (I don’t remember if there are any restrictions on that identity.)
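To illustrate with a made-up Ruby method (my example, not one from the metrics literature): count the branch points, add one.

# Three branch points (two `if`s plus the implicit branch in `a && b`),
# so the cyclomatic complexity is 3 + 1 = 4.
def classify(a, b, n)
  return :both if a && b   # one branch for the `if`, one for the `&&`
  if n > 0                 # third branch
    :positive
  else
    :other
  end
end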

Then I read a paper in CACM on the cyclomatic complexity of Unix commands, and it was quite a revelation. You see, I worked for a Unix porting shop. We were regularly presented with broken commands whose code we didn’t know well, commands we had to dive into and fix. Two of the commands whose code was ranked least complex by the paper were notorious among us for being hard to understand. Otherwise fearless programmers wept like children when told they’d have to work on `/bin/sh` and `adb`.

Why were those two commands so hard to understand, even if not complex? The one I remember best was `/bin/sh`. This was an older version, written by Steve Bourne, I believe. At that time (I assume he’s gotten over it), he was enamored with the idea of turning C into Algol using the C preprocessor.

That is, his code included a header file that looked like this:


#define BEGIN {
#define END }
#define AND &&
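/* With these, "if (a AND b) BEGIN ... END" compiles as ordinary C. */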

That meant the parts of C that are easy to understand were defined into another language. But you can’t do that with the parts of C that are hard to understand—“Is this a pointer to a function returning an array or a pointer to an array of functions?”—so all that was left (mostly) alone. The end result was like a combination of German and Sanskrit, which turns out to be hard to read even if you know both languages.

(As I recall, the “Algol” part wasn’t just a matter of lexical substitution: it also meant using Algol idioms instead of familiar C ones. I don’t remember the details, but it might have been things like not using C’s null-terminated strings and malloc/free, but instead inventing new libraries to learn.)

None of this affected the branchiness of the `/bin/sh` code, so it didn’t affect the cyclomatic complexity. It just made the job of understanding the code much more, um, complex.

I don’t remember as much about `adb`. I think it was partly the complexity of the domain (it was an assembly-level debugger), partly that it was heavily data-driven (so that the complexity was in the setup and interconnection of data, not in the code that then processed it).

The point of all of this is that merely using a word from the domain of human psychology allows you to import, under cover of night, tons and tons of assumptions, making them harder to notice. By saying “complexity” instead of “branchiness”, the purveyors of one simplistic measure were able to make it seem profound. They could hide, perhaps even from themselves, the fact that they were, at best, feeling one part of the elephant.

The same applies, I suspect, to the common statement that “functional programs are easier to reason about.” I happen to believe that’s a heck of a lot closer to being true than that complexity is all about branchiness, but still: it’s important to realize that the motivation for that broad statement has historically been the much narrower claim that functional languages make it easier to prove that programs satisfy their specifications.

When most of us slap down our coffee cup next to the keyboard and open up some code in the editor, our conscious goal is not to make proofs. It’s to accomplish other ends. Therefore, a straight extrapolation from “easier to write proofs about” to “easier to reason about” is a big leap. It smuggles in an assumption that “to reason” is one sort of thing, and that that one sort of thing is formal logical deduction. That’s a really strong and dubious claim about the nature of thought, one that hasn’t led to overwhelming practical success when applied in, oh, artificial intelligence or logical positivism.

As usual, we ought to leave the grand claims about “the way humans are” or “the way that it is best to live/work” to psychologists and preachers. Amongst ourselves, perhaps we should just say things like “I’ve been doing this one kind of fairly specific thing recently, and I’ve been surprised to find that X has been really helpful to me. Maybe it will help you too.”

Rubactive: functional reactive programming in Ruby

I was trying to figure out what functional reactive programming (FRP) is. I found the descriptions on the web too abstract(*) and the implementations to be in languages that weren’t easy for me to use. I could have been more patient, but I’ve always found rewriting (documentation and implementation) a good way to understand, so I (1) implemented my own (extremely trivial) library in Ruby and (2) changed the names to ones that made sense to me (since I found the terms commonly used to be more opaque tokens than ones that pointed me toward meaning).

And—why not?—I put the code up on GitHub. I also wrote the documentation/tutorial I wish that I’d found early on.
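To give the flavor of the sort of thing I mean (a minimal sketch with invented names, not Rubactive’s actual interface): a value that, when changed, pushes the change through values derived from it.

# A minimal FRP-flavored sketch. Invented names; not Rubactive's real API.
class ChangingValue
  attr_reader :value

  def initialize(value)
    @value = value
    @followers = []
  end

  # Make a new value that tracks this one through `transform`.
  def follow(&transform)
    follower = ChangingValue.new(transform.call(@value))
    @followers << [follower, transform]
    follower
  end

  def change_to(new_value)
    @value = new_value
    @followers.each { |follower, transform| follower.change_to(transform.call(new_value)) }
  end
end

celsius = ChangingValue.new(0)
fahrenheit = celsius.follow { |c| c * 9.0 / 5 + 32 }
celsius.change_to(100)
fahrenheit.value   # => 212.0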

I should point out that it’s quite possible I never really did understand FRP, making my tutorial a Bad Thing.

(*) I don’t mean to criticize those abstract descriptions. They were written by people in particular interpretive communities for other members. I just happen not to be one of those members.

Refactoring code retreat - help needed

Based on (mostly Twitter) response to my note asking for information on composed refactorings, I’ve concluded there’s a gap in the knowledge-market, and I’ve decided to see if I can fill it.

The topic is medium-scale refactorings, ones composed from the kind of Opdyke/Fowler-style “atomic” refactorings that tools can implement. Such medium-scale refactorings can be done in, say, an hour to a day in response to needs that I think of as vaguely “small-scale architectural” (things like the need for layers that cross-cut many vertical slices, or separation of responsibilities into multiple cooperating classes, or the desire to segregate behavior into a pluggable state machine pattern, etc).

As with small-scale refactorings, the goal is to make the change safe in the sense of having a set of tests that always pass. It’s a lower risk and lower disruption (but perhaps slower) approach than discarding and rewriting.

Two aspects particularly interest me:

  • How do you get good at imagining the “path” from where your code is now to where you want it to be? (A good path wouldn’t leave you stuck halfway with the only way forward being a big, risky change to a bunch of code. It also is one that holds open the possibility of discovering an even better, unanticipated path halfway through.)

  • People who prefer rewriting to refactoring sometimes say they fear that refactoring gets them “stuck at a local maximum”. I can imagine that: I see what my code is like now, I can see how it would be better, but I can’t see a refactoring path from here to there. But, because I’m hooked on the feeling of safety refactoring gives me, I don’t improve the code.

    So it’s worth exploring how often a really good refactorer (not me - yet!) will be unable to see any path. What characterizes such hard cases?

    And there are related questions: once you can really do changes well both ways (refactoring and rewriting), what are the tradeoffs, and how do you decide which to do?

The first way I want to address the topic is via a refactoring code retreat, patterned roughly after Corey Haines’ code retreats. There have to be differences, though. The point of a code retreat is that you write the code from scratch and throw it away when you’re done. The point of a refactoring code retreat has to be that you start from a code base and transform it. That means someone — that is, me — has to provide the starting code. I’m beginning to do that here. There’s currently one Ruby exercise, complete with instructions.

How can you help?

  • Try out the split-into-two example. (You can get a tarball or zipball here.) Join the mailing list and post your reaction. (Too simple? Too unclear? Needs a dash of X to make it more realistic? Etc.) [I’ll also take pull requests.]

  • Give me an example of a medium-scale refactoring you’ve done yourself. A written description at the level of detail of the sections labeled “The app” and “The story” in the split-into-two example would be great. So would a pointer to two commits on GitHub (or equivalent) and a brief note like “Look at how classes A, B, C changed into A, B, D, and E.”

  • I’m thinking of having all the examples in Ruby, Java, and JavaScript. Help translating them would be cool. Examples in other languages might also be fun.

  • I’m concentrating on traditional object-oriented refactoring, but I’m also interested in how refactoring applies in functional languages like Clojure. Got examples?

  • I’ll need a local host for a first trial workshop. Chicago area preferred.

The aim of architecture

A quote from Wilfrid Sellars:

The aim of philosophy, abstractly formulated, is to understand how things, in the broadest possible sense of the term, hang together, in the broadest possible sense of the term. […] To achieve success in philosophy would be, to use a contemporary turn of phrase, to ‘know one’s way around.’ [Two commas added for clarity]

In a biographical article, Willem deVries expands on that:

Thus, philosophy [to Sellars] is a reflectively conducted higher-order inquiry that is continuous with but distinguishable from any of the special disciplines, and the understanding it aims at must have practical force, guiding our activities, both theoretical and practical. [Emphasis mine]

That seems to me not horribly askew from what I’m starting to understand as the aim of software architecture. I’d say something like this:

The aim of architecture is to describe how code, in a broad sense of the term, hangs (or will hang) together, in a broad sense of the term. The practical force of architecture is that it guides a programming team’s activities so as to make growing the code easier.

This definition discards the idea of architecture as some ethereal understanding that stands separate from, and above, the code. Architecture isn’t true, it’s useful, specifically for actions like moving around (in a code base) and putting new code somewhere.

I also like the definition because it lets me bring in Richard Rorty’s idea that conceptual change is not effected by arguing from first principles but by telling stories and appealing to imagination. So architecture is not just “ankle bone connected to the leg bone”–it’s about allowing programmers to visualize making a change.

Looking for information about composed refactorings

Even after all these years, I take too-big steps while coding. In less finicky languages (or finicky languages with good tool support), I can get away with it, but I can’t do that in C++ (a language I’ve been working in for the past two weeks). So I’ve had to think more explicitly about taking baby steps. And that’s made me realize that there must be some refactoring literature out there that I’ve overlooked or forgotten. That literature would talk about composing small-scale refactorings into medium-scale changes. The only example I can think of is Joshua Kerievsky’s Refactoring to Patterns. I like that book a lot, but it’s about certain types of medium-scale refactorings (those that result in classic design patterns), and there are other, more mundane sorts.

Let me give an example. This past Friday, I was pairing with Edward Monical-Vuylsteke on what amounts to a desktop GUI app. Call the class in question Controller, as it’s roughly a Controller in the Model-View-Controller pattern. Part of what Controller did was observe what I’ll call a ValueTweaker UI widget. When the user asked for a change to the value (that is, tweaked some bit of UI), the Controller would observe that, cogitate about it a bit, and then perhaps tell an object deeper in the system to make the corresponding change in some hardware somewhere.

That was the state of things at the end of Story A. Story B asked us to add a second ValueTweaker to let the user change a different hardware value (of the same type as the first, but reachable via a wildly different low-level interface). The change from “one” to “two” was a pretty strong hint that the Controller should be split into three objects of two classes:

  • A ValueController that would handle only the task of mediating between a ValueTweaker on the UI and the hardware value described in story A.

  • A second ValueController that would do the same job for story B’s new value. The difference between the two ValueControllers wouldn’t be in their code but in which lower-level objects they’d be connected to.

  • A simplified Controller that would continue to do whatever else the original Controller had done.

The question is: how to split one class into two? The way we did it was to follow these steps (if I remember right):

  1. Create an empty ValueController class with an empty ValueController test suite. Have Controller be its subclass. All the Controller tests still pass.

  2. One by one, move the Controller tests that are about controlling values up into the ValueController test suite. Move the code to make them pass.

  3. When that’s done, we still have a Controller object that does everything it used to do. We also have an end-to-end test that checks that a tweak of the UI propagates down into the system to make a change and also propagates back up to the UI to show that the change happened.

  4. Change the end-to-end test so that it no longer refers to Controller but only to ValueController.

  5. Change the object that wires up this whole subsystem so that it (1) creates both a Controller and ValueController object, (2) connects the ValueController to the ValueTweaker and to the appropriate object down toward the hardware, and (3) continues to connect the Controller to the objects that don’t have anything to do with changing the tweakable value. Confirm (manually) that both tweaking and the remaining Controller end-to-end behaviors work.

  6. Since it no longer uses its superclass’s behavior, change Controller so that it’s no longer a subclass of ValueController.

  7. Change the wire-up-the-whole-subsystem object so that it also makes story B’s vertical slice (new ValueTweaker, new ValueController, new connection to the hardware). Confirm manually.

I consider that a medium-scale refactoring. It’s not something that even a smart IDE can do for you in one safe step, but it’s still something that (even in C++!) can easily be finished in less than an afternoon.
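Sketched in Ruby for brevity (the real code was C++, and the method name is invented), the temporary inheritance of steps 1, 2, and 6 has this shape:

# During steps 1 and 2: Controller temporarily inherits from the new class,
# so the old tests keep passing while value-handling code migrates upward.
class ValueController
  def tweak_requested(new_value)   # hypothetical method, moved up from Controller
    # cogitate, then tell the hardware-facing object about the change
  end
end

class Controller < ValueController
  # everything Controller still does that isn't about tweakable values
end

# After step 6, Controller stands alone again (no superclass):
#   class Controller ... end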

So: where are such medium-scale refactorings documented?

Some thoughts on classes after 18 months of Clojure

I had some thoughts about classes that wouldn’t fit into a talk I’m building about functional programming in Ruby, so I recorded them as a video.

Topics:

  • Using hashes instead of classes.

  • Classes as a documentation tool—specifically, as a way of making functions easy to find.

  • Preferring module inclusion to subclassing (which is akin to preferring adjectives to nouns as a way of organizing the documentation of verbs). (Vaguely similar to type classes in Haskell.)

  • Object dot notation as a more readable way of writing function composition. (Similar to the motivation for the -> macro in Clojure or type-directed name resolution in Haskell.)
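As a trivial illustration of that last point (my example, not from the talk): the same pipeline written as nested calls and as a chain.

# The same computation twice: nested composition reads inside-out;
# dot notation reads in execution order.
strip  = ->(s) { s.strip }
upcase = ->(s) { s.upcase }

upcase.call(strip.call("  hello  "))   # => "HELLO", innermost first
"  hello  ".strip.upcase               # => "HELLO", left to right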

On mutable state

I’m working on a talk called “Functional Programming for Ruby Programmers”. While doing so, I’ve somewhat changed my opinion about mutable state. Here’s my argument, for your commentary. Being new, it’s probably at best half-baked.

  1. Back in the early days of AIDS (the disease), I remember a blood-supply advocate saying “You’ve slept with everyone that everyone you’ve slept with ever slept with.”

    The nice thing about immutable state is that it stays virginal and knowable. It is what it was when it was created. With mutable state, it’s more like “Your code might be infected by any code that ever touched your data before (or after!) you first hooked up with it.”

    However, I don’t personally find that a huge problem in programming, debugging, or understanding code. There are other problems I’d rather see fixed first.

  2. It’s often said that “code with immutable state is easier to reason about”. I realized a long time ago that there’s some sleight-of-common-language going on there, like the way people who wanted code to be less branchy got rhetorical leverage by renaming branchiness “complexity”.

    In the claim, “reason about” is (I believe) being taken to be synonymous with “prove theorems about”. Pace a whole long trend in artificial intelligence, I don’t believe theorem proving is the basis for, nor a good analogy to, reasoning. Thus I took the “reason about” statement to be a sort of solipsistic wankery from people with roots in the theorem-proving community, not an argument that should sway me, Pragmatic Man!, who left the world of proofs-of-design-correctness around 1984.

    What I’d forgotten is that optimization is theorem proving. One can only optimize if there’s a proof that the optimized code computes the same result as the original. If you consider the desire to implement, say, lazy sequences efficiently, you see how the guarantees that immutability gives are important. Ditto for automatically spreading work to multiple cores. (A small sketch of that idea follows this list.)

  3. So:

    My previous attitude was pretty much “Yeah, I have a preference for avoiding mutable state. Being immutable isn’t a huge win, but as long as I have people providing me with data structures like zippers, it’s not really any harder than fooling with mutable state. Still, I don’t see why I need an immutable language. If I don’t want to mutate my hashmaps, I just won’t mutate my hashmaps.”

    My new attitude is: “Oh! I see why I need an immutable language.”
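Here’s the promised sketch (my own example): hoisting work out of a loop is a mini-theorem, “this value is the same on every iteration”, and immutability is what makes the theorem easy.

# Hypothetical example. Hoisting `rate` out of the loop is provably safe
# only because CONFIG is frozen: nothing (not even another core) can
# change it between iterations.
CONFIG = {rate: 0.07}.freeze

def total(prices)
  rate = CONFIG[:rate]                # hoisted once, provably constant
  prices.sum { |p| p * (1 + rate) }
end

total([10, 20])   # => 32.1, give or take float noise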

I still don’t make a huge deal about immutability. Its benefit is greater than its cost, sure, but it’s not the Thing That Will Save Us.

Here are three Ruby functions…

Here are three Ruby functions. Each solves this problem: “You are given a starting and ending date and an increment in days. Produce all incremental dates that don’t include the starting date but may include the ending date. More formally: produce a list of all the dates such that for some n >= 1, date = starting_date + (increment * n) && date <= ending_date.”

Solution 1:
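(The code blocks for the three solutions are reconstructions: sketches consistent with the discussion below, not necessarily the original code. Names and details are guesses.)

# Reconstructed sketch: an imperative loop.
def dates_after(exclusive_start, inclusive_end, increment)
  current = exclusive_start
  result = [current]                 # the excluded date sits at the front...
  while current + increment <= inclusive_end
    current += increment             # the loop body updates an intermediate value
    result << current                # ...and creates the new element
  end
  result[1..-1]                      # ...and the front element is thrown away here
end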

Solution 2:
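(Again a reconstruction, shaped to match the discussion below.)

# Reconstructed sketch: a flow of values through functions, but every date
# in the range is created and most are immediately thrown away.
def dates_after(exclusive_start, inclusive_end, increment)
  (exclusive_start..inclusive_end).
    select { |date| ((date - exclusive_start) % increment).zero? }.
    reject { |date| date == exclusive_start }
end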

Solution 3 depends on a lazily function that produces an unbounded list from a starting element and a next-element function. Here’s a use of lazily:
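(Reconstructed example; lazily’s actual interface may have differed.)

integers = lazily(0) { |n| n + 1 }   # an unbounded sequence: 0, 1, 2, 3, ...
integers.take(3)                     # => [0, 1, 2]   (method name is a guess)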

As a lazy sequence, integers (1) doesn’t do the work of calculating the ith element unless a client of integers asks for it, and (2) doesn’t waste effort calculating any intermediate values more than once.

Solution 3:
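(Reconstruction, as above.)

# Reconstructed sketch: generate dates lazily, drop the excluded start,
# and keep only the dates up to the inclusive end.
def dates_after(exclusive_start, inclusive_end, increment)
  lazily(exclusive_start) { |date| date + increment }.
    drop(1).
    take_while { |date| date <= inclusive_end }
end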

The third solution seems “intuitively” better to me, but I’m having difficulty explaining why.

The first solution fails on three aesthetic grounds:

  • It lacks decomposability. There’s no piece that can be ripped out and used in isolation. (For example, the body of the loop both creates a new element and updates an intermediate value.)

  • It lacks flow. It’s pleasing when you can view a computation as flowing a data structure through a series of functions, each of which changes its “shape” to convert a lump of coal into a diamond.

  • It has wasted motion: it puts an element at the front of the array, then throws it away. (Note: you can eliminate that by having current start out as exclusive_start + increment, but that code duplicates the later += increment. Arguably, that duplicated increment-the-date action is wasted (programmer) motion, in the sense that the same action is written twice. Or: don’t repeat yourself; eliminate duplication.)

The second solution has flow of values through functions, but it wastes a lot of motion. A bunch of dates are created, only to be thrown away in the next step of the computation. Also, in some way I cannot clearly express, it seems wrong to stick the inclusive_end between the exclusive_start and the increment, given that the latter two are what was originally presented to the user and the inclusive_end is a separate user choice. (Therefore shouldn’t the exclusive_start and increment be more visually bound together than they are in this solution?)

The third solution …

  • … is decomposable: the sequence of dates is distinct from the decision about which subset to use. (You could, for example, pass in the whole lazy sequence instead of an exclusive_start/increment pair, something that couldn’t be done with the other solutions.)

  • … eliminates wasted work, in that only the dates that are required are generated. (Well, it does store away a first date — excluded_start — that is then dropped. But it doesn’t create an excess new date.)

  • … has the same feel of a data structure flowing through functions that #2 has.

So: #3 seems best to me, but the advantages over the other two seem unconvincing (especially given that programmers of my generation are likely to see closure-evaluation-requiring-heap-allocation-because-of-who-knows-what-bound-variables as scarily expensive).

Have you better arguments? Can you refute my arguments?

I’m trying to show the virtues of a lazy functional style. Perhaps this is a bad example? [It’s a real one, though, where I really do prefer the third solution.]

Using functional style in a Ruby webapp

Motivation

Consider a Ruby backend that communicates with its frontend via JSON. It sends (and perhaps receives) strings like this:
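(A made-up stand-in for the original example payload:)

{"animals": [{"id": 1, "name": "betsy", "species": "cow"},
             {"id": 2, "name": "jake", "species": "horse"}]}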

Let’s suppose it also communicates with a relational database. A simple translation of query results into Ruby looks like this:
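(Another stand-in; the real code differed, but the shape is what matters: rows come back as plain hashes.)

require 'sequel'

DB = Sequel.connect('postgres://localhost/example')   # made-up connection string
DB[:animals].where(species: 'cow').all
# => [{:id=>1, :name=>"betsy", :species=>"cow"}, ...]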

(I’m using the Sequel gem to talk to Postgres.)

On the face of it, it seems odd for our code to receive dumb hashes and arrays, laboriously turn them into model objects with rich behavior, fling some messages at them to transform their state, and then convert the resulting object graph back into dumb hashes and arrays. There are strong historical reasons for that choice—see Fowler’s Patterns of Enterprise Application Architecture—but I’m starting to wonder if it’s as clear a default choice as it used to be. Perhaps a functional approach could work well:

  • Functional programs focus on the flow of data through code, rather than on objects with changing state. The former seems more of a match for a typical webapp.

  • It’s common in functional languages to lean toward a few core datatypes—like hashes and arrays—that are operated on by a wealth of functions. We could skip the conversion step into objects. Rather than having to deal with the leaky abstraction of an object-relational mapping layer, we’d embrace the nature of our data.
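For instance (an invented fragment), the hash-shaped data above can be queried directly, with no model classes in between:

# Plain hashes plus Enumerable go a long way.
animals = [{id: 1, name: "betsy", species: "cow"},
           {id: 2, name: "jake",  species: "horse"}]

animals.select { |a| a[:species] == "cow" }.
        map    { |a| a[:name] }   # => ["betsy"]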

Seems plausible, I’ve been thinking. However, I’ve never been wildly good at understanding the problems of an approach just by thinking about it. It’s more efficient for me to learn by doing. So I’ve decided to strangle an application whose communication with its database is, um, labored.

I’m going to concentrate on two things:

  • Structuring the code. More than a year of work on Midje has left me still unhappy about the organization of its code, despite my using Kevin Lawrence’s guideline: if you have trouble finding a piece of code, move it to where you first looked. I have some hope that Ruby’s structuring tools (classes, modules, include, etc.) will be useful.

  • Dependencies. As you’ll see, I’ll be writing code with a lot of temporal coupling. Is that and other kinds of coupling dooming me to a deeply intertwingled mess that I can’t change safely or quickly?

This blog post is about where I stand so far, after adding just one new feature.