I wanted to learn Clojure’s implementation of logic programming (core.logic). So I wrote up a proof-of-concept of using it for test data generation. I think it turned out rather well, so I took the time to document it. There’s a GitHub repository that includes wiki documentation and tests that are supposed to be explanatory.
It makes involved use of macros (in the sense of having macros that write macros). Those interested in macro-writing might find what I’ve done interesting, though that part isn’t documented.
The aim of philosophy, abstractly formulated, is to understand how things, in the broadest possible sense of the term, hang together, in the broadest possible sense of the term. […] To achieve success in philosophy would be, to use a contemporary turn of phrase, to ‘know one’s way around.’ [Two commas added for clarity]
Thus, philosophy [to Sellars] is a reflectively conducted higher-order inquiry that is continuous with but distinguishable from any of the special disciplines, and the understanding it aims at must have practical force, guiding our activities, both theoretical and practical. [Emphasis mine]
That seems to me not horribly askew from what I’m starting to understand as the aim of software architecture. I’d say something like this:
The aim of architecture is to describe how code, in a broad sense of the term, hangs (or will hang) together, in a broad sense of the term. The practical force of architecture is that it guides a programming team’s activities so as to make growing the code easier.
This definition discards the idea of architecture as some ethereal understanding that stands separate from, and above, the code. Architecture isn’t true, it’s useful, specifically for actions like moving around (in a code base) and putting new code somewhere.
I also like the definition because it lets me bring in Richard Rorty’s idea that conceptual change is not effected by arguing from first principles but by telling stories and appealing to imagination. So architecture is not just “ankle bone connected to the leg bone”; it’s about allowing programmers to visualize making a change.
The client I was at for the last three weeks has enough teams working on a single code base that they have an architect role (indeed, a distinct architecture team). That’s made me wonder what I would do were I on that team, given my awfully iterative, refactor-your-architecture-into-being biases.
About those biases
The way I was taught to do architecture was like this:
You think and think and think, and design an infrastructure that will support the needed features. Then you add the features, one by one. The first part is the hard part, which makes the second part easy.
That approach still has enormous appeal for me, but it’s really hard to pull it off. The design is so often wrong, and the features don’t require what you expected, and the infrastructure needs to be changed, but you had no practice changing it, so it’s hard to change it right, and the lesser programmers who are supposed to be doing the features are standing around tapping their feet, or (worse!) they’re not standing around, they’re implementing the features anyway and doing all sorts of hackery to make up for the infrastructure failings, and you were supposed to be this godlike disembodied intelligence that Gets It Right, but now everyone scorns you.
The alternative I prefer begins with a single vertical slice: one thin, end-to-end path through the system that delivers a real feature. Now you implement the next slice.
Amazingly, the second slice does some things quite similar to what the first one did. Either during the slice’s development or just after, factor that commonality out into shared code.
Repeat the process with each new slice, constantly finding and factoring common behavior.
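In miniature, that factoring step might look like this. (A sketch; the domain and all the names are invented for illustration.)

```ruby
# Two vertical slices each grew their own copy of
# "fetch a record, check permissions, use it" logic.
# After the second slice exposed the duplication, the
# commonality gets factored out into shared code:
module Shared
  def self.authorized_fetch(records, id, user)
    record = records[id]
    raise "forbidden" unless record && record[:owner] == user
    record
  end
end

def show_invoice(invoices, id, user)   # slice 1, now using Shared
  Shared.authorized_fetch(invoices, id, user)[:total]
end

def show_receipt(receipts, id, user)   # slice 2, ditto
  Shared.authorized_fetch(receipts, id, user)[:date]
end
```

The shared module wasn’t designed in advance; it’s just the residue of two slices that turned out to need the same thing.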
The end result is a system with an infrastructure.
Moreover, if you do it well, that infrastructure will look like it was designed by some really smart person who provided exactly (and only) the features needed by the current implementation. And it’s an infrastructure that’s well tuned to allow the sorts of changes and growth that the business will demand in the future (because it was built by responding to those sorts of changes in the past).
How often does this work superbly? Probably about as often as big-architecture-up-front does, but it seems to be less risky and (I’m inclined to believe) gets closer to a really good architecture more often.
What would I do?
My first (and ongoing) task as an architect would be modest. I would pair with people on the implementation of vertical slices, concentrating on being a moving information radiator. In some ways, an architecture is nothing but a bunch of libraries, but any library so restrictive that it can’t be misused is one that can’t be used well. People need to know how to use a library, and the best way of teaching them is to sit with them and show them. An effective project has a common style. It’s everyone’s job to develop and communicate that style, but it surely seems an architect ought to pay special attention to it.
I would also be an agent of intolerance. Programmers are way too tolerant of programming experiences that grate, that throw up friction to seemingly every action. They tweet that they spent the whole day shaving a yak as if that were a mildly humorous thing instead of an outrage. As I programmed with my pair, I would insist we remove the friction for the next person who’ll have to do something similar. If it means we miss our story estimate (we’re working on a real vertical slice, remember), so be it: I will model a calm resolve.
If I do this well, all the teams will know they certainly have permission to solve the many small problems that would otherwise slow them down. They won’t wait for someone else (the architect!) to fix them.
Having enlisted all the teams in putting out small fires, would I then leap toward grand architectural visions? Not if I could avoid it. I’d rather work on solving medium-scale problems. Those are problems that even I wouldn’t be willing to solve under the auspices of a single story.
Here might be a scenario: as before, I’m pairing with someone. We notice the need for some abstraction or other solution, one that’s too big to get right during this story or iteration. We can still do three things, though: (1) imagine what a good solution would look like, (2) imagine what sort of smaller steps might get us there, and (3) take the first step. We end up with a delivered story, some rough plans for the future, and code whose structure is more than the minimum required. For example, we might have pushed code out into a class in a new package/namespace. And we might have changed both our new code and some existing code similar to it to use that new package.
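A tiny sketch of what that first step might leave behind (all the names here are invented):

```ruby
# Before: pricing logic inline in the slice that needed it.
# After: pushed out into a class in a new namespace, so the
# next story has somewhere natural to extend it.
module Pricing
  class Discounter
    def initialize(percent)
      @percent = percent
    end

    def apply(amount)
      amount - (amount * @percent / 100.0)
    end
  end
end

# Both the new code and the similar existing code now call through it:
puts Pricing::Discounter.new(10).apply(200.0)  # 180.0
```

The class is more structure than this one story strictly needs, but it marks the direction the rough plans point in.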
Now, I as architect will actively look for new stories that give excuses to take further steps toward the good solution. I’ll pair with people on those stories. We’ll use the stories as an excuse to flesh out the previous solution (in a way that either doesn’t break previous stories or adapts them to use the fleshed-out solution).
I fully expect that the original “good solution” will look somewhat naive partway through this process. That’s fine. If I’m an architect, I’m supposed to be good at adjusting my path as I traverse it.
Eventually, we have a decent solution, some vertical slices that use it, some vertical slices that don’t use it, and some vertical slices that use it in a sort of halfway, intermediate, not-quite-right way. It’s my job to make sure code doesn’t get frozen in that state. Partly that’s a matter of helping on stories that give us an excuse to fix up some vertical slice. Partly that’s making sure my pairing results in ever more people who know how to use the solution and are sure to change code to use it when they get the excuse. (That’s the information radiator and agent of intolerance roles as applied to this one solution.)
What about the grand architectural vision? I don’t know. I’ve factored out medium-scale solutions, but I don’t have the personal experience to know how you get from one major architecture to a completely different one. I hope to gain some vicarious experience of that this year.
So, I’d be nervous about big picture architecture, were I an architect. However, I’d take some comfort in knowing that we’d almost certainly be using frameworks that suggest their own architecture (Rails being a good example of that), and that we wouldn’t go too far wrong if we just did what our toolset wanted us to. (So, even though I’m personally fashionably skeptical of the value of object-relational mapping layers, I’d be conservative and use ActiveRecord, though I’d probably lean toward a development style like Avdi Grimm’s Objects on Rails.)
Still, there are two things I’d emphasize, if only out of fear of my own inexperience:
I do surely think that product directors should emphasize doing high-business-value work early and low- or speculative-business-value work later. However, as an architect, I’d be on the lookout for minimal marketable features that would stress the architecture, and I’d sweet-talk the product director into scheduling them earlier than she otherwise might. (Said sweet-talking would be easier because I’d always be concentrating on reinforcing the gift economy structure that makes Agile work.)
There exists an architectural change cycle that wastes endless amounts of money: humble programmers work within a given architecture; the Architects That Be decree there will be a new one; programmers continue to work in the old architecture because the new one isn’t available; the new architecture appears; programmers begin to use it; it turns out to have serious flaws; now the system has some code in architecture 1 and some in architecture 2, all of which will be superseded by the “This time, definitely” architecture 3; and so on.
I’d hope to force myself to roll out any new architecture gradually (in increments, as it was developed). That imposes extra cost on architecture development and somewhat reduces the possibility of change, but my bet would be the business cost of that constraint would be less than that of a Big Bang rollout.
Tools like Fit/FitNesse, Cucumber, and my old Omnigraffle tests allow you to write what I call “rhetoric-heavy” business-facing tests. All business-facing tests should be written using nouns and verbs from the language of the business; rhetoric-heavy tests are specialized to situations where people from the business should be able to read (and, much more rarely, write) tests easily. Instead of a step written to be friendly to a programming language interpreter, a rhetoric-heavy test states its preconditions in plain English:
I have previously specified default payment options.
(The examples are from a description of Kookaburra, which looks to be a nice gem for organizing the implementation of Cucumber tests.)
You could argue that rhetoric-heavy tests are useful even if no one from the business ever reads them. That argument might go something like this: “Perhaps it’s true that the programmers write the tests based on whiteboard conversations with product owners, and it’s certainly true that transcribing whiteboard examples into executable tests is easier if you go directly to code, but some of the value of testing is that it takes one out of a code-centric mindset to a what-would-the-user-expect mindset. In the latter mindset, you’re more likely to have a Eureka! moment and realize some important example had been overlooked. So even programmers should write Cucumber tests.”
I’m skeptical. I think even Cucumber scenarios or Fit tables are too constraining and formal to encourage Aha! moments the way that a whiteboard and conversation do. Therefore, I’m going to assume for this post that you wouldn’t do rhetoric-heavy tests unless someone from the business actually wants to read them.
Now: when would they most want to read them? In the beginning of the project, I think. That’s when the business representative is most likely to be skeptical of the team’s ability or desire to deliver business value. That’s when the programmers are most likely to misunderstand the business representative (because they haven’t learned the required domain knowledge). And it’s when the business representative is worst at explaining domain knowledge in ways that head off programmer misunderstandings. So that’s when it’d be most useful for the business representative to pore over the tests and say, “Wait! This test is wrong!”
Fine. But what about six months later? In my limited experience, the need to read tests will have declined greatly. (All of the reasons in the previous paragraph will have much less force.) If the business representative is still reading tests, it’s much more likely to be pro forma or a matter of habit: once-necessary scaffolding that hasn’t yet been taken down.
Because process is so hard to remove, I suspect that everyone involved will only tardily (or never) take the step of saying “We don’t need Cucumber tests. We don’t need to write the fixturing code that adapts the rhetorically friendly test steps to actual test APIs called by actual test code.” People will keep writing this:
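That is, both the English layer and the adapter layer beneath it. Sketched here with a hand-rolled step registry standing in for Cucumber’s `Given`, and with invented API names:

```ruby
# The rhetoric layer: English a business reader can follow.
STEP_TEXT = "I have previously specified default payment options"

# The test API the English must eventually reach.
# (TestAPI and its method are invented for illustration.)
class TestAPI
  attr_reader :payment_options
  def create_default_payment_options_for(user)
    @payment_options = { user: user }
  end
end

# The fixturing layer: pattern-to-code glue of the kind Cucumber's
# Given(...) registers, adapting the English to the test API.
STEPS = {
  /^I have previously specified default payment options$/ =>
    ->(api) { api.create_default_payment_options_for("some-user") }
}

def run_step(text, api)
  pattern, action = STEPS.find { |p, _| p.match?(text) }
  raise "no step matches: #{text}" unless pattern
  action.call(api)
end

api = TestAPI.new
run_step(STEP_TEXT, api)
```

Every English step costs a regular expression and a glue block that someone has to write and maintain.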
… when they could be writing something like this:
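Something like the same precondition written directly as code, with no English layer to keep in sync (again, the names are invented):

```ruby
require "minitest/autorun"

# A stand-in for the project's test API, invented for illustration.
class TestAPI
  attr_reader :payment_options
  def create_default_payment_options_for(user)
    @payment_options = { user: user }
  end
end

# The precondition and its check, written in the programmer's
# usual idiom: plain test code calling the test API directly.
class PaymentDefaultsTest < Minitest::Test
  def test_previously_specified_default_payment_options
    api = TestAPI.new
    api.create_default_payment_options_for("some-user")
    assert_equal "some-user", api.payment_options[:user]
  end
end
```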
The latter (or something even more unit-test-like) makes tests into actual code that fits more nicely into the programmer’s usual flow.
How often is my suspicion true? How often do people keep doing rhetoric-heavy tests long after the rhetoric has ceased being useful?
A Twitter conversation between Zee Spencer and Ron Jeffries makes me think I’ve never written down my two-phase approach to release planning.
Phase 1: Someone wants something. They have a problem to be solved by a software system. There are probably a few key features that are needed, perhaps buried in a mass of requirements that pretend to more authority than they’ll turn out to have. If the key features aren’t delivered, the system isn’t worth having. (Or the new release of an existing system isn’t worth having - it doesn’t matter.)
The person who wants the system (or wants to sell it) might have a release date in mind. It might be based on real constraints, like the beginning of a school year or the Christmas sales season. Or it might be arbitrary. Doesn’t matter to me.
At this point, the development team’s job is to answer the question: can this team produce a respectable implementation of that system by that date?
“Respectable” is a somewhat tricky idea. One conceptually easy part of it is “provides the key features”. Another is, embarrassingly, fuzzier: more about whether something can be provided that feels like a “whole” rather than a collection of parts. This is the kind of thing that Jeff Patton’s story mapping is about. It also includes Jeff’s idea of the distinction between incremental development (where you develop by bolting on features one at a time) and iterative development (where you get something complete-for-some-real-purpose done quickly but crudely, then add flesh to the walking skeleton.)
The answer to the “can it be done by that date?” question is going to be something of a leap of faith. The estimate that “we can do that by then” isn’t going to be as reliable as, say, the use of yesterday’s weather in iteration planning.
One way to get a better answer is to just go ahead and start the project. As Cem Kaner has said, you know less about the project right now than you ever will again. Spend a month building the product, then ask: can it be built by the desired date? You’ll get a much better answer. [Note: I’d favor really starting to build the project, just as if you’d made a two-year commitment, rather than doing some pilot study. I’d be suspicious that a pilot study would ignore a lot of the grinding details that take up a lot of project time.] The risk here, though, is that an answer of “well, it turns out we won’t be able to do it” will be unacceptable.
(Whether estimating at the beginning or after a month, I’d personally err on the conservative side because my experience is (1) the development team is more likely to be optimistic than pessimistic, and (2) I favor pleasant surprises - “We can do more! Or release sooner!” - over unpleasant ones.)
Phase 2: At this point, there’s a commitment: a respectable product will be released on a particular date. Now those paying for the product have to accept a brute fact: they will not know, until close to that date, just what that product will look like (its feature list). What they do know is that it will be the best product this development team can produce by that date. It’ll be the best product because the team commits to being flexible enough to put features into the product in any order. Therefore, the most valuable features can go in first, then the less valuable ones, then…
So, in this phase, you can stop worrying about anything but the near horizon. For much of the project, all that’s required is occasional stock-taking. (“Do we still think we’ll have a respectable product by the release date?”) Toward the end, there might be some need to predict more precisely than “best possible, given this team and that date”. Not everything that needs to be done before release might be as changeable as the code. (It’s harder to un-train a salesperson than to turn off the code for a feature that won’t be done in time.) But you’ll be in a good position to make good predictions by then.
My success selling this approach has been mixed. People really like the feeling of certainty, even if it’s based on nothing more than a grand collective pretending.
Critter4Us is a webapp used to schedule teaching animals at the University of Illinois Vet School. Its original version was a Ruby/Sinatra application with a Cappuccino front end. Cappuccino lets you write desktop-like webapps using a framework modeled after Apple’s Cocoa. I chose it for two reasons: it made it easy to test front-end code headlessly (which was harder back then than it is now), and it let me reuse my RubyCocoa experience.
Earlier this year, I decided it was time for another bout of experimentation. I decided to switch from Cappuccino to jQuery, CoffeeScript, and Haml because I thought they were technologies I should know and because they’d force me to develop a new TDD workflow. I’d never gotten comfortable with the testing—or for that matter, any of the design—of “traditional” front-end webapp code and the parts of the backend from the controller layer up.
I now think I’ve reached a plateau at which I’m temporarily comfortable, so this is a good time to report. Other people might find the approach and the tooling useful. And other people might explain places where my inexperience has led me astray.
“The anarchy of evasion” is a term that (I think) I read a few years ago. I occasionally search for it, but I’ve never been able to find anything about it again. As I recall, the idea is a reaction to the fact that an anarchist movement that forms an independent new society or transforms an existing society is… hard. So an alternative is to live within an existing society but try to make as many as possible of your activities and interactions run according to the principles you wished the whole society ran with. You behave as you wish, but in a way and context that increases the chance you won’t be noticed and therefore stopped.
The idea appeals to me, perhaps because I’ve always liked the idea of complex and interesting things happening unnoticed in the cracks. Probably a result of reading The Littles as a kid and books like You’re All Alone and Neverwhere later.
In some ways, my original post makes it seem as if normal programmers are like the drug dealer D’Angelo Barksdale eating in a fancy restaurant in “The Wire”: everyone stared, the waiter rubbed D’Angelo’s ignorance of fine-restaurant customs in his face, and D’Angelo’s introspective attitude didn’t hide a desperate desire to leave. Well, yes (though nothing in a programmer’s life is as bad as anything in a “The Wire” character’s life), but there’s more to it.
I think Clojure already suffers from being a language for hardcore programmers to solve hardcore problems. For example, that hardcore attitude makes things like nice stack traces seem unimportant. More: fishing through stack traces can almost become a perverse rite of passage. Do you have what it takes to program in Clojure, punk?
The history of programming, as I’ve lived it, has been a march where expressiveness led the way and performance followed. I came of age when the big debate was whether we new-fangled C programmers and our limited compilers could ever replace the skilled assembly-language programmer. C won the debate. When Java came out, the debate was whether a language with byte codes and garbage collection could be fast enough to be a justifiable choice over C/C++. Java won the debate. Ruby is still being held up as too slow in comparison with Java, but even the pokey MRI runtime is fast enough for a whole lot of apps. Perhaps the emerging consensus is that Twitter-esque success will require migrating from Ruby to JVM infrastructure—and that’s exactly the kind of problem you want to have.
Against that background, it’s notable that Clojure 1.3 is taking a step backward in expressiveness in the service of performance. (By this, I mostly mean that performance—calling through vars, arithmetic operations—is the default and people who are worried about rebinding and bignums have to act specially. So my super-cool introduction-to-Clojure “Let’s take the sum of the first 3000 elements of an infinite sequence of factorials” example will become less impressive because arithmetic overflow is now in the programmer’s face.) That’s worrisome. There’s not much new in 1.3 to tempt the hitherto-reluctant Java or Ruby programmer. And unless I care about performance or am a utility library maintainer, I don’t see very much in 1.3 to tempt me away from 1.2.
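Concretely, that introductory example leaned on ordinary arithmetic quietly promoting to bignums; under 1.3 semantics the programmer has to opt into promotion with the primed operators. A sketch of both:

```clojure
;; An infinite sequence of factorials: 1, 2, 6, 24, ...
;; Under Clojure 1.2, plain * and + auto-promote to bignums:
(reduce + (take 3000 (reductions * (iterate inc 1))))

;; Under 1.3, plain * throws ArithmeticException on long overflow,
;; so you must ask for promotion with the primed variants:
(reduce +' (take 3000 (reductions *' (iterate inc 1))))
```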
Now, 1.3 is not a huge hit to expressiveness, and Clojure already has pretty much all the expressive language constructs that have stood the test of time (except, I suppose, Haskell-style pattern matching). So what else is left for someone of Rich Hickey’s talents to work on but performance? Documentation?
That’s a good point, but I worry about how performance will drive the constructs available to Clojure users. For example, multimethods feel a bit orphaned because of the drive to take advantage of how good the JVM is at type dispatch. There’s a whole parallel structure of genericity that’s getting the attention. And conj is a historical example: it’s hard to explain to someone how these two things both make sense:
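The standard demonstration, with results shown in comments. `conj` adds wherever it is cheap for the concrete collection:

```clojure
(conj '(1 2 3) 4)  ;=> (4 1 2 3), at the front (cheap for a list)
(conj [1 2 3] 4)   ;=> [1 2 3 4], at the end (cheap for a vector)
```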
As with arithmetic overflow, it strikes those not performance-minded (those not hardcore) as odd to have lots of generic operations on sequences except for the case of putting something on the end. “So I have to keep track of what’s a vector? In a language that creates lazy sequences whenever it gets an excuse to?”
Catering to performance increases the cognitive load a language puts on its users. Again, the hardcore programmer will take it in stride. But Lisp and functional languages are different enough that I think Clojure needs more focused attention on the not-hardcore programmers.
Now, the quick retort is that Clojure isn’t my language. It’s a gift from Rich Hickey to the world. Who am I to demand more from him? That’s a good retort. But my counter-retort is that I demand nothing. I only suggest, based on my experience with the adoption of ideas and languages. It’s hard making something. More information (from users like me) is probably better than less.
If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem.
… I could say that I read a snarky “[those losers]” implicitly attached to “okay for them.” I could say that “fifty percent of your time writing tests” and “rather spend that time thinking about my problem” are complete mischaracterizations of TDD as it’s actually done. And I could say that those mischaracterizations and that snark will be aped by programmers who want to be Rich Hickey.
I could say all those things, but the statement as written isn’t false. It can even be given a generous reading. But—if I’m right—programmers like me won’t stick around to hear defensible statements defended. They’ll just look for friendlier places where they aren’t expected to deal with decade-old arguments as if they were new and insightful.
Please note that I’m not saying Rich Hickey’s design approach is wrong. I suspect for the kinds of problems he solves, it may even be true that his approach is better across all designers than TDD is.
One of the characteristics of Agile is that it favors the reactive over the proactive. For example, you grow the program in response to new feature requests rather than needing to know all the features in advance. Programmers regularly talk of reacting to “code smells” or paying attention to “what the code’s telling us”.
Being appropriately reactive is a skill. You’re not born knowing how to attend to subtle details, or how to react in a graceful, helpful way.
Skills can be taught. All over the world, people are learning the complementary skills of leading and following in tango, an incredibly improvisational and reactive dance. I believe the learning techniques tango instructors use can be adapted to learning other kinds of reaction—like the reactions of pair programmers to each other and to the code.
In the first part of this session, I’ll teach you some simple tango moves. We’ll concentrate both on learning the moves and also on how we’re learning the moves.
In the second part of the session, we’ll break out computers and write programs or tests (or collaborate on some other task of your choice). Will you be able to react more appropriately to your pair? Will you give off better signals? Will consciously practicing the skill of pairing help you be better at it? We’ll see!
Important note: In my experience, most people are more comfortable learning to dance with someone of the opposite sex. I’ve also noticed that there are a lot more male programmers than female programmers. I worry the title will attract only programmers, leading to a big imbalance. So I encourage everyone to encourage women at the conference to attend at least the first half of the session. They’ll get something out of it (beyond an introduction to tango).