I’ve been meaning for some time to write about how slow “quick and dirty” really is, how misnamed the term is, how misguided are the hordes of managers (many of them former or current software developers) who embrace “quick and dirty” as a fast solution to pressing problems, as though it could ever deliver an actual solution.
I’ve been meaning for some time to write this. It took Lidor Wyssocky’s latest blog post to push me into it. And just in case someone out there couldn’t tell that you were joking… (You were joking, right?) The sad truth is that “quick and dirty” may be dirty, but it’s never quick, at least not if you actually want to deliver a usable product.
“Quick and dirty” never has all the features the customer expected, and it rarely ever will. Because when you try to support all those edge conditions, the complexity quickly overwhelms you. It takes you twice as long to add the code to support the edge conditions, and in doing so you introduce innumerable new bugs.
“Quick and dirty” is what the A-Team did to build a tank out of a truck or a jeep. Yeah, maybe it got them out of their current predicament. But 5 minutes later, the whole landscape was in ruins (including the tankified vehicle). Is that what you want to deliver to your customers? I’m going to guess “Not.”
Pam Slim, in an interview in this very blog, talked about some of her early consulting clients:
As your readers know, “doing the right thing” in corporations is not often supported, especially by senior management. But I felt really passionate about spending their money and my time on things that were most likely to make a real difference. I turned away many clients who insisted that I do things that I felt would be counter-productive to their business goals…
… and in what situations should we as professionals allow ourselves to adopt quick-and-dirty? (You are a professional, right?)
The general rule I use is that if it’s a spike solution for proof-of-concept or one-time demo, it’s okay to do quick and dirty. Because these things do not get delivered to customers. Before I deliver proof-of-concept or demo code to a customer, I clean it up. Or I rewrite it. Because I know that doing it right the first time is going to be both better and faster than any other way.
Deep in the recesses of my memory, there’s a story about a software development workshop. (I wish I could remember the citation.) The developers who attended the workshop were given some programming problem to solve and a limited amount of time in which to solve it. Those developers who took a little time first to think about the problem and how they might solve it generated more reliable solutions in less time than their counterparts who just started coding.
(By the way, the opposite of “doing it right” is not “doing it quick.” The opposite is “doing it wrong.”)
This is a running theme on Mythbusters. Host Jamie Hyneman, faced with a challenge, thinks first before constructing a solution, always looking for the simplest, most elegant design. Co-host Adam Savage, on the other hand, plunges forward impulsively, with cool, clever gizmos and gadgets. They frequently do episodes where the two Mythbusters compete to construct a device that will do such-and-such. Guess who more often achieves the goal within the allotted time, usually with a solution that makes you go “Wow.”
All software designs have some necessary complexity. Good design manages this necessary complexity, dividing it into smaller, simpler parts. I had a manager, a practicing software developer, who occasionally would criticize me for always doing grandly architected software designs. This puzzled me. When I pressed him on the issue, he told me that I had introduced unnecessary complexity. But every design feature he complained about was something that would be there anyway. So I told him so. Then he hit me with what I believe was the real issue. He said, sometimes we have to get the software out there, and we don’t have time for grand architecture.
As a professional, how can I see eye-to-eye with such a man? I was not sorry for my architectural decisions. Those choices even allowed us to deliver features we previously could not even consider. And I will always be proud of his negative review. Because I knew then, and still know, that it’s faster to do things right the first time than to do them wrong.
Or to put it another way: I’d rather be known as someone who actually delivers what he says he will when he says he’ll deliver it, rather than someone who does “whatever it takes” and then delivers something that doesn’t actually work.
-TimK
Keep in mind that there is also a core Agile principle where you implement only the minimum amount necessary to accomplish the stated goals. The idea being that you deliver the most value to the customer for the least cost.
Superficially, the Agile approach has something in common with “quick-and-dirty,” in that you are deferring architecture and infrastructure. I imagine some managers have difficulty distinguishing the two.
A skilled developer will recognize that with the Agile approach, you achieve “quick” by limiting scope, by implementing a flexible, easily morphed architecture for the portion you do build, and by not forgoing tests.
Ultimately, the biggest time gains you can achieve in any software project happen as a result of negotiating scope (the feature set in the next release) with the customer, not by taking construction shortcuts.
Good point, Tom.
I just reread this post— and can hardly believe it’s only 6 years old: it feels like an eternity ago. But I know I was experienced enough at that time to understand the Agile principles of “Do the simplest thing that could possibly work” and “Refactor continuously.” When you do those, your architecture ends up as complex as the problem. So whatever requirements were in the original problem end up expressed in the code. Lots of interacting stated goals result in lots of code and architectural complexity.
I think “Refactor continuously” is one concrete difference between good Agile development and quick-and-dirty development. At least, if you don’t refactor, you often end up with something that looks as though it was developed quick-and-dirty style. And that’s something even a Dilbertesque manager ought to be able to grasp, even though he probably would fight against it. “Why spend time refactoring when we could be adding new features instead? Besides, refactoring is dangerous; we might introduce bugs.” (Or maybe managers don’t parrot such arguments anymore? That would be nice. I can’t say I’ve heard either of them in several years.)
The following is not related to your comment, but I thought I’d lay out the story anyhow.
If I remember the “grand architecture” story correctly (and I probably should have told it in more detail in the post above), the feature I had developed was a directory cache for a multithreaded streaming video server. So you have video streams coming in, video streams going out, a filesystem in-between, everything running in different threads, and the problem was that when you had lots of hits to the same filesystem directory, just doing the directory listing was becoming a bottleneck. I probably developed this using Agile programming principles, because I always use Agile programming principles (though I may not have been as disciplined at the time, especially not with that codebase).

In any case, after a week or two of development, during which I wrestled with issues like cache consistency, read-write policy, and thread synchronization, my manager—a practicing developer himself—was confused as to why I needed all these complex classes for the cache controller, cache content, file-system interface, and so forth. He had apparently envisioned a function somewhere that would simply save the cached directory listings in an array or something. (Probably in global memory.) Oy vey. He didn’t get that such a function would have had as many lines of code as (probably more than) my eventual solution, and all the same issues, along with more bugs, but buried in an indecipherable mass of could-be-cyphertext.
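To give a flavor of why the concerns separate naturally, here is a minimal sketch of that kind of design. This is not Tim’s actual code (that system was presumably C++); the class names, the time-based staleness policy, and the `FileSystemLister` interface are all assumptions chosen for illustration. The point is that even the simplest correct version needs a locking story, a staleness policy, and a seam between the cache and the filesystem:

```python
import os
import threading
import time


class FileSystemLister:
    """Filesystem interface: the only component that touches the disk.

    Keeping this behind a seam lets you swap in a fake for testing,
    which a single do-everything function in global memory would not.
    """

    def list_directory(self, path):
        return sorted(os.listdir(path))


class DirectoryCache:
    """Cache controller: thread-safe, with a simple time-based staleness policy.

    (A hypothetical stand-in for the cache-controller/cache-content split
    described above; a real server might use reference counting or
    change notification instead of a fixed max age.)
    """

    def __init__(self, lister, max_age_seconds=5.0):
        self._lister = lister
        self._max_age = max_age_seconds
        self._lock = threading.Lock()
        self._entries = {}  # path -> (timestamp, listing)

    def listing(self, path):
        now = time.monotonic()
        with self._lock:
            entry = self._entries.get(path)
            if entry is not None and now - entry[0] < self._max_age:
                return entry[1]  # fresh hit: no disk access
        # Miss or stale: do the (slow) disk read outside the lock,
        # so readers of other paths aren't blocked while we wait on I/O.
        fresh = self._lister.list_directory(path)
        with self._lock:
            self._entries[path] = (now, fresh)
        return fresh
```

Each class carries one concern: the lister knows about the disk, the cache knows about freshness and locking. Collapse them into one function and every one of those lines is still there, just tangled together.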
(As far as I know, that system is still as utterly unmaintainable as the day I left the company. Last I heard, they still had not merged into the baseline the architectural improvements I had helped design that would have allowed them to support HTTP version 1.1.)
To this day, I still believe that by separating concerns and issues into different classes, I was simplifying the solution, not making it more complex. There was no way to implement that cache as a simple function, or else that’s what I would have done. And there was no way to deal with edge-cases like race conditions and stale data without adding all that additional code. So I still don’t see how I could have approached the problem any differently.
-TimK