As Neal Ford explains, the NFJS Anthology series has been reborn as a monthly magazine and in the current edition, you can read my take on test infecting legacy organizations. I’ve been a proponent of the testing meme for most of my career but I’ve also spent much of that time convincing reluctant coworkers (and managers) that testing was in their best interest – the article takes my talk of the same name and puts it to paper. All NFJS attendees get a complimentary copy of NFJS, the Magazine, but anyone is free to subscribe. Each month you’ll get an eclectic mix of articles written by NFJS speakers on topics they are passionate about; if you’d like to see a sample article, check out Jared Richardson’s A Case for Continuous Integration [PDF]. Enjoy!
Today we learned something important: the NTSB announced the results of its investigation of the 35W bridge collapse. Turns out it was a design flaw – some gusset plates weren’t quite up to snuff. As a result of this tragedy, bridges of similar design will undergo much needed scrutiny and we won’t see these types of designs in the future. Heck, by now engineering textbooks have probably already been updated.
Contrast this with the average failing software project. Maybe this is a bit of a stretch (much like the bridge construction metaphor), especially considering that few software failures result in the loss of human life. But when was the last time anyone published a report about what went wrong with a multimillion dollar software collapse? No, we bury our mistakes (near Jimmy Hoffa) and pretend that the next time, when we do it JUST like we did it this time, it will work. It’d be too difficult to admit we did something wrong, even in the “safe” confines of our own organizations protected with lengthy NDAs.
Retrospectives are a vital part of creating better software, but to be effective they require a level of maturity that few organizations possess. Taking an honest look back at what happened – what went well and what went wrong – leads to better results, but only if you can discuss the project/increment/whatever openly and, more importantly, change how you do things. Life is full of constant adjustments; why should software projects be any different?
Clearly I’ve kicked off a trend – one day, I post about process, a few days later Reg Braithwaite coins a new term and the Daily WTF posts an ode to process. OK, I’ve only got three readers so I know it’s all just a big fat coincidence but still! Process theatre really does capture the notion perfectly; pointy haired bosses believe more templates, meetings and blind adherence to methodology are the answer to failing projects. Of course when you remove all decision making from the hands of sentient beings, you get a lost workday instead of someone just hitting the spacebar.
I’m not anti-process; you’ve got to have some framework to hang your work on. I plain can’t stand dogma though and I thoroughly believe you need to test your processes just as much as you test your code. Is this document/meeting/hurdle saving us money? Resulting in better projects? Happier customers? Whatever the raison d’être, we should make sure it’s actually delivering. Do more of what works, less of what doesn’t. Rinse and repeat.
Software is full of ilities – those quality attributes that more seasoned veterans (or anyone that thinks beyond today’s quarter) care an awful lot about. Some common non-functional requirements bandied about include scalability, reusability, flexibility, testability, availability, usability, adaptability, maintainability…really we could go on and on. Individually, none of these is more or less important than any other, though depending on what you’re building for whom, certain attributes are given more or less weight. If I’m working on a simple app to manage my wine collection, I probably don’t care too much about scalability. But, when designing a ratings engine to process thousands of transactions, my concerns change. To put it succinctly, it depends and it’s always about tradeoffs.
Lately I’ve noticed a lot of projects value dateility above all else. Now, this isn’t necessarily a bad thing. Say you’ve got an important industry conference in six weeks and you need to have a demo ready, or the books have to be unified on the close date of a merger – I’ve been in situations where hitting a specific date really was critical to the success of the project.
But then there are those times where the date is arbitrary, pulled out of the hat by some manager or VP in an effort to please their bosses or curry favor with the person cutting the checks. I remember one project where the importance of the date was reiterated to us again and again, only to be told at the holiday party that the plan really had us finishing a couple of weeks after the almighty date. That didn’t sit well with those of us logging all that extra time and we spent most of the next month cleaning up the code in preparation for the next march.
Of course dialing any ility to eleven means others will be turned down to compensate. When we focus on the date at all costs, we stop testing, ignore best practices, and we’re left with a ball of mud. We might have “saved” a little time, but odds are we’ve cost ourselves significantly more in the long run. When building a house or a bridge, the consequences of shortened schedules are easy to see; with software, it’s harder to diagnose but no less real. High defect rates, difficult-to-use systems and high estimates for new feature work are typical markers of a rushed project.
The effect on team morale is evident to anyone that cares to see it. Nearly everyone I’ve worked with genuinely wants to do good work; they want to take pride in what they’re building. When forced to do a half-assed job, they don’t take it well. The key is saying no – building less – but finding a manager or VP willing to do that is nigh on impossible. Agile techniques help, but culture trumps all – if people are rewarded based solely on hitting a date, success will be redefined to make sure the maximum bonuses are paid out.
Andy Hunt recently posted a great piece: Stage 0: Not Ready For Agile. He was all set to give a talk at a company until, well, someone discovered he was coming. Turns out the manager that contacted Andy hadn’t followed the non-existent process and instead of congratulating him or her on bringing in a well known speaker, they decided it’d be better if it could never happen again.
As stunned as I was by this, I’m not surprised. Talk is cheap – it’s easy to say you want to be (or are) agile but the proof is in the pudding. You can say you value collaboration but when there are three levels of indirection between developers and end users, that statement rings hollow.
Culture plays a huge role in how we build software; Andy lists several traits that indicate you might not quite be up to the challenge of agile. Some are fixed more readily than others, but if your culture won’t support it, you’ve got your work cut out for you. Unfortunately, cultural issues don’t respond to technical solutions, as Reg Braithwaite says so well with this quote:
“Cultural problems cannot be solved with technology. If you are an advocate for change, ask yourself what sort of cultural change is needed, not what sort of technical problems need to be solved.”
Changing culture is hard, but for many organizations, it’s the critical first step towards better software.
It shouldn’t be a big surprise that I prefer low ceremony agile processes to their heavyweight waterfall brethren. While I’m certainly not anti-process, I’ve spent way too much time in meetings defining how we were going to write software. Recently I wondered, do other industries spend so much time on mundane details like templates for issue logs? Perhaps they do, but I doubt the architects of the new 35W bridge spent a lot of time discussing how they would document the plans for the new roadway.
Most companies I’ve worked for spend an awful lot of time and money on process. Much as no serious enterprise would ever consider deploying SAP or PeopleSoft without a hefty dose of customization, no organization would dare use an off-the-shelf process. No, it’s better if a group of people spend months coming up with the perfect approach and a set of presentations that would make Tufte weep. Despite a lot of pats on the back, I’ve yet to see this effort lead to any significant business value, yet it is a persistent part of the environments I’ve worked in.
It seems to me that most companies focus on process for one of three interrelated reasons. First off, there are a number of people in technology that, well, aren’t very technical. Some of these folks *used* to be, but many just bluffed their way into higher paying roles and well, they need things to work on. Of course not everyone needs to spend their free time brushing up on the finer points of closures in Java, but last I checked templates don’t compile their way into working software.
Second, software projects have a nasty tendency to fail and to combat this fact, most organizations want greater control over the work. To accomplish this goal, they invent more and more process, more gates, more checkpoints. Of course it doesn’t really work that way, and the tighter they grip, the worse things get (hmm, sounds like a Star Wars quote to me.)
Project failure brings us to reason number three: plausible deniability. If (or is it when?) a project isn’t everything it can be, we can always point the finger at the process as either a point of blame (if only the process had a few more gates…) or as an example of success (as in: we’ve got a great process now…) Following the corporate standard also gives managers et al an easy out: yeah, the project failed, but we followed the almighty process.
Don’t get me wrong, heroic software development isn’t good either. I’ve worked in environments best described as firefighting and frankly that’s just too much stress for me (though some people I know absolutely thrive on that rush.) Like so many things, process lives on a continuum. At one end, we have absolutely nothing – hack and code, cowboy coding, whatever you want to call it. On the other end, we’ve got extremely heavyweight command and control approaches that attempt to plot out when every developer will use the bathroom.
Truth is often found between extremes and we should seek a balance. Do what’s right for your project, though I’ll always favor the less-is-more camp. And before you spend the next few months creating the perfect process, see if something off the shelf will work. Better yet, try some stuff and see what works for you – repeatedly ask yourself two questions: what worked, and what didn’t? Do more of the first and less of the second and you might just find yourself on a successful project.
After listening to an OOPSLA podcast about a workshop on Fred Brooks’ widely read No Silver Bullet, I was inspired to reread his seminal piece. Though it’s 20 years old, I was struck by just how applicable NSB is today, and while there are a few things that place it in time, as I’ve said before, the more things change, the more they remain the same. Heck, I even decided to assign it at dynamic language camp. Much of what Brooks writes about relates to accidental vs. essential complexity, a topic that’s echoed by Neal Ford in this post and Reg Braithwaite here. Stu Halloway touched on this at Code Freeze this year though he rephrased the concept as essence vs. ceremony. More and more, we’re finally heeding the message found in this C. A. R. Hoare quote:
“Programmers are always surrounded by complexity; we cannot avoid it. … If our basic tool, the language in which we design and code our programs, is also complicated, the language itself becomes part of the problem rather than part of its solution.”
Anyway, on to Brooks. In the spirit of a number of Ted Neward‘s posts, I’ll take snippets of NSB and inject my thoughts. Let’s start near the top with this gem:
“[Germ theory] told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.”
Though perhaps not what he intended, I see this as yet another call for continuous integration as well as fixing broken windows. It isn’t easy, it takes a great deal of work, but when we fail to be diligent, our “patients” get sick. And anyone that’s ever worked on decaying software knows how much fun that is…
This quote should be endlessly fed to those that think programmers are essentially typists:
“I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems.”
This work is made up of thought stuff – and anything we do to disrupt flow will ultimately hurt our chances of successfully developing software.
To those that think some tool or modeling language will make software so easy anyone can do it, I’d counter with this:
“The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.”
In other words, software is hard…though we often make it harder. Sometimes that’s related to our organizations:
“Much of the complexity that he must master is arbitrary complexity, forced without rhyme or reason by the many human institutions and systems to which his interfaces must conform. These differ from interface to interface, and from time to time, not because of necessity but only because they were designed by different people, rather than by God.”
Further confirming that the problems in software are largely people oriented, one can practically hear Brooks’ echo in the agile manifesto:
“The central question in how to improve the software art centers, as it always has, on people.”
“The differences between the great and the average approach an order of magnitude.”
These days, I’m often asked how we’re going to “scale up” our development teams, which is really just management speak for “offshore 80% of the work.” Now, fundamentally, I don’t have any issues with taking advantage of vast labor pools, but I’d much rather have a small team of top notch developers than a large team of, well, less than average ones. I’m not sure if it’s just the fiefdom complex or the overriding dictate of distant management, but big teams are usually problematic. With a few great developers, I can move the world. And let’s never forget garbage in, garbage out.
Back to Brooks – he’s more entertaining than I am. He touches on something near and dear to my heart in praising higher-level languages:
“Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.”
Seems to echo what people say about Rails, Ruby and a host of other languages these days. Expressiveness matters – a lot.
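To make the expressiveness point concrete, here’s a tiny Ruby sketch of my own (the wine-collection data is invented for illustration, not from Brooks): the intent reads almost as prose, where the equivalent in a lower-level style would need a loop, a mutable accumulator, and an explicit conditional.

```ruby
# A toy wine collection – an array of hashes standing in for real records.
bottles = [
  { name: "cab franc", price: 18.0, in_stock: true },
  { name: "riesling",  price: 12.0, in_stock: false },
  { name: "malbec",    price: 15.0, in_stock: true }
]

# One expression: keep the in-stock bottles, sum their prices.
total = bottles.select { |b| b[:in_stock] }.sum { |b| b[:price] }
total # => 33.0
```

Nothing magical here – just a language that lets you say what you mean without the syntactic ceremony getting in the way.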
I’d argue this is largely what Stu was getting at in his Ending Legacy Code talk:
“[Abstract types and hierarchical types] removes yet another accidental difficulty from the process, allowing the designer to express the essence of the design without having to express large amounts of syntactic material that add no information content. For both abstract types and hierarchical types, the result is to remove a higher-order kind of accidental difficulty and allow a higher-order expression of design.”
Getting rid of boilerplate and focusing on the problem at hand is key. I know a lot of developers who defensively shout “but my tool handles all that for me.” Sure. See above. And don’t forget, you or someone coming in behind you will still have to read all those excess symbols.
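Brooks’ point about abstract types shedding “syntactic material that adds no information content” shows up in miniature in Ruby’s Struct. A hedged sketch (Point and distance_to are my own invented example): the constructor, accessors, and value equality are generated, so only the essential behavior is written by hand.

```ruby
# Struct generates the constructor, readers, and equality for us.
Point = Struct.new(:x, :y) do
  # The one piece of essential logic we actually care about.
  def distance_to(other)
    Math.sqrt((x - other.x)**2 + (y - other.y)**2)
  end
end

a = Point.new(0, 0)
b = Point.new(3, 4)
a.distance_to(b)      # => 5.0
a == Point.new(0, 0)  # => true – equality came for free
```

Compare that to hand-writing the same value type with explicit fields, getters, and an equals method; the accidental difficulty is gone and what remains is the design itself.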
Though I wish buying new hardware would solve all our woes, Brooks reaffirms what we already know:
“More powerful workstations we surely welcome. Magical enhancements from them we cannot expect.”
Bummer. Guess I’ll need a better reason to get that new MBP.
There’s quite an agile flavor to NSB and a number of Brooks’ comments speak very well of what is becoming a more and more common approach to writing software. I’m not sure about you, but I haven’t seen much success with waterfall…but iteratively developing solutions seems to work, a point he makes quite clearly:
“Therefore, the most important function that the software builder performs for the client is the iterative extraction and refinement of the product requirements. For the truth is, the client does not know what he wants. The client usually does not know what questions must be answered, and he has almost never thought of the problem in the detail necessary for specification.”
Despite what some think, you just can’t get it all right up front. This isn’t some fundamental failing; it’s a feature, not a bug. Rather than attempt to fight this, just work with it; instead of trying to write it all on the plan before you break ground, iterate. Software types tend to be good abstract thinkers, but our customers often aren’t – which is why getting working products in front of them early is so important:
“I would go a step further and assert that it is really impossible for a client, even working with a software engineer, to specify completely, precisely, and correctly the exact requirements of a modern software product before trying some versions of the product.”
The value in this approach seems so obvious, yet it still isn’t “the norm.” I honestly cannot understand why customers don’t demand this process.
“Much of present-day software-acquisition procedure rests upon the assumption that one can specify a satisfactory system in advance, get bids for its construction, have it built, and install it. I think this assumption is fundamentally wrong, and that many software-acquisition problems spring from that fallacy.”
He goes on to discuss growing software, an analogy that really speaks to me. I remember mentioning something along these lines to a former co-worker only to be rather rudely dismissed:
“Incremental development—grow, don’t build, software.”
When I discuss agile with skeptics, I really try to hammer on this point:
“One always has, at every stage in the process, a working system. I find that teams can grow much more complex entities in four months than they can build.”
All I can say is, I’m no Fred Brooks. But what does he know, right?
Systems that spin along for months (or years) without that iterative review tend to fail – often rather expensively. I remember one project many years ago where the client eventually ran out of money (remember the dot com implosion?) Due to decisions made outside my influence, we really didn’t have anything he could use. Sure, he could “demo” the product, but he certainly couldn’t sell it. We’d spent a great deal of time designing everything in the system – all the screens, all the interactions…too bad we hadn’t spent more time building. Had we worked more iteratively, he would have had *something*. Oh well, lesson learned.
As I mature in this industry, I become more aware of our pioneers; as I find my path I discover how far out they saw. Though recent times have seen amazing advances, we have much to learn from our past.
A few years back, my wife and I built a house. OK, so *we* didn’t actually swing a hammer; if you’ve ever seen me attempt a project around the casa, you know hiring the project out was the sanest course of action. This was my second go round with home construction, a process that many say they’d never repeat. But hey, I’m a glutton for punishment so in we plunged! I’d mostly walled off the entire experience in that place we put painful events like child birth and two-a-day practices, but last week one of my projects got me thinking that maybe, just maybe, there’s a comparison to be made between house construction and building software.
Before I get started, I have to consider whether I’m on shaky ground here – many people have written pieces questioning the whole construction analogy. For starters, my friends Neal Ford and Glen Vanderburg have their takes with building bridges without engineering and bridges and software. Jack W. Reeves’ classic What Is Software Design? is a must read and Reg Braithwaite has a great post about what he admires about engineers and doctors which has this money quote:
Try this: Employ an Engineer. Ask her to slap together a bridge. The answer will be no. You cannot badger her with talk of how since You Hold The Gold, You Make The Rules. You cannot cajole her with talk of how the department needs to make its numbers, and that means getting construction done by the end of the quarter.
Considering the shared experience behind that impressive collection of wise words, I’m questioning my sanity to even think a construction analogy might fit software. But, even though I’m conflicted about the whole thing, I’m still going to share. Heck, maybe I’ll learn something in the process.
When we built our house, we spent several weeks looking at model houses, poring over floor plans, looking at carpet samples – all sorts of fun stuff. Now, with the last “home project,” during the requirements gathering phase we were just focused on the big issues: do you want a built in here? Would you like a fireplace? How about a skylight here? You know, the stuff you’ve got to get right before the foundation is set and the walls are up. At various stages, I’d get a call from the project manager (yep, that was his title, I’m feeling more confident already!) and he’d set up a time for me to come out to the site and work with one of the trades on issues like outlet placement. With my sample size of one, I expected a similar (iterative) process this go round – alas I was wrong.
You see, this builder had a different approach. They believed heavily in making all (and I mean ALL) the decisions up front (can you say BDUF?) Being a software geek, I think I do a pretty good job of thinking abstractly but needless to say, it can be quite a challenge to figure out where you want your phone jacks when all you have to go on is a 2D model of your future dwelling. I pushed back on the builder and was told they did this for a reason – they felt that if everything was on the plan, I could go on (as they put it) a four month vacation and come back to a completed house *exactly* as I intended it to be.
As much as I wanted to believe the people I was about to give a very large check to, I wasn’t convinced and as you might expect, my wife and I were on site pretty much every other day keeping track of what was going on. It was a good thing we were vigilant customers constantly running our acceptance tests. Nearly every visit revealed something that needed to be fixed, a story to be added to the backlog (or punch list in this case.) Some things were minor – a switch not controlling the proper light or a misunderstanding about what the plumbing code would allow. But others were, well, of the show stopper category. For example, despite a very clear floor plan showing where the washer and dryer were to be, the plumber decided he’d just put the washer where it was in every other house. Thank goodness we caught it early, but this whole “get it on the plan” thing certainly didn’t work in practice.
So what the heck does this have to do with software? Well, one of my projects has a customer group that thinks like my builder – they want to give us all the requirements, we give them an estimate, then they don’t talk to us until the project goes live. Obviously, this doesn’t work too well. I’m not sure why people ever thought this approach worked, heck, if we can’t get it right with houses (something we’ve been building, oh, forever) how can we possibly get close with a discipline as new as software? No, the answer is found in an agile approach where we work closely with our customers. Does that mean we need to see them for eight hours a day everyday? I sure hope not, but if they aren’t willing to commit some time to the project, how important could it actually be?
Needless to say, we’re trying to get the customers to think different, to embrace a more collaborative approach and I hope we succeed. Otherwise, I have a pretty good idea what will happen after that four month vacation.
In my Test Infecting talk, I do my best to counter a number of myths that I (and others) have encountered when introducing testing into an organization. One of the most persistent misconceptions revolves around time – or rather the lack thereof. Many a developer has claimed they don’t have time to test, to which I generally reply with a Pat Parelli quote from this post on Kathy Sierra’s blog:
“Take the time it takes so it takes less time.”
Kathy was talking about multitasking but my point is simple: forgo testing and you’ll pay that price plus more later when the defects start rolling in. While I *think* this is persuasive, Dean Wampler went one better by using charts, which we all know makes for a better argument. Dean makes some great points in Why you have time for TDD (but may not know it yet…), though the part about moving the unscheduled project end time up earlier into the project really hit home.
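For the skeptics who say the first test takes too long, here’s how little one actually costs – a minimal sketch using Ruby’s bundled Minitest (the discount function is a made-up example of mine, not from the talk):

```ruby
require "minitest/autorun"

# The code under test: a trivial pricing rule.
def discount(price, percent)
  price - (price * percent / 100.0)
end

# The "infection" starts with one cheap test; it pays for itself
# the first time a regression sneaks in.
class DiscountTest < Minitest::Test
  def test_ten_percent_off
    assert_in_delta 90.0, discount(100.0, 10)
  end
end
```

Five minutes of work, and now nobody can quietly break the pricing rule without hearing about it. That’s the time it takes so it takes less time.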
Ranges are fine and the key to success is frequent milestones; as we learn more about the problem domain and the technology we are using, the more accurate our estimates. But most organizations take a random guess (with, I’d say, a wind’s spittle of support) and turn that into a concrete date around which the world turns. They then ignore all the little milestones (if they track them at all) or they green shift the project status. The result is failure, though sometimes we redefine that word to mean something else entirely…
Despite our best efforts, many technology projects don’t succeed (and a few that do define “success” in interesting ways…) Many many words have been spilt trying to answer why failure is so common and even more describing the one true way to counter that sorry state; I’m certainly not going to add to that ink bath but maybe I can give you a heads up that will save you the pain of yet another death march. In a series of chats with various project survivors, I’ve assembled the following short list of signs that it might be time to find something new.
- “The file extension is .java” is the first item on your code review checklist.
- You throw out a random TLA in a meeting and no one misses a beat.
- The project manager says the data model is already 95% done.
- The use of the term “code smell” is outlawed.
- Developers insist that unit testing will only slow them down.
- Estimates aren’t ranges.
- Your manager tried to send people to Waterfall 2006.
- Technologies are evaluated based on the quality of the golf course.
- The first thing the tech lead does when he downloads a piece of open source software is hack the code.
- The term “Big Bang” is thrown around liberally.
- Your project structure requires an upgrade to your toolset in order to function properly.
- The architect is babbling like the Oracle at Delphi…and everyone is nodding.
- You feel the need to build a “concrete containment building” around a part of your code base.
- Your IDE is responsible for generating nearly all your code.
- All technical questions are answered with a recap of the project history.
- End users have update access to the production database tables.
- You do something special when your page has checkbox3.
Though influenced by my own experience, any resemblance to projects real or imagined is entirely coincidental… [Shortly after writing this, I was listening to a Java Posse podcast that had a nice list of project smells.]