It Takes a Project to Raze a Forest
What's Wrong and What's Right with Software Design
One of the basic premises of evolution is that traits that are advantageous to the species are perpetuated and adverse ones winnowed. Extending the metaphor to project management (and specifically as it is applied to software design), you would expect that modern "best practices" would be the ones that had resulted in the most benefit to companies, attracting new "mates" (projects). Conversely, practices that had a negative impact on project delivery should have been evolutionarily discarded, unable to find new projects to breed with.
Unfortunately, exactly the opposite seems to be occurring in project management. Each year brings new metrics to measure against, new design tools to learn and code against, and new reporting structures designed to bring more "control" to the process. This would all be well and good if it actually helped. But in reality, all the morass of design methodologies and time-management techniques is doing is turning software development into a ponderous, uninspired, and uncertain venture. As a veteran of several recent projects, none of which I believe could be termed a success, I blame the following factors:
- The rose-colored glasses effect. Promoters of the methodologies never talk about the failures, only the successes. It's almost impossible to find case studies of projects that have failed, so the same failing methodologies get used over and over again. The project I'm just coming off of will probably tip the scales on the other side of $100 million by the time it's finally put out of its misery, but no one outside the company (or perhaps even in the other divisions of the same company) will ever hear of it. That's because the (probably justified) perception is that publicizing project failures is bad for the bottom line.
As a result, all reviews are glowing ones. There's nothing like the "double-blind" standards of review that drug trials use, just hand-picked stories of outright success from the lucky companies that manage to succeed, sometimes in spite of the management practices used.
- Most "best practices" in use today come from people who don't do "in the trenches" software development on a regular basis. These include techniques such as the Unified Modeling Language (UML) and design patterns. On one recent project I worked on, there were so many ossified documentation requirements that I could almost hear the incremental deforesting of North America each day. First there was a UI design document. Then a functional spec. Then a technical design document. And invariably, most of the interesting and complex questions didn't really get asked until coding had begun. But an accounting firm had sold the company this development methodology, and they were going to use it until the cows came home.
- Overspeccing. Companies hate risk. When they start a project, they want to know how much it will cost and how long it will take. Unfortunately, the idea that by spending month after tedious month analyzing the problem in complete detail you will somehow reduce the risk of implementation is a false one, discredited in late project after late project. First off, the really interesting problems and issues in development don't come out during initial requirements gathering (which, don't get me wrong, is a very important phase of project development). They come out during integration, when you discover who lied about their APIs and what nasty edge cases got missed. Frontloading all the brainpower into documents and designs that are of limited usefulness at the end of the day doesn't reduce risk; it just delays it.
- Modern development metrics (like Six Sigma) are the wrong ones for running most software projects. They were developed for doing the same thing over and over in a production environment, where a 0.5% defect rate is a big deal. But we're making software here, not widgets.
As a result, companies spend too much time trying to overdefine the problem in the abstract. One specific example of this is the current overdependence on UML. I recently attended a seminar called "Code Complete," in which the speaker (Steve McConnell) made the case that you should program into a language, not in a language. The difference is that when you program in a language, you think about the problem in terms of how the language can solve it, not in terms of what the best solution is. UML is a particularly bad programming language, and UML-based design methodologies make you design in terms of how UML lets you represent problems, not the best way to solve them.
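McConnell's distinction can be made concrete with a small Java sketch (a hypothetical illustration of mine, not an example from the talk). Suppose a design calls for a closed set of order states. A developer programming "in" early, pre-enum Java would settle for bare int constants, because that's what the language offers most readily; a developer programming "into" it builds the typesafe-enum pattern the design actually calls for:

```java
// Hypothetical illustration of "programming into" a language.
// Pre-enum Java has no enum type, so we construct one: a class whose
// only possible instances are the named constants below.
final class OrderStatus {
    public static final OrderStatus PENDING   = new OrderStatus("PENDING");
    public static final OrderStatus SHIPPED   = new OrderStatus("SHIPPED");
    public static final OrderStatus DELIVERED = new OrderStatus("DELIVERED");

    private final String name;

    // Private constructor: no code outside this class can mint new states.
    private OrderStatus(String name) { this.name = name; }

    @Override
    public String toString() { return name; }
}
```

Because the constructor is private, the three constants are the only instances that can ever exist, so identity comparison is safe and invalid states are unrepresentable, guarantees that bare int constants can't provide.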
The other downside to this approach is that it tends to lock you into commercial, proprietary development platforms, because they are the only ones that support the methodology you've bought into. So instead of using Apache and Tomcat, you use IBM's WSAD because it has all the hooks to integrate UML development. And then the temptation, because you paid so much for the platform, is to use all of its features whether you need them or not. So, to take one example, Enterprise JavaBeans get used even though there's no need for them.
Part of this trend toward overspeccing, of course, is due to outsourcing. If you're going to ship requirements overseas to be coded on the cheap by developers who aren't under your direct control, you have to spec things out in exacting detail. But this requires that a great deal of the project's flexibility and adaptability be removed at an early date. It's like having to plan every turn and stop of a 2,000-mile road trip before you've left the driveway. If there's construction or an accident along the way, you don't have the ability to reroute.
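One lightweight alternative to ever-more-exacting paper specs is to write down a vendor's documented behavior as executable checks rather than prose, so the lies and missed edge cases surface at integration time instead of in production. A hypothetical sketch (the parseQuantity rule and its edge cases are invented for illustration, not taken from any real API):

```java
// Hypothetical sketch: the vendor's spec claims this field is "always a
// plain integer," but integration reveals edge cases the documents never
// mentioned: nulls, empty strings, and surrounding whitespace. Encoding
// the observed contract as code makes the discrepancy visible and testable.
final class QuantityParser {
    static int parseQuantity(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return 0; // observed behavior: an absent quantity means zero
        }
        return Integer.parseInt(raw.trim());
    }
}
```

Unlike a design document, a check like this fails loudly the day the upstream API changes.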
Software engineering isn't a production process. It's somewhere between an R&D endeavor and a creative pursuit. There are certainly limited subsets of software projects (largely involving things such as customizing existing software to a particular client's needs) that can be characterized and measured using these kinds of metrics, but most large software projects are explorations of new ground.
I'm particularly reminded of a quote from the HBO series "From the Earth to the Moon," involving the development of the Lunar Module. Essentially, one of the characters explains that it shouldn't have been surprising that the LM fell behind schedule and over budget: they were doing something new, something that had never been done before, and it required the development and integration of new technologies.
Every large software project is like that. There are unknowns to discover, things that even the end users don't fully understand. There are poorly or inaccurately documented APIs to integrate with. There are complexities to unravel and possibilities that should be explored, even if they lie outside the original bounds of the problem. It is the peak of hubris to set a project schedule and budget before you've taken the first step in this strange new land.
So what is my prescription for solving these problems? First, trust your developers and get them involved in the problem earlier. Any seasoned developer worth their price can spot the potential gotchas and risks the second they see them, and can suggest alternatives that reduce a task's complexity without reducing functionality.
Next, spend the time to do some prototyping. Even if you eventually want a rigidly specced-out project, take the time first to play around with third-party interfaces and dummy up some functionality.
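In Java terms, dummying up functionality can be as simple as coding the prototype against an interface and dropping in a stub where the third-party system will eventually go. (TaxService and the flat 5% rate below are invented for illustration, not a real vendor API.)

```java
// Hypothetical prototyping sketch: the real tax vendor isn't integrated
// yet, so a stub stands in behind the same interface the production
// implementation will use later.
interface TaxService {
    double taxFor(double amount, String state);
}

// Dummy implementation: just enough behavior to exercise the calling code.
class FlatRateTaxStub implements TaxService {
    public double taxFor(double amount, String state) {
        return amount * 0.05; // flat 5%, regardless of state
    }
}

class CheckoutPrototype {
    private final TaxService taxService;

    CheckoutPrototype(TaxService taxService) { this.taxService = taxService; }

    double totalWithTax(double subtotal, String state) {
        return subtotal + taxService.taxFor(subtotal, state);
    }
}
```

The prototype lets you exercise the interesting flows, and ask the interesting questions, weeks before the real integration exists; the stub gets thrown away once it has done its job.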
Finally, take a good hard look at every piece of paper (virtual or physical) that you're requiring your team to produce in furtherance of the goal. Are you creating it because it fulfills a real need in getting the job done, or because someone told you that you needed to have it? Do the same for your development methodologies and metrics. If they bring genuine value, wonderful. But if they're followed just because that's the way things are done, you're probably wasting time and driving your technologists crazy to enrich some consulting firm.