Over the last couple of large projects I have noticed a trend: software developers/engineers/architects like to come out with statements like “that code is shit”. I’m not just talking about me saying it, or about how they react to the code that I write (come on, I write great code!). Sweeping statements like this crop up all the time on these projects, about everyone’s code. And managers then perpetuate these statements.
I think the reason is easy to understand. When a developer takes over responsibility for some code they want to make it clear that any problems with that code are unrelated to them. They don’t want to take responsibility for the code which someone else has written. Because writing code is inherently creative and the same functionality can be written many different ways, you are almost guaranteed that a developer taking over some code “would have written it differently”.
What I have also noticed is that the developers coming out with statements like this do not know, or do not want to understand, the conditions under which the code was originally written. As an architect, I realise there are many, many factors which influence the creation of code, budget being a major one. What some developers also do not fully appreciate is that the perfect implementation not only doesn’t exist (because every developer would do it differently) but that it often isn’t required. The perfect solution considers many cases which don’t need to be programmed, as they “may” be required in the future, but currently are not. From a financial point of view it is possible to determine whether or not to implement functionality in software (see my previous posting). So on that basis, a developer should work to a simple 80/20 rule and implement something that will suffice, as opposed to something that will exist unchanged for the next quarter of a century because of its greatness.
Often developers who claim that the code they are inheriting is poor want to refactor it. They give reasons such as that the enlightened manager/team-leader/developer understands that refactoring is part and parcel of modern coding. Or perhaps they argue that without refactoring or rewriting, the code is unmaintainable. But is that really true? Does experience show that once a project has gone into production there are many bugs or many change requests? During maintenance, what proportion of code will ever actually be changed because of bugs or change requests? The project I am working on now has some code in it which we have not touched in 18 months (since going into production). The code works and, although today we do not fully understand it, importantly we still have its source! This code is without a doubt poor. It has relatively little documentation. We have a list of all changes that are likely to come in the next years (perhaps 5 years ahead), none of which is related to this code. The intended lifespan of the code is at least another 20 years. After 18 months in production there have been zero reported bugs related to this code. So should we refactor in order to optimise the code and make it a nicer design so that we understand it today? I am of the opinion that we should not. Doing so would not add any value.
Indeed, does a developer need to fully understand the code in order to be capable of maintaining it? I am of the opinion (and please feel free to disagree) that a good developer doing maintenance should be capable of maintaining code that they currently do not fully understand and which has been implemented using different styles/patterns/techniques than they would have used. In fact, I would go as far as saying it is imperative for a maintenance developer to be able to do this. The reason is simple: to keep the maintenance budget to a minimum. At the time of going to production, you almost certainly do not know the areas of code which will need maintaining due to bugs. You may be aware of future change requests or further project phases where changes will be made. The result is that a maintenance developer should not start refactoring code just to make it “more correct”, unless that developer intends to support the software for the rest of its life. If that is not the case (and in my experience most developers move on to different projects within three years), then changing the code is useless, as the next maintenance developer to come along will simply claim that the code is unmaintainable and poorly implemented and want to start the refactoring all over again.
The skill of being able to inherit code and appreciating that at that time you do not need to fully understand it is, in my opinion, extremely important for reducing maintenance costs. The key is “compartmentalisation”. A typical example in maintenance is that you will often come across code which is duplicated. The gut reaction is to refactor it, because a “law” of OO is to have code reuse. How often though does it happen that during the refactoring you realise that getting the common code out is painful and not optimal? Often. You might even end up having to build in a workaround (guaranteed to cause the next maintenance developer to claim you wrote crap code). And certainly, such refactoring may introduce new bugs, requiring it to be fully regression tested.
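To make the workaround point concrete, here is a hypothetical sketch (all function names and the margin figure are invented for illustration, not taken from any real project): two near-duplicate functions, and the “obvious” shared version that the refactoring produces.

```python
# Two near-duplicate formatters, as often found in inherited code.
# Each takes a list of (quantity, unit_price) tuples.

def format_invoice(lines):
    total = sum(qty * price for qty, price in lines)
    return f"INVOICE total={total:.2f}"

def format_quote(lines):
    total = sum(qty * price for qty, price in lines)
    # quotes add a 10% contingency margin -- the one real difference
    return f"QUOTE total={total * 1.10:.2f}"

# The refactoring extracts the common code, but the margin forces a
# mode parameter into the shared function. That extra knob is exactly
# the kind of workaround the next maintainer will point at and call
# crap code -- and every caller now needs regression testing.

def format_document(kind, lines, margin=1.0):
    total = sum(qty * price for qty, price in lines) * margin
    return f"{kind} total={total:.2f}"
```

For example, `format_document("QUOTE", [(2, 5.0)], margin=1.10)` produces the same output as `format_quote([(2, 5.0)])`, but only because the caller remembers to pass the right margin; the duplication has been traded for coupling.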
The solution, then, is to “compartmentalise” the code. The maintenance developer should become at ease with the situation and make a mental note (or even, erm… a note in a document) that this situation exists. Without knowing whether this code has a problem with bugs, and without being able to anticipate any changes which may come, you cannot guarantee that the changes you make during refactoring will be optimal. However, by understanding such code and having several related “compartments”, you can plan correct refactoring should a bug or change come your way which affects that code.
What I am about to write now is controversial, but please read it with an open mind. If you make the assumption that your maintenance developers can compartmentalise and live with code duplication, then during the project phase where you implement your code, you can use compartmentalisation to your advantage: parallel development. If you have two developers implementing similar things and you get them to work together to create a single design, it will cost you more money. Firstly, because of the communication required (the positive communication as well as the negative, whereby they disagree and spend time compromising). Secondly, because while one does the implementation of the common code, the other will be potentially idle. Thirdly, a common design may not be optimal, because of the compromises made to fit both/all situations.
Hey, if worst comes to worst, at least if a bug turns up in one half of some duplicated code, the users will only see it once and not in every instance of some commonly used core code. There is a rumour in our office that aeronautical systems are developed in duplicate, in parallel, by two teams who do not communicate. This ensures that critical systems can continue to run if a bug crops up in one of them… Of course, you would actually need three such systems in order to be able to determine which one is actually displaying the bug.
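That three-way arrangement is essentially 2-out-of-3 majority voting. As a minimal sketch (not the design of any real avionics system), three independently developed implementations compute the same value and a voter accepts the answer that at least two of them agree on:

```python
from collections import Counter

def vote(a, b, c):
    """Return the value at least two of the three inputs agree on.

    If all three independently developed implementations disagree,
    there is no way to tell which one is showing the bug.
    """
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all three implementations disagree")
    return value
```

With only two systems, `vote` would have nothing to break a tie with; the third copy exists purely so a single buggy implementation can be outvoted.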
Anyway, if you have got this far without claiming I am a nut case, then congratulations for having an open mind. However, please realise I am not advocating code duplication for the sake of it. As a designer or architect you should be in a position to determine where code duplication might occur and to plan for it well enough that your team can work efficiently and use common components as required. But if your plan cannot efficiently include such a common component, or its complexity means that at the planning stage you are not able to determine the usefulness of common code, then don’t get caught up in the “code reuse is imperative” mantra that OO brought us. When OO was invented, the idea of code reuse was first at the method level – OO empowered programmers to reuse functions very easily. Making a law that code duplication is illegal simply constrains management/architects/designers from using their brains to consider the options (and be creative, which after all puts the fun back into software development).
Copyright (c) 2009 Ant Kutschera