Parkinson’s Law applied to Software Projects

While recently listening to a BBC podcast I learned about Parkinson’s law and realised how simply it explains why traditional software projects that work to deadlines fail so often. Simply put, the law states that "work expands so as to fill the time available for its completion".

This year I had to help work on an estimate used to promise a delivery date to upper management. We use SAFe at this organisation (where I work as a system architect in a troika managing an agile release train), so we did an agile estimate: it was based on previous features, we estimated complexity rather than, say, hours, and we added factors to handle risks plus additional reserves for other unknowns. The date we named was roughly 6 months after the date of the estimate, and it was based on work done in the 18 months prior. That date was communicated both to our 8 software development teams and to upper management, so the expectation of when we would finish had been set. According to my interpretation of Parkinson’s law, it doesn’t matter how much work there really is: if there isn’t enough time, you trim optional features or quality that nobody is really counting on; if there is too much time, you write more automated tests or little scripts here and there, the things you always wanted to do but never had time for. I’ve seen our various teams doing these things, sometimes both, depending on where they were in the timeline.
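To make the shape of such an estimate concrete, here is a minimal, purely hypothetical sketch in Java. The complexity points, velocity, risk factor and reserve are all made-up numbers, not the real figures from the project; the point is only to show how a promised date falls out of this kind of calculation.

```java
import java.time.LocalDate;

// Hypothetical sketch of the arithmetic behind a deadline promise.
// All numbers are illustrative, not the actual figures from the project.
public class DeliveryEstimate {

    public static void main(String[] args) {
        double complexityPoints = 400;   // summed complexity of the planned features
        double historicalVelocity = 20;  // points completed per week over the previous 18 months
        double riskFactor = 1.25;        // uplift for known risks
        double reserveWeeks = 4;         // additional buffer for unknown unknowns

        double baseWeeks = complexityPoints / historicalVelocity;
        double totalWeeks = baseWeeks * riskFactor + reserveWeeks;

        LocalDate estimationDate = LocalDate.of(2021, 1, 15);
        LocalDate promisedDate = estimationDate.plusWeeks(Math.round(totalWeeks));

        System.out.printf("Base effort: %.0f weeks, with risk and reserve: %.0f weeks%n",
                baseWeeks, totalWeeks);
        System.out.println("Promised delivery date: " + promisedDate);
    }
}
```

However carefully the factors are chosen, the output is still a single date, and once that date is communicated, Parkinson’s law starts to act on it.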

Note that the way we implement SAFe means we don’t micro-manage the work sprint by sprint; rather, we let the teams work, sync once a week to get information about progress, and then wait for the end of the 3-month programme increment (PI), when we review the PI objectives and overall progress. Anything that wasn’t completed spills over into the next PI, normally with further refinement before PI planning in order to better understand what is still left.

Now, two months prior to the deadline, we appear to be heading for a roughly on-schedule delivery, although we have started to trim features down to the bare minimum, and we are suddenly finding more and more things we didn’t think about. The devil is in the details, and much of it is only discovered as features are completed and tested, especially together with others that may have been finished for some time. That is quite typical in my experience: if you estimate well and give teams plenty of time to complete the work, they take the time up front to do a good job, probably "over-engineering" the solution. But you pay for that as you get closer to the deadline, because the time invested in that higher quality is then missing for the tasks that weren’t planned.

I’ve worked on two seriously micro-managed projects: one that I micro-managed myself as the architect and developer back in 2006, with a team of 3 other developers; the other, circa 2012, involved a number of teams, probably about 20 developers. By cracking the whip and micro-managing all the tasks, you can ensure that work done early in the project really is finished early, leaving room for the unplanned things that always turn up later in the project. Was either of them actually successful? That depends on how you measure success: they were certainly feature-complete, on time and on budget, and they appeared to have the required quality. However, both left the impression on those of us who looked after the work afterwards (me included) that they weren’t very well engineered. In the first one, we had a lot of duplicate code and didn’t take the time to factor out common parts (code and designs). In the second one, we hadn’t had the time to make allowances for future projects which were just around the corner. Both can be classed as technical debt. Both can also be classed as problems that should be addressed by future projects, if and when they come (ever worked on solutions which don’t make progress because you’re too busy engineering for the future?).

An alternative is to use feature toggles and actually release when you (everyone: the teams, the business, the users) are ready for it. Often the business will give reasons why this isn’t realistic, for example that they need to market the project and organise a launch, perhaps with expensive PR or training for users, or that final acceptance tests are needed, or that an entire inventory of legacy data has to be migrated at the time of the release. For me, those reasons no longer hold in 2021. Big software enterprises (think of the sites we are confronted with daily) don’t launch apps like this; rather, they go for soft or viral launches. Training is done with self-study videos and by embedding context-relevant tips and tricks into the application, gamifying it to encourage users to discover new functionality. Acceptance testing is something we do on a daily basis anyway, with our continuous deployment and releasing. Regarding data migrations: expose the data so it is usable, decouple the migration from the release, and split it so that it can be migrated step by step, mitigating the risks associated with big-bang releases. It’s only really the launch party which suffers if there are delays 😜
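For readers who haven’t used feature toggles, here is a minimal sketch of the idea in Java. The names (FeatureToggles, "new-checkout", the controller and its flows) are hypothetical, not from any particular library; the point is simply that code can be deployed continuously but only switched on when everyone is ready, decoupling deployment from launch.

```java
// Minimal feature-toggle sketch. FeatureToggles is a hypothetical interface;
// in practice it might be backed by a database, a config server or a toggle service
// so that the flag can be flipped without a redeployment.
interface FeatureToggles {
    boolean isEnabled(String featureName);
}

public class CheckoutController {

    private final FeatureToggles toggles;

    public CheckoutController(FeatureToggles toggles) {
        this.toggles = toggles;
    }

    public String checkout(String basketId) {
        // The new code ships "dark": it is deployed with every release,
        // but only runs once the toggle is switched on for some or all users.
        if (toggles.isEnabled("new-checkout")) {
            return newCheckoutFlow(basketId);
        }
        return legacyCheckoutFlow(basketId);
    }

    private String newCheckoutFlow(String basketId) {
        return "new:" + basketId;   // placeholder for the new implementation
    }

    private String legacyCheckoutFlow(String basketId) {
        return "old:" + basketId;   // placeholder for the existing implementation
    }
}
```

The same pattern applies to the migration point above: if the new code can read both old and new data behind a toggle, the migration can proceed in slices rather than as one big-bang cut-over.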

One thing is for sure: I stated at the time that I was dead against naming a deadline, and that I never want to do it again. The next time anyone suggests it, I am going to quote Parkinson’s law at them!

Copyright ©2021, Ant Kutschera