Godsend Project Management #4: Code Quality, Rush and Impatience

The biggest conundrum of this module for me was not some technical challenge; it was more high-level and related to project management – code quality versus time. Speaking to the Chief Technical Officer at Codemasters, I learned that striking that balance is a big challenge for development teams in any industry, so I think the problem shouldn't be taken lightly, and it definitely deserves as much thought as I'm giving it.

To summarise the issue at hand: we had clearly over-scoped our project, and we weren't very flexible about that fact either. This becomes evident when you look at the hours we worked – 10, 11 or 12 hours a day was a frequent schedule for us, and I'm pretty sure it was for the artists and the level designer as well.

It would have helped to re-scope the project and cut features, but I think we were all too naïve for that – we wanted the game to be great and we didn't want to take the cheap way out. So instead, we just worked more. A common opinion I read online on the subject is that crunching is not a project management strategy (http://www.smh.com.au/it-pro/blog/smoke-and-mirrors/crunch-time-is-not-a-project-management-strategy-20120330-1w2cj.html). Yet we still did it, and I would like to spend the rest of this entry on the side-effects of that.

When you have too much work to do and want to do it all, you obviously need to find a way to accomplish tasks in the least amount of time possible. This led to dirty tricks and hacks in our code base (a lot of which is detailed in my code submission entries) – all flavours of code duplication, hard-coded file paths, paths built by altering and concatenating strings at run-time, and so on.
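To make the kind of shortcut I mean concrete, here is a hypothetical sketch (not our actual code – the names and paths are invented) of a hard-coded, string-spliced asset path next to the slightly more disciplined version we should have been writing:

```cpp
#include <string>
#include <utility>

// The "fast" version: a hard-coded path patched together with string surgery at run-time.
std::string GetEnemyTexturePath(const std::string& enemyName)
{
    // Breaks as soon as the asset folder moves or the build runs on another machine.
    std::string path = "C:/Projects/Godsend/Assets/Textures/";
    path += enemyName + "_diffuse.png";
    return path;
}

// The slightly slower, less fragile version: route every path through one place,
// so there is a single point of change when the folder layout inevitably shifts.
class AssetPaths
{
public:
    explicit AssetPaths(std::string assetRoot) : m_root(std::move(assetRoot)) {}

    std::string Texture(const std::string& name) const
    {
        return m_root + "/Textures/" + name + ".png";
    }

private:
    std::string m_root; // supplied by configuration, not hard-coded
};
```

The first version is quicker to write in the moment; the second is the one you can still change safely two months later.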

As a result of such work, progress happens fast, but the code base becomes more and more unstable. It gets to the point where the development team becomes scared to change things and fix bugs – the side-effects are too difficult to manage. We pulled through since this was a small-scale project, but the code base was definitely approaching its tipping point – a couple more weeks of such development practice and it would have become completely unmanageable. I mentioned striking a balance between development speed and quality, and I have to admit that we prioritised speed too much, missing that balance in the process.

Of course, it could be that I am simply too paranoid and worn out by the end of it, and hence biased towards a more negative view of the matter. However, it is a hard fact that some of the shortcuts we took in development are bad practice. And if there is bad practice, then we need to be able to manage it and be aware of its limitations. Unfortunately, if we let the bad practices pile up, that quickly becomes impossible.

Furthermore, because of the insane rush at which we developed features, we partially neglected an industry best practice – peer reviewing our code. This is not to say that we didn't do it, but rather that we should have done it more. We were impatient and sprinted towards implementing features. That impatience needs to be managed, and I found that it's tough to manage it yourself. That's what peer reviews are for – others can be more critical of your code design than you can be yourself. The important thing is to recognise that you need that review and to take the criticism on board.

Of course, there's a balance to be struck here too – if the problem is absolutely trivial, then peer reviewing it would be a waste of time. I still don't know how to strike this balance. I suppose that for every Hansoft task we are assigned, we should peer-review the rough solution to that task. We have been reviewing ways of solving a problem – rough sketches, diagrams and so on. However, that does not necessarily translate into good code, and a problem can be trickier than it looks at a glance, so an initial high-level peer review might not capture the details and can lead to poorly-written solutions that cater to specific edge cases.

How I would approach it next time

By far the biggest project management lesson I've learnt is about scope. Our next project is going to be significantly longer, with twice the number of programmers working on it. This means the code base will grow roughly twice as fast. So if we reached a tipping point after roughly two months, we might reach it after slightly more than one month on the PS4 game. We cannot afford to let that happen, as that's only an eighth of the way through. Therefore, we need to slow down and develop well-understood, tested, "safe" and clean library code. If we can accumulate such a code base, we should be able to keep building on top of it and avoid this pitfall. After all, if we let the same thing happen again, it's going to be a lot more costly next time – we might end up spending loads of time just trying to refactor our code. And the more we let it grow, the harder it will be to untangle.
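As a hypothetical illustration of what I mean by small, tested, "safe" library code (this is not code from the project – the utility and its tests are invented for the example), even something as mundane as a path-joining helper becomes safe to build on once its behaviour is pinned down by cheap tests:

```cpp
#include <cassert>
#include <string>

// Joins two path segments with exactly one separator between them.
std::string JoinPath(const std::string& left, const std::string& right)
{
    if (left.empty())  return right;
    if (right.empty()) return left;

    const bool leftHasSep  = left.back()  == '/';
    const bool rightHasSep = right.front() == '/';

    if (leftHasSep && rightHasSep)   return left + right.substr(1);
    if (!leftHasSep && !rightHasSep) return left + "/" + right;
    return left + right;
}

int main()
{
    // Cheap, fast checks like these are what let us change the code without fear.
    assert(JoinPath("Assets", "Textures")   == "Assets/Textures");
    assert(JoinPath("Assets/", "/Textures") == "Assets/Textures");
    assert(JoinPath("", "Textures")         == "Textures");
    assert(JoinPath("Assets", "")           == "Assets");
    return 0;
}
```

It's the opposite of the hard-coded hacks from earlier: boring, predictable building blocks that nobody is afraid to touch.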

In terms of peer reviews, I think we should practise writing interfaces and peer-reviewing those, rather than high-level overviews of how to tackle a specific task. This might be slow and irritating at first, especially if the problem seems trivial, but I think we can get used to it. It's the only sustainable way I can see to run an eight-month project. It puts us at risk of being overly cautious and becoming too slow, but it's worth trying. We can then objectively judge the practice of being ultra-fast against the practice of being ultra-cautious. Only then can we find that golden balance between speed and quality.
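To make that concrete, here is a hypothetical example of the kind of interface I imagine putting up for review before any implementation exists (the class and its methods are invented for illustration, not something from our actual design):

```cpp
#include <string>
#include <vector>

// The declaration below is the artefact that would go up for peer review,
// before a single line of implementation is written.
class ISaveGameStore
{
public:
    virtual ~ISaveGameStore() = default;

    // Returns true on success; implementations decide where the data actually lives
    // (a local file, the console's save-data system, an in-memory stub for tests).
    virtual bool Save(const std::string& slotName, const std::vector<char>& data) = 0;
    virtual bool Load(const std::string& slotName, std::vector<char>& outData) const = 0;

    virtual std::vector<std::string> ListSlots() const = 0;
};
```

Arguing about responsibilities, ownership and naming at this level is cheap; arguing about them after three other systems depend on a finished implementation is not.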