Managing the Uncertainty of Legacy Code: Part 5
On June 3, 2020, TechTalk hosted a meetup at which I spoke about managing the various kinds of uncertainty that we routinely encounter on projects that involve legacy code. I presented a handful of ideas for how we might improve our practices related to testing, design, planning, and collaboration. These ideas and practices help us with general software project work, but they help us even more when working with legacy code, since legacy code tends to add significant uncertainty and pressure to every bit of our work. Fortunately, we can build our skill while doing everyday work away from legacy code, then exploit that extra skill when we work with legacy code. Here are some questions that came up during this session and some answers to those questions.
You’ll find the remaining articles in this series here as they are released.
Some Questions About Microcommitting
When microcommitting, how do you know that you need to ‘roll back’ if you don’t have tests to tell you that you messed up?
There is no magic here: somebody somehow notices that something somewhere went wrong. It doesn’t matter how we notice. The information can come from anywhere: customer reports, manual testing, code reviews, monitoring the production environment. Microcommitting doesn’t help you know when you made a mistake, but it gives you more options to recover from that mistake: you don’t have to throw away as much code to undo or fix the mistake.
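As a concrete sketch of what that recovery can look like at the command line (the commit IDs here are hypothetical):

```bash
# List the recent microcommits to find the one that introduced the mistake.
git log --oneline -10

# Undo just that one tiny commit; every other microcommit survives.
git revert 4f3a2b1

# Or, if the last few commits all went wrong, discard only those
# by resetting to the last good commit.
git reset --hard 9c8d7e6
```

The smaller the commits, the less work any of these commands throws away.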
When you apply microcommitting and refactoring, do you also recommend reviewing pull requests on a commit-by-commit basis?
No, I don’t recommend this in general. I want the freedom to commit frequently because I want a powerful mechanism for recovering from mistakes; I don’t want to feel distracted by the question “Will this make my contribution harder or easier to understand?” I don’t expect anyone to read all these microcommits!
We might have to change how we review contributions, which might not fit the default GitHub pull request model very well. Remember that GitHub is not `git`! We can always look at the diff between commit 37 and commit 42 as a single unit of work and review it that way. `git` doesn’t stop us, even if GitHub doesn’t make that easy to do.
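For example, with placeholder references standing in for commits 37 and 42:

```bash
# Review everything between the two commits as one unit of work,
# ignoring the individual microcommit boundaries.
git diff <commit-37> <commit-42>

# Or read the same range one microcommit at a time, if you prefer.
git log --patch <commit-37>..<commit-42>
```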
If your model for contributing changes to the project forces you to bundle together (squash) these microcommits, then you need to consider a difficult tradeoff: do the benefits of that contribution model outweigh the benefits of the more-powerful recovery mechanism? You might not know until you try both ways and examine the results. Legacy code offers very few moments of certainty.
A Kind of Case Study
How do you approach a code base of about 2 million lines of code that has no tests at all and no continuous build/deployment? It is fairly old and it contains a lot of concepts that are mostly forgotten. It is written in C++ and uses C++Builder 6.0. It does not even have an MVC architecture and there is a huge dependency on an old-but-good VCL from Borland. Could you please outline briefly, from a conceptual point of view, the most important steps, in chronological order, to modernize similar projects?
For this kind of question, I almost always need more details, and since I can’t ask the original questioner, I have to make some guesses. I will get some of those wrong. I will describe a strategy, but I present it not as a recipe to follow, but rather as a starting point for discussion!
First, I would add as little code as possible to this legacy system. I would try to wrap this legacy system in a protocol that allows me to add behavior using the tools and environments that I would already choose for new development, rather than relying on unfamiliar or difficult tools and environments. This probably includes adding some protocol to the legacy system that hides the technology choices that we want to get away from. I use this to try to change the legacy system as little as possible, hoping that it eventually simply rots and falls away, or at least stabilizes to the point where we never need to change it again.
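To make “wrap the legacy system in a protocol” concrete, here is a minimal sketch, sticking to C++98-era features (C++Builder 6 predates C++11) and using hypothetical names:

```cpp
#include <string>

// A protocol that hides the technology choices we want to get away from.
// New code depends only on this interface and never sees VCL types.
class ReportPrinter {
public:
    virtual ~ReportPrinter() {}
    virtual void printReport(const std::string& reportName) = 0;
};

// The one place that still knows about the legacy system.
class VclReportPrinter : public ReportPrinter {
public:
    virtual void printReport(const std::string& reportName) {
        // ...delegate to the existing VCL/C++Builder code here...
    }
};
```

New behavior then grows on the `ReportPrinter` side of the boundary, while the VCL-facing adapter changes as rarely as possible.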
For C++ in particular, I frequently see programmers who don’t know how their build works. I would invest early in solving this problem, because if you don’t know how the build works, then you don’t know how to add files to the project, and that limits your options for refactoring. If you plan early to replace all the C++ with a different programming environment, then don’t bother with this idea; but if you need to refactor the C++ code and keep it (for a long time) in C++, then invest some time in understanding enough about the build to be able to freely add and move files around. Don’t let yourself remain a prisoner of a build system that you don’t understand. This tactic always feels very slow, but it almost always helps significantly. I often start by creating a new build environment that I understand better, then migrating code from the old build to the new build as I change it. Here, I’d use the strangler approach on the build, rather than the code: I might have special directories inside the source code that only the new build system touches, while the old build system touches only the rest of the source tree.
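As a minimal sketch of strangling the build, assuming we choose CMake for the new build and a hypothetical src/rescued/ directory that only the new build touches:

```cmake
# A new build that I understand, covering only the rescued code.
# The old C++Builder project continues to build everything else.
cmake_minimum_required(VERSION 3.10)
project(RescuedSubsystems CXX)

# Only the new build system touches this directory.
add_library(rescued
    src/rescued/report_printer.cpp)
target_include_directories(rescued PUBLIC src/rescued)

# Tests run against the rescued code without touching the legacy build.
add_executable(rescued_tests test/rescued_tests.cpp)
target_link_libraries(rescued_tests rescued)
```

As files migrate from the legacy build into src/rescued/, the old build shrinks until, with luck, it builds only the parts we never change.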
Next, I would turn my attention to what happens when I need to change the legacy system. I would try to break it into a small number of connected subsystems (maybe 3-7 of them) that use each other strictly over purely abstract protocols (interfaces/pure abstract classes). I could probably do this well if I understood the general intent of the system. I would then select the part of the system that probably needs to change most often and focus on isolating it from the rest. I could then work on building and testing this part of the system independently of the rest. In the process, I might even eventually turn it into its own project! I could use this strategy recursively to isolate and rescue smaller parts of the system one by one, where I could either rewrite them in the existing language or strangle them in a new programming environment. Some of those pieces might become small and clear enough that I could refactor them in place; that depends on my skill with C++. Some legacy systems have high-level subsystems that are easy to identify, but tangled together. This approach focuses on untangling the one part that we most likely need to change when we need to change it.
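To illustrate “strictly over purely abstract protocols”, here is a small hypothetical sketch: billing talks to inventory only through an interface, so we can build and test billing without linking the other 2 million lines.

```cpp
#include <string>

// The inventory subsystem, as billing sees it: a purely abstract protocol.
class Inventory {
public:
    virtual ~Inventory() {}
    virtual int unitsInStock(const std::string& sku) = 0;
};

// Billing depends only on the protocol, never on inventory's internals,
// so we can build and test it against a fake Inventory.
class Billing {
public:
    explicit Billing(Inventory& inventory) : inventory(inventory) {}

    bool canFulfill(const std::string& sku, int quantity) {
        return inventory.unitsInStock(sku) >= quantity;
    }

private:
    Inventory& inventory;
};
```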
If you want to try to rewrite some part of the system using MVC, then I recommend selecting one feature and rewriting it in your favorite language in order to challenge your ideas about how that would work. This exercise might help you see how to rearrange the legacy code to isolate one Controller from the rest of the system. You might then notice that you can write that in C++ well enough or that you can write something in C++ that acts as a bridge to your newly-written Controller. This is a specific instance of the strangling strategy.
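A sketch of that bridge, again with hypothetical names: the legacy UI keeps calling a small C++ adapter, which forwards the request to the newly-written Controller behind a pure abstract protocol.

```cpp
// The newly-written Controller, as the legacy side sees it.
class OrdersController {
public:
    virtual ~OrdersController() {}
    virtual void showOrder(int orderId) = 0;
};

// The bridge: a legacy VCL event handler calls this instead of the
// old tangled code. Behind the protocol, the Controller might live
// in C++, in another process, or in another language entirely.
class LegacyOrdersBridge {
public:
    explicit LegacyOrdersBridge(OrdersController& controller)
        : controller(controller) {}

    void onShowOrderClicked(int orderId) {
        controller.showOrder(orderId);
    }

private:
    OrdersController& controller;
};
```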
From here, I use the usual strategies: some exploratory refactoring and some feature-oriented refactoring. This involves an almost random combination of trying to rescue tiny parts of the system and trying to break the big system into 3-7 smaller, better-isolated pieces. These are the testing and design techniques that I teach in Surviving Legacy Code. I can’t advise you on how much of each of these kinds of work to do; I only know how to start, measure the results, then adjust the strategy. By this point, at least, you have made a commitment to change the legacy C++ system as little as possible, to simplify the build system for those parts that you must change, and to try to split the larger system into a handful of smaller, better-isolated subsystems that increase your options for replacing them. In some cases you can replace them with better C++ and in others you can replace them with better code in another programming language where your organization has more competence. Most importantly, you hope to take steps every week towards the day when the legacy C++ system becomes stable enough that you never have to change it again. Then maybe one day you merely throw it away. Or you don’t. Both outcomes can work.