Unit Test When? Why?

The topic of when to unit test came up in a recent discussion with some people who are smarter than me. I’d just finished listening to a great presentation on Software Gardening. I hadn’t considered the option of Test First as I’ll describe it below. I’m not promoting the concept, but I thought it was interesting. Also, I’ve done what I’m calling Test Now, but I’ve never heard it referred to as anything but another breed of Test After. The distinction is meaningful, and I think it deserves to be called out. I’m going to go out on a limb and assume that, since you are reading this, you understand the importance of unit testing.

Why have this discussion? Because we want to be lean and do the minimal amount of large-scale rework, and because we want to understand the benefits and drawbacks of each approach. In theory, there will be “best” options for a given set of circumstances. Let’s see what happens.

The Test When Continuum

I’m introducing the Test When Continuum as a way to explain the commonalities and differences of these various test practices. I’m ordering them by the absolute amount of time that passes between writing a unit test and writing the production code it covers.

Assuming we have a story or feature with 10 testable units taking one unit of time to complete each, we get the following dataset.
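To make that concrete, here’s a rough sketch of how such a dataset could be generated. The timing values are illustrative assumptions on my part, not measurements:

```python
# Illustrative model of the Test When Continuum: for each approach,
# how far apart (in abstract units of work) are a unit's test and
# its production code written? All numbers are assumptions made
# purely for illustration.

UNITS = 10  # testable units in the story/feature, one time unit each

def gaps(approach):
    """Absolute time between writing each unit's test and its code."""
    if approach == "Test First":
        # All tests written up front (times 0-9), all code later (10-19).
        return [abs((UNITS + i) - i) for i in range(UNITS)]
    if approach == "Test Driven":
        # Each test written immediately before its bit of code: tiny gap.
        return [0.5] * UNITS
    if approach == "Test Now":
        # Each test written immediately after its bit of code: tiny gap.
        return [0.5] * UNITS
    if approach == "Test After":
        # All code written first (times 0-9), all tests later (10-19).
        return [abs((UNITS + i) - i) for i in range(UNITS)]
    raise ValueError(approach)

for name in ("Test First", "Test Driven", "Test Now", "Test After"):
    g = gaps(name)
    print(f"{name:11s} average test/code gap: {sum(g) / len(g):4.1f} units")
```

Notice that Test First and Test After produce the same large gap; only the order of test and code differs. That symmetry comes up again below.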

Given the assumption that testing beforehand is always best, and the data above, our test continuum runs from Test First, through Test Driven and Test Now, to Test After.

Test First

First off, I misspoke when I said I’ve never heard of this approach. It was proposed by the software architect team at one of my workplaces in the hazy days of the distant past. I just hadn’t heard it called Test First Development, or if I did, I blocked it out. At any rate, in this particular gig, there was a lot of top-down design (or there was meant to be). The proposal was, if the architects wrote unit tests for their platform features up front, developers could just code monkey up and make those tests pass.

I was still learning at the time, and the idea seemed intriguing, if slightly wrong, and I couldn’t figure out why. For whatever reason, my Spidey-sense was tingling. I chose to ignore it, and as it happened, the architects didn’t have the time or discipline to take this approach either. So no harm was done.

Looking back, I’m sure my feeble brain was trying to tell me a few things:

  1. Throwing design over the fence with a set of requirements creates pretty much every problem we see in waterfall-type project management of software. Stage gates, anyone?
  2. Even if the developers had taken requirements and written every test possible, the approach
    • Doesn’t allow any flexibility in design when discovery happens mid-development
    • Creates a false imperative to stick with the original design no matter the cost
  3. There had to be a better way.

Consider your tests as eggs and the code as the critters inside the eggs, and I’ll drop some amazing English idiom on you: don’t count your chickens before they hatch.

Test Driven

Commonly referred to, by me at least, as the Holy Grail of testing. It solves all of the problems above. Does this mean the world should just give up every other type of development and convert to TDD (yep, this is just a link to the Wikipedia article on TDD)? Under the right circumstances, maybe!

I’ll say this: in most cases, the test-driven approach leads to the best and most solid code I’ve seen. If you are doing Object-Oriented Programming (not your second-semester college professor’s OOP, but modern OOP favoring SOLID, composition over inheritance, minimal state, and immutability), you follow Red/Green/Refactor, creating the simplest tests and the smallest bits of code to make them pass, and you refactor responsibly with the guidance of a skilled coder, you’re probably going to produce SOLID, clean code. There are additional benefits, like semantic stability, that I won’t get into in detail.
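If you haven’t seen the rhythm before, here’s a single Red/Green/Refactor pass in miniature. The feature (a discount calculator) is hypothetical, made up purely for illustration:

```python
# One Red/Green/Refactor pass, sketched in Python.

# RED: write the simplest failing test first. Running this test now
# would fail, because apply_discount doesn't exist yet.
def test_discount_applies_ten_percent():
    assert apply_discount(100.0) == 90.0

# GREEN: write the least code that makes the test pass.
def apply_discount(price):
    return price * 0.9

# REFACTOR: with the test green, improve the code safely. Here we
# redefine the function to lift the magic number into a named constant.
DISCOUNT_RATE = 0.10

def apply_discount(price):
    return price * (1 - DISCOUNT_RATE)

test_discount_applies_ten_percent()  # still passes after the refactor
```

The point is the cadence: the test exists moments before the code it drives, which is what puts Test Driven at the lean end of the continuum.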

Test Driven works well in most general cases as long as you are operating in a greenfield/new code environment.

Test Now

What’s this? Another made-up name. I’m on a roll today with the Test When Continuum and Test Now. I must be a writer to make up so much garbage.

I feel like any kind of testing done after a material code change gets lumped together as “Test After”. Let’s change this, because the approaches have meaningfully different strengths and weaknesses.

The idea of Test Now is that after we complete a small, testable unit of code, we immediately write a test and make it pass. Then we look for opportunities to refactor our code and tests (there’s a sketch of the loop after the list below). Simple. Very similar to TDD, but with a couple of advantages under certain circumstances:

  1. You’re working with people who are unfamiliar or uncomfortable with TDD, and they are driving.
  2. Your environment is exceedingly brownfield or “legacy” (existing code without tests and hard to test).
    • I know what you’re gonna say: “Just read Working Effectively With Legacy Code by Michael Feathers and all of your problems will be solved!” (Fantastic book by the way.)
    • The problem is, getting good at TDD takes time, practice, and diligence under the best of circumstances. Adding “how to effectively test legacy code” to the workload at the same time is a lot to ask. Both skill sets are important, and I can’t tell you which your team should prioritize first.
  3. Your organization has just bought into the idea of Test After, and you want to move a step in the right direction.
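Here’s the Test Now loop sketched out. The parsing function is hypothetical, invented just to show the shape of the workflow:

```python
# Test Now, sketched: complete a small testable unit, then immediately
# pin it with a test before moving on to the next unit.

# Step 1: complete a small, testable unit of code.
def parse_version(tag):
    """Turn a tag like 'v1.2.3' into a (major, minor, patch) tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

# Step 2: immediately write a test and make it pass.
def test_parse_version():
    assert parse_version("v1.2.3") == (1, 2, 3)

test_parse_version()

# Step 3: look for refactoring opportunities in the code and the test,
# then repeat the loop with the next small unit.
```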

Test Now gives quite a few of the benefits of Test Driven and is a nice place to stay for a while, but stick with those TDD katas. Keep practicing. Learn from Michael Feathers; there’s a small example of his characterization-test idea below. In the meantime, Test Now is the second “Lean”est approach on our Test When Continuum.
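For the legacy case specifically, one of Feathers’ core moves is the characterization test: pin down what the code currently does before you change it. A minimal sketch, with an entirely hypothetical legacy function:

```python
# A characterization (pinning) test, in the spirit of Working
# Effectively With Legacy Code: instead of asserting what legacy code
# *should* do, record what it *currently* does, so refactoring can't
# change behavior unnoticed. legacy_price_engine is hypothetical.

def legacy_price_engine(quantity, region):
    # Imagine this is tangled, untested production code.
    base = 9.99 * quantity
    if region == "EU":
        base *= 1.2  # some long-forgotten VAT rule
    return round(base, 2)

def test_characterize_price_engine():
    # These expected values were captured by running the code,
    # not derived from a spec; they pin the current behavior.
    assert legacy_price_engine(1, "US") == 9.99
    assert legacy_price_engine(3, "EU") == 35.96

test_characterize_price_engine()
```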

Test After

Similar to Test First in more ways than one; I’ll use another egg idiom to make the point. Testing after a feature or story is complete results in an “all your eggs in one basket” scenario. Untested production code is fragile: all it takes is a couple of false assumptions, or a sub-optimal design choice, and suddenly your tests are revealing the need for a systematic overhaul of the feature or story. At best, you hobble along and put tech-debt stories in the backlog to fix it up later. Hopefully, you have time to get to them.

Conclusion

If you hadn’t guessed, I favor testing as close as possible to completing increments of code: before the code is written when possible, and after when it makes sense. The general benefits are flexibility and, usually, better design.

My gratitude to Ben Davidson, Craig Berntson, Kaleb Pederson, and Dwayne Pryce for their eyes and time reviewing this article!

Here’s an updated Test When Continuum: Test Driven at the lean end, Test Now a close second, and Test First and Test After far out at the other end.

1 thought on “Unit Test When? Why?”

  1. I’ve rarely seen strictly enforced TDD done well. In general, I think it’s easy for new and less skilled developers to take it too far. It quickly turns into a game of 100% code coverage. In less mature development teams, without well-built test pipelines that include integration tests, automated acceptance tests, and regression tests, everything starts to look like it should be the subject of a unit test.

    Something I’ve seen time and time again is that a junior developer can be easily led astray by well-intentioned TDD zealots and the discovery of dependency injection. In the worst cases, I’ve seen this result in a massively inflexible mess of nonsense design: loads of anemic classes and equally anemic tests. The individual code units are readable, but they’re all so small that they have as many dependencies as they have lines of code. The tests are just as bad, consisting of unreasonable amounts of setup and mocking. The design becomes incomprehensible, and it’s an enormous effort to refactor because you can’t reuse any of the original unit tests; none of them test the right things. This is a self-reinforcing loop of bad design.

    When I’m writing a new feature, I tend to spend more time throwing away and rewriting the code I write as I write it, completely redesigning it and turning it inside out as I go, exploring the problem space through half-baked code. None of it is ready for a test at this point. Then it gets shaken out into a design that I think I like, and as it gets more complete I’ll start writing tests. I’ll write some tests for things I’ve already got, and for things that I know will fail. I’ll fix the failing tests and maybe write more tests. Then back to exploring and testing. When I’ve got a bug in an area that should be covered by a test, I always write a test first to confirm that it fails, then fix it and confirm that it passes. I avoid mocking as much as possible, and I put the interesting logic into pure functions whenever possible. Not every line of code is unit tested; if there is no business or code-contract value, it’s not worth testing. If you’ve separated all the interesting logic out and the code isn’t overly clever (and it shouldn’t be, if it’s so unimportant), then it should be self-explanatory, and a test would literally mirror the code. It can and will get tested by regression or integration tests in the pipeline.
