
Unit Test When? Why?

The topic of when to unit test came up in a recent discussion with some people who are smarter than me. I’d just finished listening to a great presentation on Software Gardening. I hadn’t considered the option of Test First as I’ll describe it below. I’m not promoting the concept, but I thought it was interesting. Also, I’ve done what I’m calling Test Now, but I’ve never really heard it referred to as anything but another breed of Test After. The distinction is meaningful, and I think it deserves to be called out. I’m going to go out on a limb and make the assumption that you understand the importance of unit testing if you are reading this.

Why have this discussion? Because we want to be lean and minimize large-scale rework, and because we want to understand the benefits and downfalls of each approach. In theory, there will be “best” options for a given set of circumstances. Let’s see what happens.

The Test When Continuum

I’m introducing the Test When Continuum as a way to help explain the commonalities and differences of these various test practices. I’m comparing them by the absolute amount of time that passes between writing the unit test code and writing the production code it covers.

Assuming we have a story or feature with 10 testable units, each taking one unit of time to complete, we get the following dataset.
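If you want to play with the numbers yourself, here’s a rough Python sketch of one way to model that gap. The timeline model is deliberately simplified and my own, not the original chart: each unit’s code and its test take one time unit apiece, and the gap is the test’s write time minus the code’s write time (negative means the test came first).

    # A simplified model of the test-to-code gap for a 10-unit story.
    # Writing the code for a unit and writing its test each take 1 time unit.
    UNITS = 10

    def average_gap(test_times, code_times):
        """Average signed gap: negative means tests were written before the code."""
        return sum(t - c for t, c in zip(test_times, code_times)) / UNITS

    # Test First: every test is written up front, all of the code comes afterward.
    test_first = average_gap(range(UNITS), range(UNITS, 2 * UNITS))

    # Test Driven: the test for each unit is written immediately before its code.
    test_driven = average_gap([2 * i for i in range(UNITS)],
                              [2 * i + 1 for i in range(UNITS)])

    # Test Now: the test for each unit is written immediately after its code.
    test_now = average_gap([2 * i + 1 for i in range(UNITS)],
                           [2 * i for i in range(UNITS)])

    # Test After: all of the code is written first, every test comes at the end.
    test_after = average_gap(range(UNITS, 2 * UNITS), range(UNITS))

    for name, gap in [("Test First", test_first), ("Test Driven", test_driven),
                      ("Test Now", test_now), ("Test After", test_after)]:
        print(f"{name:11} average test-to-code gap: {gap:+.0f} time units")

With those assumptions, Test Driven and Test Now sit closest to zero, which is exactly where the continuum says we want to be.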

Given the assumption that testing beforehand is always best, and the data above, our test continuum looks like this:

Test First

First off, I misspoke when I said I’ve never heard of this approach. It was proposed by the software architect team at one of my workplaces in the hazy days of the distant past. I just hadn’t heard it called Test First Development, or if I did, I blocked it out. At any rate, in this particular gig, there was a lot of top-down design (or there was meant to be). The proposal was that if the architects wrote unit tests for their platform features up front, developers could just code monkey up and make those tests pass.

I was still learning at the time, and the idea seemed intriguing yet slightly wrong, though I couldn’t figure out why. For whatever reason, my Spidey-sense was tingling. I chose to ignore it, and as it happened, the architects didn’t have the time or discipline to take this approach either. So no harm was done.

Looking back, I’m sure my feeble brain was trying to tell me a few things:

  1. Throwing design over the fence with a set of requirements creates pretty much every problem we see in waterfall-type project management of software. Stage gates, anyone?
  2. Even if the developers had taken requirements and written every test possible, the approach
    • Doesn’t allow any flexibility in design when discovery happens mid-development
    • Creates a false imperative to stick with the original design no matter the cost
  3. There had to be a better way.

Consider your tests as eggs and the code as the critters inside them, and the idiom writes itself: don’t count your chickens before they hatch.

Test Driven

Commonly referred to, by me at least, as the Holy Grail of testing. It solves all of the problems above. Does this mean the world should just give up every other type of development and convert to TDD (yep, this is just a link to the Wikipedia article on TDD)? Under the right circumstances, maybe!

I’ll say this: the test-driven approach leads to the best and most solid code I’ve seen in most cases. If you are doing Object-Oriented Programming (not your second-semester college professor’s OOP, but modern OOP favoring SOLID, composition over inheritance, minimal state, and immutability), if you follow Red/Green/Refactor by writing the simplest test and the smallest bit of code that makes it pass, and if you then refactor responsibly with the guidance of a skilled coder, you’re probably going to produce clean, SOLID code. There are additional benefits, like semantic stability, that I won’t get into in detail.
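Here’s a minimal sketch of one Red/Green/Refactor loop in Python. The example and names (leading_digit and its tests) are purely illustrative, not from any particular codebase: the test is written first and fails (Red), the simplest implementation makes it pass (Green), and then both code and tests are fair game for cleanup (Refactor).

    # Red: write the simplest failing test for the behavior you want next.
    # Green: write just enough code to make that test pass.
    # Refactor: with the tests green, clean up the code and the tests.
    import unittest

    def leading_digit(n: int) -> int:
        """Return the most significant digit of a positive integer."""
        while n >= 10:          # the simplest thing that makes the tests pass
            n //= 10
        return n

    class LeadingDigitTests(unittest.TestCase):
        def test_single_digit_is_returned_as_is(self):
            self.assertEqual(leading_digit(7), 7)

        def test_multi_digit_returns_first_digit(self):
            self.assertEqual(leading_digit(4302), 4)

    if __name__ == "__main__":
        unittest.main()

The point is the rhythm, not the example: each pass through the loop keeps the gap between a test and its production code about as small as it can get.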

Test Driven works well in most general cases as long as you are operating in a greenfield/new code environment.

Test Now

What’s this? Another made-up name. I’m on a roll today with the Test When Continuum and Test Now. I must be a writer to make up so much garbage.

I feel like any kind of testing done after a material code change is always referred to as “Test After”. Let’s change this because there are varying strengths and weaknesses in the approaches.

The idea of Test Now is that after we complete a small, testable unit of code, we immediately write a test and make it pass. Then we look for opportunities to refactor our code and tests (there’s a small sketch of this after the list below). Simple. Very similar to TDD, but perhaps with a couple of benefits under certain circumstances:

  1. You’re working with people unfamiliar/uncomfortable with TDD and they are driving.
  2. Your environment is exceedingly brownfield or “legacy” (existing code without tests and hard to test).
    • I know what you’re gonna say: “Just read Working Effectively With Legacy Code by Michael Feathers and all of your problems will be solved!” (Fantastic book by the way.)
    • The problem is, getting good at TDD takes time, practice, and diligence under the best of circumstances. Now we’re going to add learning how to effectively test legacy code to the workload at the same time. It’s a difficult thing to ask. Both skills are important, and prioritizing which one matters most right now can be difficult. I can’t tell you what would be best for your team.
  3. Your organization has just bought into the idea of Test After, and you want to move a step in the right direction.
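Here’s what that Test Now rhythm looks like in Python. The function and test names (normalize_username and friends) are mine, just for illustration: the production code for one small unit is finished first, and its test follows immediately, before moving on to the next unit.

    import unittest

    # The just-finished unit: a small, testable piece of production code.
    def normalize_username(raw: str) -> str:
        """Trim surrounding whitespace and lowercase a username."""
        return raw.strip().lower()

    # Written immediately after the code above, before starting the next unit,
    # so the gap between production code and its test stays small.
    class NormalizeUsernameTests(unittest.TestCase):
        def test_strips_surrounding_whitespace(self):
            self.assertEqual(normalize_username("  Alice "), "alice")

        def test_lowercases_mixed_case_input(self):
            self.assertEqual(normalize_username("BoB"), "bob")

    if __name__ == "__main__":
        unittest.main()

Once the test is green, both the code and the test are candidates for a quick refactor, and then it’s on to the next unit.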

Test Now gives quite a few of the benefits of Test Driven and is a nice place to stay for a while, but stick with those TDD katas. Keep practicing. Learn from Michael Feathers. In the meantime, Test Now is the second “Lean”est approach on our Test When Continuum.

Test After

Test After is similar to Test First in more ways than one, and I’ll use another egg idiom to make the point. Testing after a feature or story is complete puts all of your eggs in the same basket. Untested production code is fragile. All it takes is a couple of false assumptions, or maybe a sub-optimal design choice, and your tests reveal the need for a systematic overhaul of the feature or story. At best, you hobble along and put tech debt stories in the backlog to fix it up later. Hopefully, you have time to get to them.

Conclusion

If you hadn’t guessed, I favor testing as close as possible to completing increments of code. Before the code is written when possible, and after when it makes sense. The general benefits are flexibility and, usually, better design.

My gratitude to Ben Davidson, Craig Berntson, Kaleb Pederson, and Dwayne Pryce for their eyes and time reviewing this article!

Here’s an updated Test When Continuum: