I fartlek in your testing strategy's general direction

I see an interesting parallel in the debate over practices in the running world and practices in the software development world. Indulge me for a moment while I attempt to share my thinking.

Mileage versus Code Coverage

In running, we often look at the miles you run in a given week as an indicator of your general ability to perform in a race. At first glance, one who runs 20 miles per week would likely not perform at the same level as one who runs 50 miles per week, while 200 miles per week is probably going to hurt performance. But this actually depends on the type of race you're training for. If you're a sprinter, 20 miles per week may be optimal and 200 would be just silly. If you're an ultra-marathoner, 20 miles per week will not cut it and you may be topping out at 200 on some weeks. What is optimal for one is not reasonable for the other.

In software development, we often look at code coverage as an indicator of confidence that our code is adequately tested. At first glance, 20% coverage is not as good as 50% coverage, while 200% coverage is probably too much (every line of code covered by at least 2 paths). But this actually depends on the type of system you are working on. A legacy code base where the majority of the code is never touched might be sufficiently covered at 20%, assuming it is the 20% that frequently changes. If it is a greenfield code base composed of small, well-composed classes, you may exceed 200% between unit and integration tests. What is optimal for one is not reasonable for the other.
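To make the "coverage above 100%" idea concrete, here is a minimal sketch (the function names are invented for illustration): both a unit-style test and an integration-style test run every line of `normalize`, so in the informal sense used above those lines are covered twice.

```python
def normalize(name):
    """Trim whitespace and lowercase a name."""
    return name.strip().lower()

def greet(name):
    """A caller that composes normalize() with formatting."""
    return f"Hello, {normalize(name)}!"

# Unit-style test: exercises normalize() in isolation.
def test_normalize():
    assert normalize("  Ada ") == "ada"

# Integration-style test: exercises greet(), which also runs
# every line of normalize() a second time.
def test_greet():
    assert greet("  Ada ") == "Hello, ada!"

test_normalize()
test_greet()
```

A coverage tool reports each suite at 100% of `normalize`; run together, the same lines carry two distinct paths through them.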

Types of Work

In running, we can engage in base work, strength work, and speed work.

In software development, we can engage in unit testing, integration testing, and acceptance testing.

I know these are simplifications. This is not a comprehensive overview of either endeavor. I don't need all of you opinionated, highly critical, and vocal runners who follow me lambasting me because I didn't mention fartlek, intervals, zones, or VO2max. Fortunately, most developers don't share those same characteristics.

Beginning runner programs often focus on base work. This is relatively long slow distance through which we can prepare safely for any general race. Beginning developer programs often focus on test first techniques. This is relatively isolated testing through which we can write reasonably simple applications. Neither of these approaches alone is enough to create excellent performers.

To be excellent performers, we need to understand the purpose and application of each practice. We need to know when to use each in order to solve a particular challenge.

Want to improve endurance? Base work.

Building a single class? Unit testing.

Want to improve power? Strength work.

Verifying contracts? Integration testing.

Want to improve speed? Speed work.

Verifying user interactions? Acceptance testing.
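The three testing levels above can be sketched in miniature. This is a hypothetical shopping-cart feature invented for illustration (`Cart`, `PriceService`, and `checkout` are not from any real codebase): one test isolates a single class behind a stub, one verifies the contract between two real components, and one verifies what the user actually sees.

```python
class PriceService:
    """Looks up the price for a product SKU."""
    def price_of(self, sku):
        return {"apple": 2, "bread": 3}.get(sku, 0)

class Cart:
    """Holds items and totals them via a price service."""
    def __init__(self, prices):
        self.prices = prices
        self.items = []

    def add(self, sku):
        self.items.append(sku)

    def total(self):
        return sum(self.prices.price_of(s) for s in self.items)

def checkout(cart):
    """User-facing step: returns a receipt string."""
    return f"Total due: {cart.total()}"

# Unit test: one class in isolation; a stub stands in for the service.
class StubPrices:
    def price_of(self, sku):
        return 5

def test_cart_total_unit():
    cart = Cart(StubPrices())
    cart.add("anything")
    assert cart.total() == 5

# Integration test: verifies the contract between Cart and PriceService.
def test_cart_with_real_prices():
    cart = Cart(PriceService())
    cart.add("apple")
    cart.add("bread")
    assert cart.total() == 5

# Acceptance test: verifies the interaction the user observes.
def test_checkout_receipt():
    cart = Cart(PriceService())
    cart.add("apple")
    assert checkout(cart) == "Total due: 2"

test_cart_total_unit()
test_cart_with_real_prices()
test_checkout_receipt()
```

Each level answers a different question; none of them replaces the others, which is the point of the list above.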

For each type of race, the distance run and combination of work needs to vary.

A sprinter needs lower miles with a focus on speed.

A marathoner needs higher miles with a focus on base.

For each type of application, the amount of coverage and combination of testing needs to vary.

A simple service needs fewer tests with a focus on units.

A class that consumes that service needs more tests with a focus on integration.

Well-intended advice

When someone tells you that too much focus on base training can result in a slower race, they may be right. Or maybe they are a sprinter dispensing advice to a marathoner. In either case, nobody in their right mind would suggest you don't run base miles.

When someone tells you that too much focus on unit testing can lead to over-confidence and broken contracts, they may be right. Or maybe they are a full-stack developer dispensing advice to an algorithm developer. In either case, nobody in their right mind would suggest you don't unit test.

Mileage alone is not the measure of a good runner. It is the careful and deliberate combination of quantity and type of running that results in optimal performance.

Code coverage alone is not the measure of a good test strategy. It is the careful and deliberate combination of quantity and type of testing that results in optimal performance.

Choose, Observe, and Adapt

The best runners understand the tools available to them, choose an informed combination, observe their own results, and adapt.

The best developers understand the tools available to them, choose an informed combination, observe their own results, and adapt.