The art and craft of test-driven development

Knowing when and how to use this sometimes controversial (but often beneficial) technique is key.
Part of Issue 10, August 2019: Testing

For the most part, the debate about whether to write tests is over: Testing is a must for real-world products. After all, it provides assurance that you’re building stable software for yourself to work with later, or for the next engineer to pick up. But test-driven development, or TDD, is more controversial. People feel either a strong dislike or strong appreciation for the sometimes dogmatic application of its flow.

TDD (which Kent Beck “rediscovered” in the late 1990s) emerged from the extreme programming movement and encourages engineers to work together. The TDD maxim of “red, green, refactor” dictates that developers first write a failing test, then write just enough code to make it pass, then clean up the result. In its heyday, it was often treated as the only method of testing, rather than one of many. (In fact, its all-or-nothing popularity eventually turned some people against it.) TDD is an incredibly useful tool, however, especially if used on a case-by-case basis.

I often jump into the TDD flow when I’m adding a new feature to a product or confirming the existence of a bug. If it’s not clear how I should approach the problem, the best way for me to start is with a test. Tests force me to break down the problem into steps to reach an initial solution, while refactoring gets me to a cleaner solution.

In TDD, you essentially repeat these three steps:

Step 1: Red (write a test)

Think of the behavior you want your code to have. If that behavior has multiple parts, break it down into smaller steps, each requiring its own test. (List them out if needed.) Then write a test for each individual aspect of the behavior you’re adding. Run the tests and confirm they fail as expected. If a test doesn’t fail as expected, debug what’s broken.
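For example, suppose you’re adding a cartTotal function that sums the prices in a shopping cart. (The function and its behavior are hypothetical, invented for illustration.) A red step sketched in TypeScript with Node’s built-in test runner might look like this; since cart.ts doesn’t exist yet, the import fails, and that failure is your starting red:

// cart.test.ts: written before any implementation exists
import { test } from "node:test";
import assert from "node:assert/strict";

// This import fails until cart.ts is created. That failure is our "red."
import { cartTotal } from "./cart";

test("totals the prices of all items in the cart", () => {
  assert.equal(cartTotal([{ price: 5 }, { price: 10 }]), 15);
});

test("an empty cart totals zero", () => {
  assert.equal(cartTotal([]), 0);
});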

Step 2: Green (make the test pass)

Once a test fails as expected, write some functional code to get it to pass. The design or performance of the code doesn’t matter in this phase, but it will later. Run the individual test to get a pass, then run all of the tests within that test file to confirm that all pass. If an individual test fails, revert to the last version of the code that passed. Make that test smaller, then start the cycle over.
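Continuing the hypothetical cartTotal example, the green step is the most direct implementation that makes the tests pass; polish comes later:

// cart.ts: just enough code to turn the tests green
export function cartTotal(items: { price: number }[]): number {
  let total = 0;
  for (const item of items) {
    total = total + item.price;
  }
  return total;
}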

Step 3: Refactor

Now that you’ve got the green light from all your passing tests, you can refactor with confidence that you’re not breaking any code in your application.

Look for possible improvements to your tests and code, such as renaming variables to be clearer and more descriptive, or decoupling complex logic from a method. After each change, run the tests to ensure that everything still passes. Remember that refactoring means improving the design of the code, not changing any behavior. Adding behavior will result in a failing test, so it’s better to focus on the existing behavior rather than anticipating future needs for the system.
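In the hypothetical cartTotal example, a refactor pass might name the item type and replace the loop with a reduction. The behavior is identical, so the tests from the red step should still pass unchanged:

// cart.ts: same behavior, cleaner design
export interface LineItem {
  price: number;
}

export function cartTotal(items: LineItem[]): number {
  return items.reduce((total, item) => total + item.price, 0);
}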

After you’ve gone through the TDD cycle, you’ll want to repeat it when you add new functionality—resulting in well-tested, well-designed code. The process might feel long compared to your usual workflow, but with practice you’ll decrease the time spent on each step.

The craft of TDD

It’s easy to get lost in the doctrine of TDD. I’ve been overwhelmed by the process myself. Earlier in my career, I over-tested in some places and under-tested in others: I wrote tests that exercised third-party libraries, but not enough tests around the business logic of the application. I’ve seen others let testing damage their application’s design because they spent too much effort up front isolating parts of the application to test, making future refactoring more difficult.

Yes, it’s possible to over-test your application, especially if you follow the TDD doctrine of “don’t write a line of code without a failing test” without exception. Living by this rule, you’d have a test for every single line of production code, resulting in meaningless tests that are hard to maintain over time.

Moreover, if tests aren’t fast, they’re more of an obstacle than an asset. Limit yourself to running a subset of tests at a time so you can keep things fast.

But which ones do you run, and how frequently? This is where the art of testing comes into play.

The art of TDD

As a general rule, you should test code before submitting it for code review and deploying to production. How you arrive at writing tests, however, is different for every programmer. Usually you’ll start with a unit test—which tests individual units such as classes, modules, and functions in isolation. When you’re testing code that makes a web request, or a class that interfaces with another, you’ll create an integration test—which tests how individual units work together. Depending on your engineering team, you may also have system tests—which test the functionality of a user journey as well as the performance and load of the application—or acceptance tests—which test whether a piece of software is a usable product that meets business requirements. In my experience, most people like writing tests and see the value in them, but find that how often a test runs, as well as its speed, determines whether or not they’ll implement it.
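To sketch the distinction: the earlier cartTotal tests are unit tests, exercising one function in isolation, while something like the following checks that two units cooperate. The Checkout class and PaymentGateway interface here are entirely hypothetical, and the “gateway” is a fake so the test stays fast and deterministic:

import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical collaborators, invented for illustration.
interface PaymentGateway {
  charge(amountInCents: number): Promise<{ ok: boolean }>;
}

class Checkout {
  constructor(private gateway: PaymentGateway) {}

  async placeOrder(items: { price: number }[]): Promise<boolean> {
    const total = items.reduce((sum, item) => sum + item.price, 0);
    const result = await this.gateway.charge(total * 100);
    return result.ok;
  }
}

// Integration test: verifies Checkout and the gateway work together,
// recording charges instead of making a real network call.
test("placing an order charges the gateway for the cart total", async () => {
  const charges: number[] = [];
  const fakeGateway: PaymentGateway = {
    async charge(amountInCents) {
      charges.push(amountInCents);
      return { ok: true };
    },
  };

  const ok = await new Checkout(fakeGateway).placeOrder([{ price: 5 }, { price: 10 }]);

  assert.equal(ok, true);
  assert.deepEqual(charges, [1500]);
});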

Speed

TDD doesn’t have to be slow. Limit the tests you run to only the subset that is affected by the code you’re writing during development. If you have a rather large test suite, rely on a continuous integration server such as CircleCI to run the entire suite, either by occasionally pushing up small commits or by pushing up your entire branch when you’ve completed the task.
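What scoping a run looks like depends on your tooling. With Jest, for instance (an assumption; your runner will have its own equivalents), you can limit a run from the command line:

# Run only the tests in one file or directory
npx jest src/cart

# Run only tests whose names match a pattern
npx jest -t "cart total"

# Run only tests related to files changed since the last commit
npx jest --onlyChanged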

Segregate the types of automated tests you write and run during your TDD cycle. Reflecting the famous test pyramid first presented by Mike Cohn in his book Succeeding with Agile and popularized by Martin Fowler, you’ll want the majority of your tests to be unit tests, with few inputs and typically a single output. This ensures that your application remains flexible to change as it grows, and that you don’t have to change lots of tests in order to change your code.

Over-testing

Before you write a test, think about what coverage it will provide that’s unique to your app. If you have multiple tests for the same thing, you’ve coupled your tests and added complexity to your test suite.

If you’re spending more time changing the tests than you are writing or refactoring code, you’re probably over-testing. Don’t be afraid to throw tests away. Moreover, beware of vanity metrics when it comes to testing tools. It’s very easy to get caught up in test coverage and speed, but you don’t need to reach 100 percent. Instead, aim to ensure that your application has enough high-level tests of the desired user journeys to meet the business requirements, and a large number of unit tests for the various components that make up the business logic of the product, such as classes, modules, et cetera.
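As a hypothetical illustration (reusing the imports and cartTotal function from the earlier sketches), these two tests pin down the same behavior under different names. They’re coupled: any change to the summing logic breaks both, and the second adds maintenance cost without adding coverage.

test("totals the prices of all items in the cart", () => {
  assert.equal(cartTotal([{ price: 5 }, { price: 10 }]), 15);
});

// Duplicates the test above: a candidate for deletion.
test("adds up each item's price", () => {
  assert.equal(cartTotal([{ price: 10 }, { price: 5 }]), 15);
});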

TDD for teams

Set the tone that testing—and TDD—is valuable for your team, and more people will incorporate techniques like these into their work. TDD in particular can increase your team’s confidence about the code they’re putting into production and make the application more resilient.

Overall, TDD provides a mechanism to quickly get feedback and leads to better software design through refactoring. It encourages collaboration with teammates in order to break down a problem and find a solution that is testable. Allow your teammates time to adjust to TDD if they’re new to it and encourage them to find the approach that works best for them. With TDD, teams benefit from better-designed code, clearer documentation, and having safeguards in place against introducing new issues as the software grows.

About the author

Rushaine McBean is a senior software engineer at Teachable specializing in building JavaScript and Ruby on Rails web applications. Outside of work, she’s an organizer of Manhattan.js and EmpireConf, and enjoys playing and making music.

@copasetickid
