How Not To Write Unit Tests

Over the past year and a half, I've been writing unit tests for an API responsible for sending data to and receiving data from a web and mobile application. Let this blog post serve as a healthy reminder of how not to write unit tests, based on the potholes I've dug for myself.

1. Don't read data directly from the database.

Data is likely to change over time. If your test depends on a specific piece of data being present when it runs, you're setting yourself up for failure every time that data changes. Instead of constantly updating the unit tests with new constants or variables just to make them pass, create a testing database full of faked data. That data can be shaped to fit every case you need: for example, if you've got an integer that may land anywhere between 1 and 10, you can add test cases for -1, 0, 1, 9, 10, 11, and any other circumstances your heart desires.
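
To make that concrete, here's a minimal sketch using pytest. The validate_rating function is a hypothetical stand-in for whatever part of your API actually enforces that 1-to-10 range; the point is that the boundary cases live in the test itself instead of in a live database.

```python
# Minimal sketch using pytest; validate_rating is a hypothetical stand-in
# for the real validation logic under test.
import pytest


def validate_rating(rating: int) -> bool:
    """Accepts integers from 1 to 10, inclusive."""
    return 1 <= rating <= 10


@pytest.mark.parametrize(
    "rating, expected",
    [
        (-1, False),  # well below the lower bound
        (0, False),   # just below the lower bound
        (1, True),    # lower bound
        (9, True),    # just inside the upper bound
        (10, True),   # upper bound
        (11, False),  # just above the upper bound
    ],
)
def test_rating_bounds(rating, expected):
    # The expected outcomes are defined right here, not pulled from a live
    # database, so they never drift when production data changes.
    assert validate_rating(rating) is expected
```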

2. Don't read data that may change based on the environment.

As stated above, data is subject to change. One of the major difficulties we ran into was that data, tables, or migrations present in development environments were not yet available in the acceptance or production databases when we wanted to run the unit tests.

A great example of this is a new migration that has been run on development but still needs to be deployed to acceptance. If your unit tests read information from the acceptance database before the code containing that migration has been deployed, they will fail every time. Save yourself the headache and make all unit tests environment agnostic by using that testing database.
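
One way to get there, sketched with pytest and an in-memory SQLite database (the table and column names are placeholders, not the real schema):

```python
# Minimal sketch: an in-memory testing database built fresh for every test.
# Because the schema is created inside the fixture, the test never cares
# which migrations have (or haven't) reached development, acceptance, or
# production.
import sqlite3

import pytest


@pytest.fixture
def test_db():
    conn = sqlite3.connect(":memory:")  # exists only for the duration of the test
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
    yield conn
    conn.close()


def test_user_lookup_by_email(test_db):
    test_db.execute("INSERT INTO users (email) VALUES (?)", ("fake@example.com",))
    row = test_db.execute(
        "SELECT email FROM users WHERE email = ?", ("fake@example.com",)
    ).fetchone()
    assert row == ("fake@example.com",)
```

The same test passes identically on a laptop, in CI, or anywhere else, because no environment ever has to be "ready" for it.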

3. Don't write data to the database.

Let's use user registration as the example here. You want to ensure all of the user's information can be submitted appropriately. If you have to write that data to the database every time you run the unit tests in a production environment, reporting data for stakeholders will quickly become polluted with fake users. Instead, use a setup/teardown methodology: at the end of each unit test, regardless of whether it passes or fails, remove all of the data the test has written to the database.
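
Here's what that methodology can look like with unittest's setUp and tearDown hooks. The SQLite file below stands in for whatever shared database your suite points at, and the table name and fake email are assumptions made for the sake of the sketch:

```python
# Minimal sketch of setup/teardown: tearDown runs whether the test passes
# or fails, so the fake user never lingers in reporting data.
import sqlite3
import unittest

TEST_DB_PATH = "registration_test.db"  # stand-in for the real shared database
FAKE_EMAIL = "unit-test-user@example.com"


class RegistrationTest(unittest.TestCase):
    def setUp(self):
        # Setup: connect and make sure the table the test needs exists.
        self.conn = sqlite3.connect(TEST_DB_PATH)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
        )

    def tearDown(self):
        # Teardown: remove every row the test wrote, then close the connection.
        self.conn.execute("DELETE FROM users WHERE email = ?", (FAKE_EMAIL,))
        self.conn.commit()
        self.conn.close()

    def test_registration_stores_email(self):
        self.conn.execute("INSERT INTO users (email) VALUES (?)", (FAKE_EMAIL,))
        row = self.conn.execute(
            "SELECT email FROM users WHERE email = ?", (FAKE_EMAIL,)
        ).fetchone()
        self.assertEqual(row, (FAKE_EMAIL,))


if __name__ == "__main__":
    unittest.main()
```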

4. Isolate as much as possible.

When a test fails, you want to know exactly what went wrong as fast as possible. To make that happen, keep each unit test isolated in terms of the behavior it covers. One rule of thumb I've found particularly helpful here is to explain to a team member what each test does: if you find yourself using the word AND, refactor it into two separate tests.
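
Here's a small before-and-after sketch of that rule of thumb. The register function is hypothetical; what matters is that each behavior ends up with its own test:

```python
# Minimal sketch; register() is a hypothetical stand-in for the code under test.


def register(email: str) -> dict:
    return {"email": email, "welcome_email_queued": True}


# Before: "registration creates the user AND queues a welcome email."
# When this fails, you still have to dig to learn which half broke.
def test_registration_creates_user_and_queues_email():
    result = register("fake@example.com")
    assert result["email"] == "fake@example.com"
    assert result["welcome_email_queued"] is True


# After: one behavior per test, so a red test names the exact problem.
def test_registration_creates_user():
    assert register("fake@example.com")["email"] == "fake@example.com"


def test_registration_queues_welcome_email():
    assert register("fake@example.com")["welcome_email_queued"] is True
```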

5. Use constants.

This one may seem overly simple, but the amount of time it saves is well worth the explanation. When your tests depend on specific values, like a particular user or data point, put that data into a reusable constant and reference it throughout the test suite. Typing those values out by hand can seem faster at first, but problems quickly arise when you need to refactor or add more tests.

Better yet, extracting these constants into a separate file helps your whole team see which constants are already in use and prevents duplication across tests.
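
A minimal sketch of what that separate file might look like (the file names and values are made up for illustration):

```python
# test_constants.py -- one shared home for the fake data the suite relies on.
FAKE_USER = {
    "id": 42,
    "email": "unit-test-user@example.com",
    "name": "Test User",
}
VALID_RATING = 7
```

```python
# test_registration.py -- tests import the shared constants instead of
# retyping them, so a change to the fake user happens in exactly one place.
# build_profile() is a hypothetical stand-in for the code under test.
from test_constants import FAKE_USER


def build_profile(user: dict) -> str:
    return f"{user['name']} <{user['email']}>"


def test_profile_includes_email():
    assert FAKE_USER["email"] in build_profile(FAKE_USER)
```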


As I continue to learn more about testing, I'll continue to make mistakes and document them here. Do you have any worst practices that have caused problems for your test suite? Any questions about the points mentioned above? Let me know in the comments below!