A Guide to Designing Your Unit Tests


Hello blog world! I’m talking to you now as Raygun’s part-time Junior Software Developer, who has recently been exposed to the joys of unit testing. In one of my older posts, I discussed how unit testing really could save your sanity as a developer, and now that I’ve helped you all see the error of your ways, I’ll be walking you through the basics of how you should go about designing your unit tests.

This is something I often got stuck on when I was first exposed to writing unit tests, not too long ago.

I would have a whole bunch of code that I had written, and some that had yet to be written, and I’d stop and think to myself, “I should really be writing some unit tests for this badass code I’m writing here…”

It would be at this point that I would turn to my co-workers and quiz them about where to start writing these tests that everyone is always talking about.

The questions that ran through my confounded brain would include things like: How many tests do I have to write? Can I just shove a bunch of asserts into one test? That’ll save some time, right? Do I need tests for my tests? Can I get away with calling them ‘Test1’ and ‘Test2’? What am I testing again?

So here’s a breakdown for any of you who have confounded brains that are getting in the way of good unit test writing:

What am I testing?

Take a look at the code that you’ve been writing or that you are about to write and pluck out the foundations of that code. Look at all of the assumptions you have made or will make in your code-writing process, e.g. I can call ‘y’ method on ‘x’ object.

It’s at the initial breakdown stage that I find myself thinking like a paranoid pessimist.

Is ‘x’ object accessible? Does the object actually contain what I think it contains? Will ‘y’ method return what I think it’s returning? If ‘y’ method mutates some data in a certain way: Am I getting the right mutated output?

Essentially, with your tests you are formally checking all of the assumptions that are explicitly stated and/or implicitly relied on for the code’s behaviour. This way, you can have automated, repeatable tests running easily in the background while you work on other projects or sneak a quick coffee break. Trust me, your stressed-out developer alter-ego will appreciate it.
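As a rough sketch of what that looks like in practice (the `Greeter` class and its `greet` method here are entirely hypothetical, invented just for illustration), each of those paranoid questions can become its own small test:

```python
import unittest


class Greeter:
    """Stand-in for the 'x' object the post talks about; purely hypothetical."""

    def greet(self, name):
        return f"Hello, {name}!"


class GreeterAssumptionTests(unittest.TestCase):
    def test_greeter_can_be_constructed(self):
        # Assumption: the object is accessible and can actually be created.
        self.assertIsNotNone(Greeter())

    def test_greet_returns_a_string(self):
        # Assumption: the method returns the type I think it returns.
        self.assertIsInstance(Greeter().greet("World"), str)

    def test_greet_formats_the_name_as_expected(self):
        # Assumption: the data is transformed the way I expect.
        self.assertEqual(Greeter().greet("World"), "Hello, World!")


if __name__ == "__main__":
    unittest.main()
```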

How many tests do I write?

This question must be answered from the mindset of any good scientific researcher. It depends.

If you have one line of code, one would intuitively think that 50 tests for that line of code may be a little overkill. However, if that one line of code is calling several methods that return various data types storing who knows how much information, 50 tests might not be enough.

I’m not going to sit here and tell you how many tests you should create for each method you may have written or intend to write, because there is just too much to take into account.

You can use various test coverage tools like dotCover or NCover (these are .NET-specific) to act as a rough guide and tell you how much of your written code is being tested, but if you are using test-driven development you may want to consider the following:

  • Have you generated (at least) one test per method?
  • In this test, or follow-on tests, have you evaluated that this method spits out the correct output?
  • Have you generated a test for every assumption you have made while designing or writing your code?
  • Is there one test per behaviour?
  • Is running this test as simple as clicking the “run tests” button?

If you can say yes to those questions, and you’re telling the honest truth, I’d say you have a decent foundation of test coverage from which to continue or start your development.
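To make that checklist a little more concrete, here’s a minimal sketch (the `divide` function is made up purely for this example) showing one test per behaviour rather than one catch-all test per method:

```python
import unittest


def divide(numerator, denominator):
    """Hypothetical method under test."""
    if denominator == 0:
        raise ValueError("Cannot divide by zero")
    return numerator / denominator


class DivideTests(unittest.TestCase):
    def test_divide_returns_the_quotient(self):
        # One behaviour: the happy path.
        self.assertEqual(divide(10, 2), 5)

    def test_divide_by_zero_raises_value_error(self):
        # Another behaviour (and an assumption about bad input) gets its own test.
        with self.assertRaises(ValueError):
            divide(10, 0)


if __name__ == "__main__":
    unittest.main()
```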

How much can I cram into one test?

Theoretically you can cram as much as you want into one test. You could have all of the setup shoved in there, and you could have it spitting out several assert statements per test, but that’s some BAD JUJU. Everyone who runs those tests, or has to read, fix or refactor them, will curse you and your name the whole time.

A good software developer will write one test with one method call and one statement that asserts or verifies the behaviour was as expected. This means easier debugging, better readability, and tests that run more smoothly and efficiently.

Essentially, you want to make sure that each test is testing its assigned code behaviour. Not only does this make the test easier to edit if the behaviour itself changes, but it also means that if that test fails, you have an increased chance of fixing the problem in a shorter amount of time (because you have narrowed down that the failing ‘Hello_World_Method_Test’ was likely due to some failure in the HelloWorld method). But also be wary of having oversimplified tests that state a null object is null. Note: the provided example has been extremely simplified to demonstrate the general point.
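For example, here’s a rough sketch of the difference (the `hello_world` function is hypothetical, standing in for the HelloWorld method above): one crammed test next to a focused one:

```python
import unittest


def hello_world():
    """Hypothetical method under test, echoing the HelloWorld example."""
    return "Hello World"


class HelloWorldTests(unittest.TestCase):
    # Crammed: several loosely related asserts in one test. If the first one
    # fails, the rest never run, and the name tells you little about which
    # behaviour actually broke.
    def test_hello_world_everything(self):
        result = hello_world()
        self.assertIsInstance(result, str)
        self.assertTrue(result.startswith("Hello"))
        self.assertEqual(result, "Hello World")

    # Focused: one call, one assertion, one behaviour.
    def test_hello_world_returns_expected_greeting(self):
        self.assertEqual(hello_world(), "Hello World")


if __name__ == "__main__":
    unittest.main()
```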

Can I call them Test One and Test Two?

No.

If you’re looking at the results of ‘x’ number of tests all in a nice output box, and you see a list of ‘Test 1’ through to ‘Test 295’, and tests ‘Test 37’ through to ‘Test 59’ have failed, I don’t know about you, but just writing those numbered test examples has almost put me to sleep. I’m getting absolutely no information at a glance about which parts of my code have failed. I’m going to need to go through and look at the failure messages of each one, taking a guess that they might be linked by one method, or it could be 23 separate methods that have all banded together to begin a code revolution.

However, if I had been given a list of tests from ‘Calculator_Should_Output_Multiple_Of_2’ through to ‘Computer_Is_Successfully_Connecting_To_X_Network’, and one or more of those tests failed, my analytic, problem-solving brain could see a clear list of what each test is testing, which narrows down what areas I might need to navigate into to fix the problem.

While more comprehensive naming is a little harder to fit on an adorable onesie for a make-believe character, your future self and anyone who comes along after you will be grateful.
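As a quick sketch of the difference (the `multiply_by_two` function and both test names are invented for illustration), compare how much the name alone tells you when a test goes red:

```python
import unittest


def multiply_by_two(value):
    """Hypothetical method under test."""
    return value * 2


class CalculatorTests(unittest.TestCase):
    # Unhelpful: when this shows up red in the test runner, the name tells
    # you nothing about what broke.
    def test_1(self):
        self.assertEqual(multiply_by_two(2), 4)

    # Better: the name alone points you at the behaviour under test.
    def test_calculator_should_output_multiple_of_2(self):
        self.assertEqual(multiply_by_two(2), 4)


if __name__ == "__main__":
    unittest.main()
```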

TL;DR:

  • Ensure that your tests are checking all of the assumptions that are explicitly stated or implicitly relied on for the code’s behaviour.
  • Make sure that each test is testing its assigned code behaviour, and that its name helps describe what behaviour is being tested.
  • Be reasonable with how much you are encapsulating in one test; if it looks like it’s getting a bit complex, split it into smaller components.
  • Make sure you are writing automated, durable tests that are easy and quick to run, and that you wouldn’t mind sticking your hand up and saying “Yep, that was me” about – I’ll be discussing how to go about generating those in my next post.

Whilst you’re here, if you’re a software developer, then you should certainly take a look at our product – Raygun. Error tracking and crash reporting was never this fun and easy. Try out a free trial today!