Tuesday, March 4, 2014

Why do we write tests?

We all write tests every day, from unit tests to browser automation tests. Occasionally I meet a team member who is not sure whether they should write a test or not, and if yes, why they should write it, or whether the test they are writing is a good quality test. An example of this situation is tests for FluentNHibernate mapping classes. The argument usually is that if you know how your table is structured, there is barely anything that needs to be designed or unit tested; if you know the rules of how to write mappings, things just work. Another example is the test below.

[Test]
public void RegistrationController_Index()
{
    var controller = new RegistrationController();

    var result = controller.Index() as ViewResult;

    Assert.That(result, Is.Not.Null);
}

This test has two obvious problems:

  1. Just asserting that the result is not null is not enough. We need a better assert.
  2. The name of the test is misleading, or rather it is not leading us anywhere.

My experience is that people who write such tests (or decide against writing important tests) fail to understand what value a test adds. To understand that, we need to answer this question: why do we write tests?

Now, I am no expert at TDD. The following paragraphs are my attempt at answering this question in a way that provides some pointers to help us understand what value a test adds.

Prevent induced defects - This is by far the most important reason why we write tests. We work in a world dominated by delivery pressure, where code is always changing. Any line of code we change can leave some other part of the application broken. Our tests should protect us from this. If you have tests that do not fail even when the functionality they are testing is completely changed, then those tests are of no use. The test above is an example of such a situation. If RegistrationController is supposed to return the view Index.cshtml and I change it to return Register.cshtml, then I might have broken an important feature, and this test does a bad job of telling me that.
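
A stronger version of the test would pin down the view name and say what behaviour it protects. Here is a minimal sketch, assuming the action explicitly returns View("Index") (with the parameterless return View(), ViewName is an empty string and the assert would need adjusting):

[Test]
public void Index_ReturnsTheIndexView()
{
    var controller = new RegistrationController();

    var result = controller.Index() as ViewResult;

    Assert.That(result, Is.Not.Null);
    // Fails if someone changes the action to render Register.cshtml instead.
    Assert.That(result.ViewName, Is.EqualTo("Index"));
}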

Safety Net - This is the second most important reason for writing tests. A well written test acts as a safety net not only when you are refactoring your code but also when you are changing an existing feature. After changing the code, if you have no failing tests, that means either that your change has not altered any existing behaviour of the system or that your tests are not good at detecting the change. Tests for NHibernate mappings are important from this standpoint. I would like a test to fail if someone removes or modifies a property on my model that is mapped to an existing database column.
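
FluentNHibernate's PersistenceSpecification makes such a safety net cheap to build. A sketch along these lines, assuming an open NHibernate ISession against a test database and a hypothetical User entity:

[Test]
public void UserMap_RoundTripsAllMappedColumns()
{
    // session is an open ISession pointing at a test database.
    new PersistenceSpecification<User>(session)
        .CheckProperty(u => u.Email, "jane@example.com")
        .CheckProperty(u => u.DisplayName, "Jane")
        .VerifyTheMappings();
}

If someone removes or renames the Email property or its column mapping, this test fails at the next commit rather than in production.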

Drive design and implementation of features - This may not apply to unit tests, but it does apply to automated acceptance criteria and integration tests. A lot of times, I start my red-green-refactor cycle with an automated acceptance criterion or an integration test. This leads nicely into the design of controllers, models and routes that efficiently satisfy the needs of the feature I am building.
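
As an illustration of that outside-in flow, the first failing test for a registration feature might pin down the route before any controller exists. A sketch, assuming Moq and the usual RouteConfig.RegisterRoutes of an ASP.NET MVC project:

[Test]
public void RegisterUrl_MapsToRegistrationController()
{
    var routes = new RouteCollection();
    RouteConfig.RegisterRoutes(routes);

    // Fake just enough of the HTTP context for routing to resolve a URL.
    var httpContext = new Mock<HttpContextBase>();
    httpContext.Setup(c => c.Request.AppRelativeCurrentExecutionFilePath)
               .Returns("~/register");

    var routeData = routes.GetRouteData(httpContext.Object);

    Assert.That(routeData, Is.Not.Null);
    Assert.That(routeData.Values["controller"], Is.EqualTo("Registration"));
}

Making this test pass forces the route, the controller and the action into existence, in that order.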

Confidence in the quality of the software - A well written and complete automated test suite is an indicator of the quality of the software. I personally do not feel confident about the quality of the software if there are missing tests or tests that are not carefully written. For a lot of people a high level of code coverage is a factor that leads to confidence in the quality of the software; for others it may be something else. But at the end of the day, the confidence does come from the quality of the automated tests.

Code documentation - By code documentation I do not mean technical documentation of your product. A lot of times, we deliver a feature and nothing happens around that feature for a few months. Then the product owner comes along with a great idea that needs changes to that feature. The people who worked on the feature originally are not around, and new people have to work on the change. They first need to understand how the existing code works. They can read the existing code if it is simple, but they are usually better off going through the tests in order to understand the code. This works quite nicely.
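
For this to work, test names have to read like a specification of the feature. Hypothetical names for a registration feature might look like this:

[Test]
public void Register_WithValidDetails_CreatesUserAndRedirectsToWelcomePage() { /* ... */ }

[Test]
public void Register_WithDuplicateEmail_ShowsValidationErrorOnEmailField() { /* ... */ }

A newcomer can read such a list and learn what the feature does before opening the production code at all.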

Lower the frustration of testers - The testers on your team are going to be frustrated when they see that nasty defect that is beyond their means to reproduce. If you have testing developers on the team, they might find a way out on their own, but not always. It helps to think about the kind of uncontrolled environment your code will run in (e.g. multiple concurrent requests, load etc.) and try to come up with tests that can validate the code's behaviour in such situations. If you have tests that, after every commit, validate the behaviour of the software in a probable uncontrolled environment, then you have a happy tester who will help with finding more important issues and save your face in front of the client.
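
A sketch of such a test, hammering a hypothetical RegistrationService with parallel calls to flush out race conditions (the service and its members are assumptions for illustration, not a real API):

[Test]
public void Register_UnderConcurrentRequests_CreatesEachUserExactlyOnce()
{
    var service = new RegistrationService();

    // Fire 50 registrations in parallel to simulate concurrent requests.
    var tasks = Enumerable.Range(0, 50)
        .Select(i => Task.Run(() => service.Register("user" + i + "@example.com")))
        .ToArray();
    Task.WaitAll(tasks);

    // A race condition in Register would typically lose or duplicate users.
    Assert.That(service.RegisteredUserCount, Is.EqualTo(50));
}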

That was not a big list, huh? So every time I have a doubt about whether I should write that next test, or whether the test I have just written adds any value, I look for positive answers to one or more of the following questions.

  1. Will this test prevent defects induced by changes to the code being tested?
  2. Will this test provide the required level of safety net to other developers, or to me a few months later?
  3. Is this test helping me with the design of the code I am going to write?
  4. Would this test increase my or my team's confidence in the quality of the software?
  5. Can this test act as good documentation of the feature being tested?
  6. Can this test help lower the frustration of other team members, especially testers?

Some questions are difficult to answer, and experience is the key to getting the best answers out. But I hope that trying to answer these questions when you are in doubt will set you on the right path.
