Channel: Mojo Lingo

Coding To The Test


Automated tests are a wonderful tool for improving the quality of your software process. The Rails community has fostered a strong culture of automated testing, and although we at Mojo Lingo work with telephony rather than web applications, we're card-carrying Rubyists and believe in the value of automated tests.

But there's still a question that comes with automated testing: How much is enough? What, exactly, should you test? It's simple to say "everything", or "the important things", but neither statement is a practical guide for daily work. Nor is either statement true, because not everything - or even everything important - can be efficiently verified by an automated test.

Let's go on a journey together, reader - you and I and Alice are going on a trip to the magical land of North Dakota, where we will learn a great deal about test strategy and not very much about North Dakota.

Distinguished From South Dakota By Being North Of It

It's a rather short trip, because Alice actually isn't going to North Dakota. Not yet, at least. Here's the story: Alice wants to buy a house in North Dakota, but simply can't make room in her schedule to go there personally right away. She can check listings on the Internet and speak with agents by phone, but there's no guarantee the information is accurate or complete. Fortunately, she has a friend who will be in North Dakota with some free time he's willing to use on Alice's behalf. Alice can ask him to investigate properties, and he'll report back about the neighborhood, the landscaping, the plumbing, the price - whatever she wants.

What should Alice actually have her friend check out? His time (and patience!) aren't infinite, after all, and it might be better to look at multiple houses quickly instead of exhaustively checking a single location. What kind of a checklist should Alice create for her house-hunter?

There's one category of things that we can immediately strike from the list as unimportant: anything that won't change Alice's mind. If Alice hates the color red, but is willing to buy a house even if she'll have to repaint the walls blue once she's moved in, then she might as well not check the color scheme at all. That's just as true if Alice loves the color red instead, as long as she doesn't love it enough that she'd buy a house because it's already painted. It doesn't matter whether Alice likes or dislikes a particular thing; it matters whether she would change her mind after knowing about it.

Then what about more significant problems - ones that would definitely torpedo any house? For example, obviously Alice should make sure to check whether the house is suffering a velociraptor infestation. As every viewer of the film Jurassic Park knows, velociraptors are not only swift, lethal hunters, they possess a primitive intelligence. Alice definitely doesn't have the budget to hire an anti-velociraptor mercenary strike force to clean up the neighborhood's dinosaur problem.

But she probably shouldn't worry, because, after all, velociraptor infestations don't really happen. Alice should focus on problems that would change her mind, but crime, noise, and traffic are much better red flags to investigate, even though all of them put together aren't as bad as getting eviscerated by a carnivorous theropod. The likelihood of a problem is just as important as its magnitude.

Alice's checklist should therefore include things that are big enough that Alice can make up her mind based on the results, but could plausibly happen in a house search. It's also a good idea for the checklist to be short and simple within those limits, and not just to make things easier on the house-hunter. The entire reason Alice doesn't go and look at houses in person is that she doesn't have the time to do that work. If she gets back a report of several hundred houses examined on an exhaustive 300-point scorecard, she's just recreated the original problem - Alice wouldn't have time to read all about the houses any more than she could have examined them personally.

Follow the Code

What does this parable of the house-hunter mean for automated testing? Like Alice, we need to investigate something, but don't have time to do it in person. It would take far too much time and effort to manually run and verify each piece of a software project after making a change. Instead, we write automated tests to perform that laborious process for us. However, tests are not useful in and of themselves. The goal of automated testing is to produce working code, but the tests don't produce code - they produce information, which we, the developers, then use to help us write and debug software.

I follow a few rules of thumb for determining which tests to write and how much effort to spend maintaining them - practical guides for determining if you're writing too many tests, not enough, or the wrong kind.

Rule of Thumb 1: A well-tuned test case should sometimes cause you to revise your code after a test run.

There's no point in Alice asking her house-hunter to check something out if she wouldn't change her mind based on the information. If you're going to buy the same house and just paint the walls anyway, then you don't need to check the decor ahead of time. Likewise, there's no benefit from a test case if it provides information you only ignore. This is obvious for troubled projects where failing tests have been left unmaintained - if you ignore test failures anyway, then the test suite doesn't benefit you. However, it's also true for test cases that always pass! The goal of a test is not to pass or to fail, but to let you know when you need to revise your code.

I don't measure the value of a test by how much functionality it covers or how much code coverage it provides - I measure it by how often running the test has made me go back and fix a bug that the test revealed. If the answer is "never", it's worth asking whether the time spent writing and maintaining that test could've been better used elsewhere.

Rule of Thumb 2: Your test cases should roughly reflect problems you have had in the past - or reasonably expect to have in the near future.

Alice doesn't need to check for velociraptors at her house, and you don't need to write tests that check for them either. A test suite should focus on the bugs that are most likely to actually occur - and the likelihood of a particular bug depends on both the project and the developer doing the work.

If you notice a mistake that you personally often make, then it's a good idea to write a test for it... and on the flip side, you don't need to write tests for code that has high quality, or features that rarely have bugs. The most common bad example is a test that covers the #initialize method in Ruby and checks whether variables have been set. It's unlikely that you'll ever write a real bug that gets caught by such a test.
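As a concrete illustration of that bad example, here's a minimal sketch (the `Listing` class and its attributes are invented for this post) of a test that merely restates the assignments inside #initialize. The only way it can fail is if someone deletes those assignments outright, which is not a mistake developers plausibly make:

```ruby
# A hypothetical Ruby class whose #initialize does nothing but assign attributes.
class Listing
  attr_reader :address, :price

  def initialize(address, price)
    @address = address
    @price   = price
  end
end

# The low-value test described above: it mirrors #initialize line for line,
# so it can only catch the removal of an assignment -- a bug that almost
# never occurs in practice.
listing = Listing.new("123 Main St, Fargo", 185_000)
raise "address not set" unless listing.address == "123 Main St, Fargo"
raise "price not set"   unless listing.price == 185_000
puts "initialize test passed"
```

By Rule of Thumb 1, a test like this will never send you back to revise your code, so the time spent writing and maintaining it is effectively wasted.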

Rule of Thumb 3: Test code should be cheaper to maintain than the code it tests.

Alice doesn't benefit from having someone else do the house-hunting unless that's actually easier than doing it herself. If an automated test finds a bug for you that would have taken half an hour to smoke out in manual testing, that's great - but you still haven't actually saved any time unless you spent less than half an hour writing and maintaining the test up until that point. Unit tests rarely have that problem, but functional or integration tests that have extensive and fragile set-ups sometimes do, especially tests that try to verify the user interface in detail.
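To make the fragility concrete, here's a sketch contrasting two ways of testing the same output. The `render_price_tag` helper is invented for illustration; the point is that pinning a test to exact markup makes it break on cosmetic changes, while asserting only the behavior you care about keeps maintenance cheap:

```ruby
# Hypothetical view helper that wraps a price in markup.
def render_price_tag(price)
  "<span class=\"price-tag\">$#{format('%.2f', price)}</span>"
end

html = render_price_tag(185_000)

# Brittle: pinned to the exact markup, so adding a CSS class or changing
# whitespace fails the test even though the pricing logic is still correct.
raise "markup changed" unless html == "<span class=\"price-tag\">$185000.00</span>"

# Cheaper to maintain: assert only that the formatted amount appears,
# which is the behavior a user (or Alice) would actually notice.
raise "price missing" unless html.include?("185000.00")

puts "both checks passed"
```

The first assertion costs you a test-suite fix on every redesign; the second survives anything short of a genuine pricing bug.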

Consider using code review, manual testing, or static analysis tools to handle problems that are difficult or expensive to catch with automated tests. Not every problem can be solved by writing more software code, and that includes the problem "my software code has bugs"!

Passing Grade

Those are my three rules of thumb for designing an automated test suite. If the tests are easy to maintain and help me catch issues in my code, I'm happy; otherwise, I try to improve the suite, striking a balance between maintenance cost and the likelihood that I'd write a bug the included tests could catch. If it looks like I can't make progress on one front without sacrificing the other, that's a sign that automated tests have already done what they can for the project and it's time to switch strategies.

An automated test suite is a tool, and like any tool, needs to be used with an understanding of its strengths and weaknesses, costs and benefits.

The post Coding To The Test appeared first on Mojo Lingo.


Viewing all articles
Browse latest Browse all 59

Trending Articles