May 30, 2015
I see a lot of teams debating whether or not they should be writing automated tests. In fact, I see even more teams not writing any tests at all, but that’s another story.
These debates would make at least some sense if there weren't one teeny problem: testing is not a choice.
Now, what do I mean by that?
What I mean is that there’s always someone testing your product. If developers don’t do it, testers do. If there are no testers in your team, stakeholders do. And if stakeholders don’t do the testing, end users do. This chain may be shorter or longer, but I’m sure you got the idea.
Let’s consider each of the roles in the chain separately in the order from the worst to the best.
We all know that end users are the worst testers there are. And I don’t mean their ability to test; they might be testing your product much better than some professional testers do. What I mean is that they’re the worst testers for your business.
If your end users are the ones running into bugs and reporting them, I wouldn't expect much loyalty from them. A couple of bugs from time to time probably won't drive them away, but if they're running into bugs all the time, or into the same bugs over and over again, don't be surprised if your bottom line starts to shrink.
The next worst testers are stakeholders or customers: the people who commissioned the product and are paying you to develop it. Their time is the most expensive, and they have plenty of more important things to do. If they're the ones testing the product, something is goddamn wrong in your company.
Luckily, most stakeholders know better and do not bother doing the testing themselves and instead hire professional testers.
Now, professional testers doing the testing is a much better proposition than anything mentioned above. The only problem with testers is that there are dumb testers and smart testers.
Dumb testers test products manually. They click-click-click-type-type-click nonstop. As you can imagine, this approach is not very scalable.
If your product is of any significant size and complexity and is constantly evolving, clicking and typing through the whole product and generating all kinds of conditions will take more and more time.
It gets especially problematic as iterations become shorter and shorter. It's pretty common to have 2-4 week iterations these days. Clicking and typing through an ever bigger and more complex product hits its scalability limits pretty fast.
To solve this problem, you either hire more and more testers, ending up with a whole army of them at some point, or the testers learn to automate their tests. If you can afford to fund a whole army of testers and still make a decent profit, by all means do that. But if you can't, your testers had better start automating their tests and let machines do what they do best: repetitive and boring work.
Now, if your testers start automating their tests, they move from the dumb category of testers to the smart one. Smart testers are smart because they automate the boring, repetitive tasks and focus on a higher level of testing: exploratory testing.
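To make this concrete, here's a minimal sketch of what automating one repetitive manual check might look like. `validate_username` and its rules are hypothetical stand-ins for real product code; the point is that the checks a tester would otherwise repeat by hand every release run in milliseconds, every time.

```python
# Hypothetical example: `validate_username` stands in for real product code.
# The assumed rule is 3-20 alphanumeric characters.

def validate_username(name: str) -> bool:
    """Accept usernames of 3-20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()

def test_username_rules():
    # The checks a tester would otherwise click and type through by hand.
    assert validate_username("alice")
    assert not validate_username("ab")        # too short
    assert not validate_username("a" * 21)    # too long
    assert not validate_username("bad name")  # whitespace not allowed

test_username_rules()
```

Once a check like this is written, running it a thousand times costs nothing; that's the scalability manual clicking never achieves.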
If you hire a few smart testers that will automate testing, you’ll be doing better than most companies out there.
But there is an even better option.
Developers writing the product are the best candidates for testing it. But this works only if they're the smart kind: the ones writing automated tests instead of clicking and typing through the whole product after each change. As you can imagine, that wouldn't scale well either.
Developers are best suited for writing tests because they understand the product more deeply than anyone else on the team. After all, they're the ones writing the code, and that forces them to be very specific about what it does.
Another advantage developers have over testers is that they can write white-box tests: tests that set up state for each case and validate the resulting state directly. They don't have to use the UI to first input the test data and then use the same UI to verify that the result is the expected one.
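Here's a minimal sketch of that idea, with a hypothetical `ShoppingCart` class: the test assigns the internal state directly and checks the computed result directly, with no UI in the loop at all.

```python
# Hedged sketch: a white-box test sets up state directly on the object
# under test instead of driving a UI. `ShoppingCart` is hypothetical.

class ShoppingCart:
    def __init__(self):
        self.items = {}  # name -> (unit_price, quantity)

    def total(self) -> float:
        return sum(price * qty for price, qty in self.items.values())

def test_cart_total():
    cart = ShoppingCart()
    # Set up state directly: no clicking through product pages.
    cart.items = {"book": (12.50, 2), "pen": (1.25, 4)}
    # Validate the resulting state directly as well.
    assert cart.total() == 30.0

test_cart_total()
```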
They can also write tests at different levels, from UI tests at the top down to unit tests at the bottom. The lowest-level tests, unit tests, execute the fastest, but on their own they don't give the required confidence in the overall product, so end-to-end tests still have to be written.
What's good about this, though, is that developers can mix and match levels to balance the required confidence against the speed of the tests. Testers can't do that, since they have to do everything through the UI.
Another advantage developers have is that they can force the code to behave in a way that's not easy, or sometimes even impossible, to reproduce with black-box tests written by testers. Developers can simulate an edge case and then verify it's handled properly. An example of this would be a third-party service your product depends on going down.
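A minimal sketch of that example, with hypothetical names: `get_price` stands in for product code that depends on an external exchange-rate service, and the test injects a fake service that fails, an edge case no amount of clicking through the UI could trigger on demand.

```python
# Hypothetical example: `get_price` depends on an external exchange-rate
# service passed in as `fetch_rates`. A developer test injects a failing
# fake to simulate the third-party service going down.

def get_price(amount_usd: float, fetch_rates) -> str:
    """Format a price in EUR, falling back to USD if the service is down."""
    try:
        rate = fetch_rates()["EUR"]
        return f"{amount_usd * rate:.2f} EUR"
    except ConnectionError:
        # Graceful degradation when the dependency is unavailable.
        return f"{amount_usd:.2f} USD (exchange service unavailable)"

def test_service_down():
    def broken_service():
        raise ConnectionError("simulated outage")
    assert get_price(10.0, broken_service) == "10.00 USD (exchange service unavailable)"

def test_service_up():
    assert get_price(10.0, lambda: {"EUR": 0.90}) == "9.00 EUR"

test_service_down()
test_service_up()
```

Passing the dependency in as a parameter is one way to make the failure injectable; mocking libraries achieve the same effect without changing the signature.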
To summarize, developers can write tests that give more confidence, execute faster, and can simulate exceptional conditions.
Wait, now, does it mean that if developers write tests, you don’t need testers anymore? Not really.
Testers have a different kind of mentality than developers. While developers are creative by nature, testers are destructive, in a good sense. Only a tester, at least a good one, would try to insert a megabyte worth of text into a single-line text field just to see what happens. Developers don't do that.
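That destructive instinct translates directly into a test. Here's a hedged sketch, with hypothetical names and an assumed length limit: `save_field` stands in for the product's input handler, and the test feeds it far more input than any sane user would.

```python
# Hypothetical example: `save_field` stands in for the product's handler
# of a single-line text field; the 256-character limit is assumed.

MAX_FIELD_LENGTH = 256

def save_field(text: str) -> str:
    """Reject multi-line or oversized input for a single-line field."""
    if "\n" in text:
        raise ValueError("single-line field")
    if len(text) > MAX_FIELD_LENGTH:
        raise ValueError("too long")
    return text

def test_one_megabyte_of_text():
    huge = "x" * (1024 * 1024)  # 1 MB of text into a single-line field
    try:
        save_field(huge)
        assert False, "oversized input should have been rejected"
    except ValueError:
        pass

test_one_megabyte_of_text()
```

Once a tester dreams up an abuse like this, capturing it as code means the product is protected against it forever, not just until the next release.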
So, the best option is the combination of both developers and testers. Developers write tests and testers help them write better tests. Testers engage in exploratory testing and come up with new ways to break the product. They then collaborate with developers to write tests that cover those cases as well.
I hope you now see why debates about whether to test or not don't make sense. And you now know the best combination of roles for writing the best tests.
Just remember one last thing: the later in a development cycle a bug is found, the more expensive it is to fix.