Test automation

Filed under: Software development, Test automation — Tags: Why test — Thomas Sundberg — 2014-01-28

A lot of people and companies are talking and thinking about test automation. For many of them, it seems to be the holy grail of software development.

I am, however, sometimes wondering if they have thought this through properly. There are a few questions you need to answer before you start a project that aims to place a product, or a project if you want, under automated testing. It seems to me that a lot of people haven’t thought enough about the question why or the question what. Focus is often on the question how and almost never on where or when. How is obviously important, but if why and what haven’t been properly understood, then the how is uninteresting. Where is also important to decide upon. Some things should be tested through the final user interface, some things should be tested on the inside of a system. Some things should be tested with unit tests and some things at other levels. Finally, it is important to decide when the tests should be implemented. Tests can be implemented first, last or during the implementation of the system. This may not seem like a big deal, but it turns out that it is.

But let us start with the why.

Why?

Why do you want to test something automatically? Why isn’t manual testing sufficient? My short answer to these questions is fast feedback. Anyone who makes a change needs fast feedback. It is like driving a car: you need constant feedback to adjust where you are headed. If you don’t get it, you will crash.

So fast feedback is the ultimate reason why you want to automate the testing of software. With this established, we come to the next question: what do you need to do to get this feedback? You need to deploy the software somewhere. The first thing that comes to mind is a test environment. This implies automating deployment to the test environments. It must happen as soon as possible, and as fast as possible, after each change. When the software has been deployed, it is time to run a test suite on it. This test suite must be large enough to catch errors and small enough to execute in a short time. That balance is hard to find. It is easy to end up with a test suite that is too large and therefore takes too long to run. The largest problem with a complete, and therefore long-running, test suite is that the feedback cycle becomes too slow.

Dividing the tests into a fast and a slow test suite is one possibility. The fast suite is executed as fast as possible and the slower one is executed afterwards to catch corner cases. This will catch the largest problems fast and will eventually give you feedback on the state of the entire system.
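One way to arrange such a split could be JUnit 4 categories. Here is a small sketch; the marker interface, the test class and the suite are made up for illustration, and in a real project each class would live in its own file. Maven profiles or separate source sets can achieve the same thing.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// A test class with one fast and one slow test. The slow test is tagged with a category.
public class OrderServiceTest {

    @Test
    public void calculatesTheOrderTotalInMemory() {
        // fast: everything it needs is in memory
    }

    @Test
    @Category(SlowTest.class)
    public void storesTheOrderInTheDatabase() {
        // slow: talks to a real database
    }
}

// Marker interface used to tag the slow tests.
interface SlowTest {}

// The fast suite excludes everything tagged as slow and can be executed on every change.
// The slow suite runs afterwards and covers the rest.
@RunWith(Categories.class)
@ExcludeCategory(SlowTest.class)
@SuiteClasses(OrderServiceTest.class)
class FastSuite {
}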

Who?

Who is it that wants to have test automation implemented? A simple answer could be management. Management wants the testing to be automated to save time at each release; it is a lot faster than manual testing. It is also a lot more accurate than manual testing, since each test is performed in exactly the same way every time.

The simple answer is, however, not enough. I really don’t think that it is only management who is interested in test automation. Everyone involved in the development of software will benefit from automating repetitive tasks. This will speed up the feedback cycle significantly and it will therefore raise the quality of the software built. Fewer bugs will slip into the product and the result will be a less stressful working environment.

It is possible to release the software faster if all testing can be carried out fast and frequently. The agony of preparing a release candidate is greatly reduced. The result is that it is possible to release to the end users often. The benefit of this is faster, better and more accurate feedback from the users. This feedback can be used to steer the development of the product in a better direction, rather than just steering it in the direction someone in marketing hopes is the way to the next killer app.

So even if it may seem as if management is pushing for test automation to save time, increase quality and enable more frequent and smaller releases, it is actually something that the entire organization benefits from.

Another question is what you should test. Let me explore this a bit.

What?

What are you supposed to test? The simple answer is obviously everything. But testing everything is not possible, at least not given a time constraint, which is usually the case. You must give the developers feedback while they still remember what they did. This normally means within minutes, and in some cases up to an hour. Not the next day, next week or next month.

It isn’t reasonable to test anything that is out of your control or outside your responsibility. If your system is part of a larger context, which is usually the case, then you most likely have to stop testing at the boundaries of your responsibility.

You must remove, or take control over, all moving parts that affect your system. This means taking control over the environment where the system lives. If you depend on a web service that another system provides, you must be in total control of that service. You must know what it will respond with for a given query. In other words, you mock or stub the systems your system depends on.
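As a small sketch of what that can look like in code — the CurrencyService interface, the PriceCalculator and the exchange rate are made up for illustration, and a mocking library such as Mockito can play the same role as the hand-written stub:

import java.math.BigDecimal;

// The production code only knows about this interface, not about the real web service.
interface CurrencyService {
    BigDecimal exchangeRate(String from, String to);
}

// Production code that depends on the external service through the interface.
class PriceCalculator {
    private final CurrencyService currencyService;

    PriceCalculator(CurrencyService currencyService) {
        this.currencyService = currencyService;
    }

    BigDecimal priceIn(String currency, BigDecimal priceInEuro) {
        return priceInEuro.multiply(currencyService.exchangeRate("EUR", currency));
    }
}

// In the tests, a stub with a known, fixed answer replaces the real web service,
// so you always know what the dependency will respond with.
class StubCurrencyService implements CurrencyService {
    public BigDecimal exchangeRate(String from, String to) {
        return new BigDecimal("8.85");
    }
}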

It is very important that you know what you are testing so you don’t test things by coincidence. A good plan and well defined boundaries are very important.

When you have sufficient answers for why and what, it is time for the how.

How?

This is usually the starting point for many people, and it is important, but it is not the most important thing in my book. This is about tools and how to use them. It is also about how to take control over the moving parts that will make your tests flaky and unreliable if you are not in control of them.

What do you want to take control over? I already mentioned an example above with a web service. A web service is an example of an external dependency that you must have control over. A list of things that you must be in control of would, at the very least, contain these things:

External dependencies

These may be other services that your system relies on, either to deliver information to or to get information from.

Test data

A lot of systems operate on data, and it is therefore extremely important that the test data is well known and doesn’t contain inconsistencies that will break a test suite. It is very important that the test data doesn’t change between two test executions in such a way that the result for a given scenario becomes different. If you count the number of rows in a table and expect them to be 42, then it is very important that they stay at 42 no matter how many times you execute the test suite.
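A small sketch, assuming JUnit 4, where the test creates its own well known data set before every execution; the test class and the data are made up for illustration:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class CustomerReportTest {

    private List<String> customers;

    @Before
    public void createWellKnownTestData() {
        // The test owns its data and rebuilds it before every execution,
        // so the count is 42 no matter how many times the suite is run.
        customers = new ArrayList<String>();
        for (int i = 0; i < 42; i++) {
            customers.add("customer-" + i);
        }
    }

    @Test
    public void thereAreAlwaysFortyTwoCustomers() {
        assertEquals(42, customers.size());
    }
}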

Time

Time moves relentlessly forward and may be hard to control. But being in control of time means that you are able to test things that should only occur at a given point in time. Suppose that you test that a cron job starts at a specific time. It is not reasonable to have a test that waits until that point in time arrives, especially if it is something that should only occur on Sundays at 07:15. You must be in charge of time and be able to manipulate it as you need.

Being in charge of time is difficult if the system you test is deployed in an environment that is hard to manipulate. But it is easy to be in charge of time in a unit test. Some of these things should therefore be tested in unit tests and not anywhere else.
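A minimal sketch of what being in charge of time can look like; the Clock interface, the WeeklyReportJob and the schedule are made up for illustration, and java.time.Clock can serve the same purpose in newer Java versions:

import java.util.Calendar;
import java.util.GregorianCalendar;

// The production code asks an injected clock for the time instead of asking the system.
interface Clock {
    Calendar now();
}

class WeeklyReportJob {
    private final Clock clock;

    WeeklyReportJob(Clock clock) {
        this.clock = clock;
    }

    boolean shouldRunNow() {
        Calendar now = clock.now();
        return now.get(Calendar.DAY_OF_WEEK) == Calendar.SUNDAY
                && now.get(Calendar.HOUR_OF_DAY) == 7
                && now.get(Calendar.MINUTE) == 15;
    }
}

// In a unit test, a fake clock makes it Sunday 07:15 without any waiting.
class SundayMorningClock implements Clock {
    public Calendar now() {
        return new GregorianCalendar(2014, Calendar.JANUARY, 26, 7, 15); // a Sunday
    }
}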

Random

A core property of randomness is that it is hard to predict. That is why it is called random. But if you inject the source of randomness into a function, then it is easy to take charge and control it. This is, again, hard to test from the outside of a system, but easy if you test it on the inside. Randomness is a great candidate for a unit test.
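A minimal sketch where the source of randomness is injected and replaced with a seeded one in the test; the Dice class is made up for illustration:

import static org.junit.Assert.assertTrue;

import java.util.Random;

import org.junit.Test;

// The production code gets its source of randomness injected.
class Dice {
    private final Random random;

    Dice(Random random) {
        this.random = random;
    }

    int roll() {
        return random.nextInt(6) + 1;
    }
}

public class DiceTest {

    @Test
    public void aRollIsAlwaysBetweenOneAndSix() {
        // A fixed seed gives the same sequence of rolls in every test execution.
        Dice dice = new Dice(new Random(42L));
        int roll = dice.roll();
        assertTrue(roll >= 1 && roll <= 6);
    }
}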

Quality

How do you know that the tests are correct? What is testing the tests? This is the same question as who is guarding the guards? If you are not writing or reviewing the tests, then you have to trust the people who are writing and reviewing them. The best way to make sure that the tests are good is to write them as simply as possible and to always pair program when they are written.

Simplicity means code that can be verified by inspection. It is hard to write code that is that easy to verify, but it is doable. It is easier if you write these tests as a pair; two brains on one problem will very often solve it better than one brain on one problem.

Writing code that is so simple that it can be verified to be correct by inspection is really difficult. A better way could be to use tools designed for communication and clarity. Cucumber is one great tool that allows you to specify your requirements in plain text, following a simple format with a prerequisite, an action and an expected behaviour. The format looks like this:

Given a computer program in a specific state

When I use it in a specific way

Then the end result should be a new, correct state

This format is easy to read and it is therefore possible to use it as a communication tool between those who know what they want and those who know how to implement it. It can be used to bridge the gap between developers, testers and users. The developers are expected to be able to implement it, but they don’t necessarily know everything about the problem the users want to have solved. The users know what they want, or at least which problem they want to have solved, and should be able to define the expected outcome given a specific state and action. The testers are somewhere in the middle; they must have good knowledge about the problem and they need to know something about development to be able to verify that the result is correct.
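To give an idea of how the developers can glue such plain text steps to code, here is a small sketch, assuming a cucumber-jvm version where the Java step annotations live in cucumber.api.java.en; the calculator and the step wording are made up for illustration:

import static org.junit.Assert.assertEquals;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class CalculatorSteps {

    private int result;

    @Given("^a calculator showing (\\d+)$")
    public void a_calculator_showing(int start) {
        result = start;
    }

    @When("^I add (\\d+)$")
    public void i_add(int value) {
        result = result + value;
    }

    @Then("^the calculator should show (\\d+)$")
    public void the_calculator_should_show(int expected) {
        assertEquals(expected, result);
    }
}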

Where?

Where should you test, then? It obviously depends on a number of things. One thing to consider is speed. Another is integration and a third is acceptance.

The base for automated tests should always be unit tests. They are fast and test small, well defined functionality.

Unit tests

The fastest place to verify functionality is in unit tests. A proper unit test is executed in memory and doesn’t use any external resources. It doesn’t use a database or the file system. Anything it needs is available in memory. Unit tests are therefore blazing fast. It is possible, and reasonable, to execute thousands of unit tests in seconds. A few of the situations above should be tested in unit tests; functionality that depends on time or randomness is a good candidate.

Integration tests

Tests that use external resources verify the integration between these resources and your implementation. They are slow, and it is not possible to execute many thousands of them in seconds. It is, however, often possible to execute many of them in minutes. Since integration tests are slow, you must limit the number of integration tests you write and execute. You should not limit them so much that you miss things, but you should always question a test and explore the possibility of using a unit test instead. Sometimes it isn’t possible, but I think you will be surprised how often it is.

Acceptance

Acceptance tests define whether your system is done or not. They are often executed through the user interface. They are usually very slow, so you cannot afford to have many of them. You should limit them to the important happy path flows through your system and some expected unhappy flows.

The acceptance tests should be defined by your end users and should be an automated implementation of what they would test themselves before they accept a delivery.
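A small sketch of what such an acceptance test could look like, assuming Selenium WebDriver and JUnit; the URL, the element ids and the expected text are made up for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CheckoutAcceptanceTest {

    private WebDriver browser;

    @Before
    public void startBrowser() {
        browser = new FirefoxDriver();
    }

    @Test
    public void aCustomerCanCompleteTheHappyPathCheckout() {
        // Drive the system through the same user interface the end users will use.
        browser.get("http://localhost:8080/shop");
        browser.findElement(By.id("add-to-cart")).click();
        browser.findElement(By.id("checkout")).click();

        assertEquals("Thank you for your order!",
                browser.findElement(By.id("confirmation")).getText());
    }

    @After
    public void stopBrowser() {
        browser.quit();
    }
}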

Selecting test level

The base for automated tests should be unit tests. They are fast and it is possible to be in charge of things like time and randomness.

Integration tests are built on top of the unit tests and play a vital role in hooking up the system to other components. They are slow and should therefore be used with care.

Testing through the user interface is very slow and should therefore only be used as the last part of a testing suite. You should limit the testing through the user interface and as much as possible try to catch all bugs with either integration or unit tests.

When?

The last important thing is timing. When should you implement the tests? There are three options: before you implement the production code, after you have implemented the production code, or while you are implementing the production code.

My personal preference is to create the acceptance tests first, before anything else is implemented. This is my way to automate the testing and to know when I am done with the functionality needed. The acceptance test should automate what I would like to show the end users at a demonstration. With the acceptance tests in place, I start implementing unit tests and never write more production code than I need to make the unit tests pass. This is what is called Test Driven Development, TDD. It allows me to have tests that drive the implementation forward step by step. I know that every test tests something that wasn’t there before, because every test starts with a failure. My next step is to implement enough code to make it pass. I iterate back and forth until the acceptance test passes. Then I know that I am done.

When I am done with a test, I always ask myself: is there anything that should be refactored here? I know that the tests will save me when I refactor. Any refactoring is safe to do as long as all tests pass. It happens that I refactor the test code as well, but I never refactor both test code and production code at the same time. I always use the other half as a safety net. I don’t trust myself not to introduce a bug, even if I didn’t intend to.
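As a very small sketch of one such TDD step — the Greeter class is made up for illustration — the test is written first and fails, then just enough production code is written to make it pass, and then it is time to look for refactorings:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Step 1: the test is written first. It fails, since Greeter does not even exist yet.
public class GreeterTest {

    @Test
    public void greetsAPersonByName() {
        assertEquals("Hello, Ada!", new Greeter().greet("Ada"));
    }
}

// Step 2: the simplest production code that makes the test pass.
class Greeter {
    String greet(String name) {
        return "Hello, " + name + "!";
    }
}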

Adding the tests last is possible, but it comes with risks. You may discover that the system that has been implemented is very difficult to place under test. You may miss an execution path. You may also make a mistake that verifies a bug in the production code as the expected behaviour. The last problem can be avoided if you are strict about always verifying the transition from a failing to a passing test, but it is a dangerous path that I can’t recommend, even though very many developers add the tests after the implementation of the production code. A missed execution path can be caught if you use a tool that tells you where you have code coverage in the production code.

Placing a system that wasn’t implemented with testing in the developers’ minds under test may prove to be more or less impossible. If everything is connected to everything in a large monolithic lump, you may find that it is very hard to break into that lump and test it. You may have to test from the outside, and that may be very slow and very difficult as well. Adding tests last is possible, but definitely not desirable.

Conclusion

Before you start with test automation, think about why you want it. Play five whys with someone until you find the root cause.

When you know why, start thinking of what. What should you test? Where are the boundaries of your responsibility? Where are the seams to other systems that you are not in charge of? Locate them and do whatever it takes to get control over them.

When you have done this homework, it is time to start thinking about how you want to implement the tests and where you want to implement them. Consider hiring people with a strong development background to write the automated tests. Consider getting help from expert consultants. The experts have thought about not only the how, but also the what and the why. If they haven’t, then they aren’t really experts.

Finally, try to make sure that the experts you get help from have understood and can teach Test Driven Development, TDD. They should teach your current developers TDD as well as help them get the system under test. Implementing a system test driven is a culture change and it may be difficult to get all developers to agree on it and use it. It is, however, possible if you have someone at the office who is willing to show good examples and teach the developers how to do it.

