Test coverage - friend or foe?

Filed under: JUnit, Java, Test automation, — Tags: Bad tests, Cobertura, False positives, Good tests, No assert, Test coverage, Unit tests — Thomas Sundberg — 2012-12-18

Measuring the test coverage is something many people think is a good idea. But is it a good idea? It depends on the quality of the tests. Test coverage may be a good measurement if the tests are good. But suppose that we have a high degree of test coverage and bad tests?

I will show two examples of how to get 100% test coverage using Cobertura: one backed by a good test and one backed by a bad test.

How is test coverage calculated?

Test coverage is calculated by recording which lines of code have been executed. If the execution is driven by tests in a testing framework, then what we have measured is the test coverage.

The calculation is done using these steps:

- Instrument the code so that each executed line can be recorded
- Execute the code, for example by running the test suite
- Record which lines were executed
- Divide the number of executed lines by the total number of executable lines

We will then be able to say that, for example, 47% of the lines in the source code have been executed. If the execution is done through test code, this gives us a measurement of the test coverage.
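The bookkeeping behind this number can be sketched in a few lines of Java. This is a simplified illustration only; real tools like Cobertura instrument the byte code, and the class name CoverageCalculator is made up for the example:

```java
import java.util.HashSet;
import java.util.Set;

public class CoverageCalculator {
    // Line coverage = executed lines / total executable lines, as a percentage
    public static double lineCoverage(int totalLines, Set<Integer> executedLines) {
        if (totalLines == 0) {
            return 0.0;
        }
        return 100.0 * executedLines.size() / totalLines;
    }

    public static void main(String[] args) {
        // Suppose the instrumented class has 100 executable lines
        // and the test run touched lines 1..47
        Set<Integer> executed = new HashSet<>();
        for (int line = 1; line <= 47; line++) {
            executed.add(line);
        }
        System.out.println(lineCoverage(100, executed) + "%"); // prints 47.0%
    }
}
```

Note that nothing in this calculation knows anything about asserts; it only counts executed lines. That is the root of the problem discussed below.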

100% good test coverage

A small example where the test coverage is 100% and where the coverage is backed by good tests may look like this. First the production code:


package se.somath.coverage;

public class Mirror {
    public String reflect(String ray) {
        return ray;
    }
}

Testing this production code with this test code will give me 100% coverage:


package se.somath.coverage;

import org.junit.Test;

import static org.hamcrest.core.Is.is;
import static org.junit.Assert.assertThat;

public class MirrorTest {
    @Test
    public void shouldSeeReflection() {
        Mirror mirror = new Mirror();
        String expectedRay = "Hi Thomas";

        String actualRay = mirror.reflect(expectedRay);

        assertThat(actualRay, is(expectedRay));
    }
}

A coverage report generated by Cobertura looks like this:

We see that there is 100% coverage in this project. Drilling down into the package tells us the same thing:

More drilling shows us the exact lines that have been executed:

This coverage is good when it is backed by good tests. The test above is good because it contains no conditions or repetitions and, most importantly, it asserts on the result, so it will fail if the production code is broken.

The Maven project definition needed to generate the coverage report above looks like this:


<?xml version="1.0" encoding="UTF-8"?>
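Continuing from the XML declaration above, a minimal pom.xml with the Cobertura plugin bound to the verify phase could look like this sketch, where the project coordinates and version numbers are examples:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>se.somath</groupId>
    <artifactId>coverage</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>cobertura-maven-plugin</artifactId>
                <version>2.5.2</version>
                <executions>
                    <execution>
                        <phase>verify</phase>
                        <goals>
                            <goal>cobertura</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
```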

I have added the Cobertura plugin. I tied the goal cobertura to the phase verify so it will be executed when I execute

mvn install

Tying the goal cobertura to a phase like this will force you to execute it in every build. This may not be what you want. In that case, remove the executions section in the plugin and generate the reports using Maven like this:

mvn cobertura:cobertura

Sometimes it is better to get faster feedback than to generate a coverage report in every build.

The coverage report will end up in target/site/cobertura/.

100% bad test coverage

A bad example with 100% test coverage would be very similar. The only difference is in the test backing up the coverage numbers, and this is not possible to see from the reports. The same reports for the bad example look like this:

We notice 100% coverage in this project.

100% in all packages as well.

We also see the lines that have been executed.

What is bad with this example? The bad thing is the test that has been executed and generated the coverage number. It looks like this:


package se.somath.coverage;

import org.junit.Test;

public class MirrorTest {
    @Test
    public void shouldSeeReflection() {
        Mirror mirror = new Mirror();
        String ray = "Hi Thomas";

        mirror.reflect(ray);
    }
}

Parts of this test are good. There are no repetitions and no conditions. The bad thing is that I ignore the result from the execution. There is no assert. This test will never fail. This test will generate a false positive if something is broken.

The coverage reports will unfortunately not be able to tell us whether the tests are good or bad. The only way we can detect that this code coverage report is worthless is by examining the test code. In this case it is trivial to tell that the test is bad. For other tests, it may be more difficult to determine whether they are good or bad.


Communicating values whose quality you don't know is, to say the least, dangerous. If you don't know the quality of the tests, do not communicate any test coverage numbers until you actually know whether the numbers are worth anything.

It is dangerous to demand a certain test coverage number. People tend to deliver as they are measured. You might end up with lots of tests and no asserts. That is not the test quality you want.

If I have to choose between high test coverage and bad tests or low test coverage and good tests, I would choose a lower test coverage and good tests any day of the week. Bad tests will just give you a false feeling of security. A low test coverage may seem like something bad, but if the tests that actually make up the coverage are good, then I would probably sleep better.


All tools can be good if they are used properly. Test coverage is such a tool. It could be an interesting metric if backed with good tests. If the tests are bad, then it is a useless and dangerous metric.


Thank you Johan Helmfrid and Malin Ekholm for your feedback. It is, as always, much appreciated.





