Friday, February 26, 2010


Coverage helps us to know what we are not testing. It doesn't help us much to know whether we are testing well what we do test.

There is ongoing debate about the usefulness of coverage. See, for example, the summary of a discussion on the subject, from which I took the idea that the only thing coverage really shows is the untested areas.

For my part, I wrote a rant (in Spanish) about using the coverage percentage as a metric, especially by people who learned the concept only as a byproduct of learning TDD.

But if we keep these characteristics in mind, coverage can be an important aid to the testing activity (whoever performs that activity).

There are many types of coverage: for example, test coverage of business objectives or requirements, of identified risks, of the code, or of the inputs and outputs of the program.

Code coverage is the most common coverage metric, probably because it is easier to measure than the others. Even within code coverage there are many possible criteria: class, method, line, statement, decision, path, etc.
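These criteria are not equivalent. A minimal sketch (the function and values are illustrative) shows how a single test can reach 100% statement coverage while still missing a decision outcome:

```python
# Hypothetical function to contrast statement and decision coverage.
def apply_discount(price, is_member):
    discount = 0
    if is_member:          # decision with two outcomes: True and False
        discount = 10
    return price - discount

# One test reaching the True branch executes every statement above...
assert apply_discount(100, True) == 90
# ...but decision coverage also requires exercising the False outcome:
assert apply_discount(100, False) == 100
```

This is why stating which coverage criterion a percentage refers to matters as much as the number itself.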

In any coverage metric, we first define the universe of points, and then we measure how many of those points are covered by at least one test case.

For example, in line coverage, each line of code is a point. If that line is executed by running a test, the point is said to be covered.

You can take the coverage percentage as the number of points covered divided by the total number of points.
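The computation is the same for any universe of points; a minimal sketch (the point names are illustrative) also shows the more valuable output, the uncovered points:

```python
# Generic coverage computation: 'universe' is the set of all measurable
# points; 'covered' is the subset touched by at least one test case.
universe = {"p1", "p2", "p3", "p4", "p5"}
covered = {"p1", "p3", "p4"}

percentage = 100 * len(covered & universe) / len(universe)
uncovered = universe - covered   # the more valuable output

print(f"coverage: {percentage:.0f}%")        # coverage: 60%
print(f"not covered: {sorted(uncovered)}")   # not covered: ['p2', 'p5']
```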

But as mentioned earlier, it is more valuable to know which points are not covered.

The tool

The jXmlCoverage tool measures coverage based on the XML documents used by the System Under Test (SUT).

Many SUTs use XML as input, output, or configuration; Web Services are a typical example. For these SUTs an XSD is often available, or can be created, defining a contract that the corresponding XML must comply with.

We are interested in the degree to which the tests exercise the different values in the XML, especially the values that are not used at all.

For this, we must define the universe we want to measure. What we do is decide, for each element defined in the XSD, the interesting equivalence partitions. For example, for an integer they could be positive, zero, and negative values, plus values out of range. We call each partition a sub-domain. Each sub-domain is a point on which coverage is measured.
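A minimal sketch of classifying a raw XML value into the integer sub-domains described above. The function name and the range limits are illustrative assumptions, not part of jXmlCoverage:

```python
# Assumed valid range for the element; illustrative only.
LOW, HIGH = 1, 1000

def sub_domain(raw):
    """Map a raw XML text value to one of the integer sub-domains."""
    try:
        value = int(raw)
    except ValueError:
        return "not-an-integer"
    if value < 0:
        return "negative"
    if value == 0:
        return "zero"
    if LOW <= value <= HIGH:
        return "positive-in-range"
    return "out-of-range"

assert sub_domain("42") == "positive-in-range"
assert sub_domain("-3") == "negative"
assert sub_domain("99999") == "out-of-range"
```

Each possible return value is one point of the testing universe for that element.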

After the definition and configuration phase, we have a testing universe, on which we can measure coverage.

All the XML documents used by the tests are evaluated against the sub-domains, counting how many times each sub-domain is used (covered) by the test cases.
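The counting step can be sketched as follows. The element name, its partitions, and the sample XML are all illustrative; the real tool is driven by the XSD and its configuration:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Sub-domains (points of the universe) for a hypothetical <age> element.
universe = {"negative", "zero", "positive-in-range", "out-of-range"}

def age_sub_domain(value):
    n = int(value)
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive-in-range" if n <= 120 else "out-of-range"

# XML inputs as they might appear in two test cases.
test_inputs = ["<person><age>30</age></person>",
               "<person><age>-1</age></person>"]

# Tally how many times each sub-domain is hit across all test inputs.
hits = Counter()
for doc in test_inputs:
    for age in ET.fromstring(doc).iter("age"):
        hits[age_sub_domain(age.text)] += 1

uncovered = universe - set(hits)
print(sorted(uncovered))   # ['out-of-range', 'zero']
```

The report of uncovered sub-domains tells the tester which input partitions no test case ever exercised.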

As a result, we can obtain the sub-domains that were not covered.

As with all coverage metrics, one must avoid the temptation to treat a covered sub-domain as a well-tested one.
