How Much Test Coverage is Enough?


In an ideal world, we’d have QA and testing teams testing their apps on every browser and operating system you can think of. The real world operates a little differently, though. The sheer number of browser and operating system combinations available today makes testing every single one impractical. Development and testing teams therefore settle on a subset of all the platforms their end-users could be running.

But this does raise an important question: At which point in the testing process do you feel confident about your application? How many browser/OS combinations will it take to get there? Put simply, how much test coverage is actually enough?

Test Coverage and Code Coverage

Although test coverage and code coverage rest on similar underlying principles and are often mistaken for one another, they are entirely different things.

Code Coverage deals with unit testing practices that aim to exercise every part of the code at least once, and it is typically handled by developers.

Test Coverage, however, is about making sure that every requirement is tested at least once, and it naturally falls under the purview of a QA team. The exact qualification for a covered requirement can vary based on the project or the testing team.

For instance, some testing teams may deem a requirement covered if there is at least one test case written against it. Others may require that at least one team member is assigned to test it, or that all the test cases associated with it have been executed.
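These criteria can be made concrete in code. The sketch below is purely illustrative: the function names and the `req` dictionary shape are assumptions for this example, not part of any standard tool.

```python
# Hypothetical sketch: three common ways a team might define "covered".
def covered_by_test_case(req):
    """Covered if at least one test case targets the requirement."""
    return len(req["test_cases"]) > 0

def covered_by_assignment(req):
    """Covered if at least one team member is assigned to test it."""
    return len(req["assignees"]) > 0

def covered_by_completion(req):
    """Covered only when every associated test case has been executed."""
    cases = req["test_cases"]
    return len(cases) > 0 and all(c["done"] for c in cases)

# One requirement with two test cases, only one of them executed so far.
req = {
    "test_cases": [{"done": True}, {"done": False}],
    "assignees": ["alice"],
}
print(covered_by_test_case(req))   # True
print(covered_by_assignment(req))  # True
print(covered_by_completion(req))  # False
```

The same requirement can count as covered under one criterion and uncovered under another, which is why teams should agree on a definition up front.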

Visualising Test Coverage Percentages

To better understand how test coverage percentages are determined, imagine a project with 10 requirements and 100 tests created for them. When all 100 tests between them target each of the 10 requirements, without missing any, we consider that adequate test coverage at the design level.

If you execute 80 of the 100 tests but they target only 6 of the 10 requirements, test coverage is 60%, since 4 requirements remain untested. Requirements take precedence here, even though 80% of the tests have been completed. This is coverage measured at the execution level.
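The arithmetic above can be sketched in a few lines. This is a minimal illustration of the idea, assuming each test records which requirements it targets; the `REQ-n` identifiers and dictionary shape are made up for the example.

```python
def coverage_pct(requirements, tests):
    """Percentage of requirements targeted by at least one executed test."""
    covered = {r for t in tests for r in t["requirements"]}
    return 100 * len(covered & set(requirements)) / len(requirements)

requirements = [f"REQ-{i}" for i in range(1, 11)]  # 10 requirements

# 80 executed tests that, between them, touch only REQ-1 .. REQ-6.
tests = [{"requirements": [f"REQ-{(i % 6) + 1}"]} for i in range(80)]

print(coverage_pct(requirements, tests))  # 60.0
```

Even though 80% of the tests ran, coverage is 60% because it is counted against requirements, not test executions.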

One hundred percent test coverage is not a realistic goal. Most teams simply lack the resources to approach that level of coverage, and once you factor in the total number of browser/OS combinations, the task becomes almost impossible. Your testing concern goes beyond major browser releases and OS versions. Take Linux, for instance, with its roughly 300 distributions available today. If you treated each distribution as a distinct operating system, you’d face a myriad of combinations to test just to get complete coverage for Linux-based platforms alone.

Factor in mobile applications and devices, OS versions, and their respective browsers, and your list of possible platform configurations grows exponentially longer still.
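To see how quickly the matrix grows, consider this deliberately tiny, made-up platform list; real matrices list dozens of entries per axis, so the product is far larger.

```python
from itertools import product

# Illustrative (made-up) platform lists; real matrices are far larger.
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
desktop_os = ["Windows 10", "Windows 11", "macOS 14", "Ubuntu 22.04"]
mobile = [("Android 13", "Chrome Mobile"), ("Android 14", "Chrome Mobile"),
          ("iOS 16", "Safari"), ("iOS 17", "Safari")]

# Every desktop OS paired with every browser, plus the mobile pairs.
desktop_combos = list(product(desktop_os, browsers))
total = len(desktop_combos) + len(mobile)
print(total)  # 20 combinations from just a handful of entries
```

Four operating systems and four browsers already yield 16 desktop combinations; each new OS version or browser multiplies the count rather than adding to it.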

The simple fact of the matter is this: no matter how you approach the problem, total test coverage, or anything remotely resembling it, is just not going to happen.

Setting the Right Test Coverage Goals

Even though 100% test coverage is nigh impossible, you still need to test enough OS/browser combinations to be confident that your software runs properly in most of the environments your end-users rely on. Here are a few ways to set the right test coverage goals:

#1 How many possible platforms does your application support?

It is vital to contextualize your test coverage strategy by understanding how many of the possible platforms your customers could actually be using.

That number can vary widely depending on which operating systems your platform supports. For instance, maybe you’re writing an iOS-only app. In that case, your total number of possible platforms is far smaller than if your app supports Windows, Android, and iOS. As a result, testing against only three OS/browser combinations yields much greater coverage for an iOS-only app.

In certain cases, you might also support just one browser. This is rare these days, but if, for instance, your app officially runs only on a particular type of mobile device using the browser provided by the manufacturer, then your total number of supported platforms will also be lower.

#2 How automated are your tests?

When you run tests manually, there’s an unavoidable trade-off between the extent of your test coverage and the time and resources your team has to devote to testing.

The greater your test coverage, the more time and money the tests cost.

Thus, if your tests are run manually (or mostly manually), it’s important to be conservative about the number of platforms you test against; otherwise, your resource costs will be too high.

This issue largely disappears once you automate tests. With test automation, you can achieve higher rates of coverage without a proportional increase in time or resource expenditure. And with defect management tools complementing your testing arsenal, cataloguing and tracking bugs can be a breeze too.
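In an automated setup, the same check can be driven across the whole platform matrix in one run. The sketch below is a stand-in: `check_login_page` is a hypothetical placeholder for a real browser-automation call (e.g. via Selenium or Playwright against a cloud device grid), and the platform lists are invented.

```python
import itertools

def check_login_page(os_name, browser):
    """Placeholder for a real automated check.

    A real implementation would launch the given browser on the given OS
    (often via a cloud device grid) and assert on page behaviour; here we
    just record a passing result to show the shape of the loop.
    """
    return {"os": os_name, "browser": browser, "passed": True}

oses = ["Windows 11", "macOS 14", "Ubuntu 22.04"]
browsers = ["Chrome", "Firefox"]

# One automated run sweeps every OS/browser pairing.
results = [check_login_page(o, b) for o, b in itertools.product(oses, browsers)]
failures = [r for r in results if not r["passed"]]
print(f"{len(results)} configurations run, {len(failures)} failures")
```

Adding a platform to the matrix is one line in a list, not another round of manual testing, which is what breaks the coverage/cost trade-off.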

#3 How bug-prone is your app?

It’s no secret that some apps are more bug-prone than others.

Factors such as the size of your codebase (which you can measure crudely by total lines of code), the number of runtime variables (which can create configuration issues that cause app problems on certain platforms), and the extent to which your software interacts with hardware (whose behaviour is often device-specific and requires more tests) all affect how likely your app is to experience bugs. Consider these factors when deciding how much test coverage you should be aiming for.
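One way to act on this is a rough relative score combining the three factors above. To be clear, this is a toy heuristic invented for illustration, with arbitrary weights, not an industry formula; its only use is comparing apps against each other when prioritising coverage.

```python
def crude_risk_score(loc, runtime_vars, hardware_apis):
    """Toy heuristic (arbitrary weights) to compare relative bug-proneness.

    loc           -- total lines of code (crude codebase-size proxy)
    runtime_vars  -- count of runtime configuration variables
    hardware_apis -- count of device/hardware interfaces the app touches
    """
    return loc / 10_000 + 2 * runtime_vars + 3 * hardware_apis

# A small, mostly self-contained app vs a large, hardware-heavy one.
print(crude_risk_score(50_000, 4, 1))    # 16.0
print(crude_risk_score(200_000, 12, 6))  # 62.0
```

The higher-scoring app would justify a proportionally larger slice of the platform matrix in its coverage target.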

There is no single strict rule for the level of test coverage that is acceptable to most companies. The test coverage goals above are a good general guideline and will get you up to speed, but your testing needs will still vary. Automation tools and defect management tools can also be powerful additions when gauging how much test coverage is actually enough for your organisation.