Why not just integration tests?

The following is a mildly modified version of a post I made to our internal TWiki web a couple of years ago, under the title Why Not Just Integration Tests?


Do we really need actual UNIT tests? Why aren’t integration tests enough?

1. What’s an integration test?

An integration test:

  • Tends to test more than one source file at a time
  • May depend on other subsystems (e.g., may write to the database, need the workflow monitor to be running, perform an actual import, do an actual release load, need ECP set up between your computer and another, need a real dialer configured, etc.)
  • Creates the data it needs for its test scenario (tables, imports, workflows, clients, accounts…)¹ – a sketch of such a test follows this list
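To make that concrete, here is a rough sketch of what such a test might look like. Every name in it other than TestCase (ImportService, WorkflowMonitor, TestData, TestClient, AccountTable, the fixture file) is hypothetical – invented for illustration, not taken from any real codebase – so it won't compile as-is; the point is the shape of a test that exercises real subsystems end to end.

    import junit.framework.TestCase;

    // Hypothetical integration test: all collaborators are invented names.
    public class ImportServiceIntegrationTest extends TestCase {

        private TestClient client;

        protected void setUp() throws Exception {
            // Create the data the scenario needs, and make sure the
            // workflow monitor is actually running.
            client = TestData.createClient("ACME");
            WorkflowMonitor.ensureRunning();
        }

        public void testImportCreatesFiftyAccounts() throws Exception {
            // Perform an actual import against the test database,
            // using a fixture file assumed to hold 50 account records.
            ImportService.run(client, "fixtures/acme-accounts.dat");
            assertEquals(50, AccountTable.countFor(client));
        }

        protected void tearDown() throws Exception {
            // Clean up everything the scenario created.
            TestData.deleteClient(client);
        }
    }

Notice how much of the test is machinery: two of the three methods exist only to stand the world up and tear it back down.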

2. Advantages of Integration Tests Over Unit Tests

There are certain things that integration tests do better than unit tests:

  • You don’t have to break as many dependencies to get the code under test. Instead, you take the code you’re changing together with all its dependencies. (A sketch of what dependency breaking looks like follows this list.)
  • They provide basic test coverage for large areas of the system, which is useful when great expanses of the codebase are not under test.
  • They test how code components work together. This is something that is not in unit tests’ job description, and it’s an important item.
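Since "breaking dependencies" comes up repeatedly below, here is a minimal before-and-after sketch of what it means, with all names invented for illustration. The original method reaches out to a real subsystem directly; after the change, it receives that subsystem through a small interface, which gives a test a seam where it can substitute a fake.

    // BEFORE (hypothetical): release() is hard-wired to the real workflow
    // monitor, so nothing can run it without that subsystem up:
    //
    //     public void release(String batchId) {
    //         WorkflowMonitor.getInstance().startRelease(batchId);
    //     }

    // AFTER: the dependency comes in through an interface the caller supplies.
    public class ReleaseLoader {

        public interface ReleaseStarter {
            void startRelease(String batchId);
        }

        private final ReleaseStarter starter;

        // Production code passes an adapter around the real monitor;
        // a unit test passes a fake that just records the call.
        public ReleaseLoader(ReleaseStarter starter) {
            this.starter = starter;
        }

        public void release(String batchId) {
            starter.startRelease(batchId);
        }
    }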

Integration tests are kind of like the Marines. They go in first and give some cover to the rest of the troops. But you can’t win a full-scale war with just the Marines – after they’ve established a beachhead, it’s time to send in the rest of the troops.

3. Disadvantages of Integration Tests

Integration tests do have disadvantages as well:

  • They are harder to read and maintain. Because integration tests generally need to perform setup for the tested code’s dependencies, their code tends to be thicker and harder to read. It’s easy to get lost in what is setup and what is the main test, and it can take more careful analysis to be sure the test itself doesn’t have a logic error in it.
  • Their code coverage is low. Even if your integration test covers several scenarios, getting anywhere near complete code coverage is usually somewhere between tediously difficult and impossible. Running a whole scenario is just too coarse a tool to get that kind of coverage. (As a side note, this is also one reason manual testing is not enough.)
  • They tend to be slow-running (30 seconds to half an hour)².
  • They take longer to write. In the short term, when testing legacy code, integration tests are still quicker to write than unit tests, since changing the production code to be unit-testable takes time and effort. But once you break the dependencies of a class or routine for unit testing, future unit tests no longer pay that cost, and integration tests become the slower ones to write (probably by a large margin). The sketch below shows the kind of test that becomes possible once a dependency is broken.
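As a hedged illustration of that payoff (all names again hypothetical), here is what a unit test can look like after dependency breaking. The class under test reports results through a small interface instead of writing to the database itself, so the test substitutes an in-memory fake and needs no subsystems set up at all.

    import junit.framework.TestCase;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical unit test: AccountParser and AccountSink are invented.
    public class AccountParserTest extends TestCase {

        // The seam created by dependency breaking.
        interface AccountSink {
            void save(String accountId);
        }

        // Simplified stand-in for the production class under test.
        static class AccountParser {
            private final AccountSink sink;
            AccountParser(AccountSink sink) { this.sink = sink; }
            void parseLine(String line) {
                String[] fields = line.split("\\|");
                sink.save(fields[1]); // the second field is the account id
            }
        }

        // In-memory fake: records what was saved, touches nothing real.
        static class FakeSink implements AccountSink {
            List<String> saved = new ArrayList<String>();
            public void save(String accountId) { saved.add(accountId); }
        }

        public void testParsesOneAccountPerLine() {
            FakeSink sink = new FakeSink();
            AccountParser parser = new AccountParser(sink);

            parser.parseLine("ACCT|1001|OPEN");
            parser.parseLine("ACCT|1002|OPEN");

            assertEquals(2, sink.saved.size());
            assertEquals("1001", sink.saved.get(0));
        }
    }

Compare this with the integration test sketched in section 1: no real subsystems to stand up, no teardown, and the whole thing runs comfortably under the half-second-per-test-case budget mentioned in footnote 2.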

4. Conclusion

Integration tests are important and won’t be going away, but to get to the next level we need to be able to unit test individual classes³.

Glossary

Dependency breaking
Changing the production code to make it so that you can test one method at a time without having to have the workflow monitor running, doing an actual release load, etc.
Test case
A test class that tests a production class or routine. Note that the term test case refers to the whole test class, not just one method on the test class.
Code coverage
The percentage of lines in the source file(s) being tested that are exercised by unit tests or integration tests. For example, 100% code coverage means that every executable line was executed by the tests. The lower this percentage, the more code we’re shipping that is never exercised by tests.

For Further Reading

The Trouble With Too Many Functional Tests – http://www.javaranch.com/unit-testing/too-functional.jsp

Footnotes

  1. Instead of having each integration test create and set up its own data, another approach is to have certain things already set up that the test environment can rely on. That has quite a bit of appeal – it would make the setup required by each integration test smaller (and they would run faster because of that); they would also be more readable and maintainable. The reason we haven’t gone with that alternative up to this point is that it’s really hard to know what side effects the production code your integration test runs may have on the system. It seemed safest to have each test be responsible for setting up what it needs. Perhaps we’ll revisit this decision in the future.
  2. 30 seconds may seem pretty quick for an integration test; but as we move toward the discipline of running existing tests often, and as that body of existing tests continues to grow, we’ll want the unit tests to be quick – like under half a second per test case. This will enable a continuous build server (for example) to run all unit tests after each commit to get near-instant feedback on whether we broke anything. The build server would still run integration tests, but because they tend to be long-running they may only be able to run once or twice a day.
  3. And routines… though that’s a bit harder, since our routines often aren’t really units of functionality – they’re more like a bag-o-labels… there’s certainly work to be done!

— DanielMeyer – 20 Oct 2007
