Many of us have seen the test automation pyramid: unit tests at the bottom, service tests in the middle, and UI tests at the top. The pyramid was originally introduced by Mike Cohn in his book Succeeding with Agile in 2009.
But why are UI tests at the top, with the fewest tests? Why should we write more unit and service-level tests than automated UI tests? I will share what I've learned about this and why we should shape our test suites like this pyramid.
UI Tests – Good in Theory but…
At first, UI tests sound good: they test the real application the way real users use it. Isn't that exactly what we do when we test our software? Wouldn't it be most valuable to automate the manual testing that takes so much time with every new release? That is true: at their best, UI tests are fabulous tests that make sure everything works. But in practice, UI tests have many weaknesses, which is why we also need service and unit tests. We just won't see those weak spots until we implement some UI tests and start to run them regularly.
| Pros | Cons |
| --- | --- |
| Test the whole real product | Are fragile (can break after even a tiny UI change) |
| Test real integrations | False-negative results aren't rare |
| Test real usage of the product | Take a long time to run (slow feedback) |
| Show what was actually done when run | Don't tell why a test failed or where the bug is in the code |
| | Are expensive (take time to write) |
| | Test a small amount of code (e.g. can't test all code paths) |
| | Can't test error handling |
As the table above shows, UI tests have many cons alongside their really good pros. In real life, UI tests don't always pass. There can be an issue with a 3rd-party REST API, and the tests fail even though our code is fine. But when we run the test again, it passes, and we can't reproduce the failure reliably. Some day a button click takes 2 seconds when it usually takes 1 second, and the test fails. A button's position changes, and the test fails. Unfortunately, many of these problems don't show up until we have run automated UI tests for a while. At first they run fine, but sooner or later there will be false-negative results. UI tests also demand more maintenance than smaller service or unit tests.
End-to-End Tests – UI Tests for Backend Applications
Many authors put end-to-end tests at the top of the pyramid instead of UI tests. In practice, both do the same thing: they test that the application runs thoroughly from end to end, as it is actually used. If an application has no UI, we should write some end-to-end tests instead. They have practically the same pros and cons as listed in the table above.
Power of Unit Tests
Unit tests come to the rescue. They have the strengths that UI tests are missing:
- Are fast to run: even hundreds of tests can run in one minute (compared to UI tests, where even 10 tests per minute is really rare).
- Are reliable: if a unit test fails, there is a failure in the code (no false negatives).
- Tell the reason for a failure (e.g. expected "Lassi" but was "lassi").
- Point to the exact line to fix.
- Are cheap to write (especially with TDD).
- Can test most of the code paths and reach near 100% code coverage.
- Can test many different inputs much faster than UI tests.
- Can test error handling (what if the REST API call returns an error?).
Those are really powerful reasons to write unit tests, and they are why we should write many more unit tests than UI tests. The biggest weakness of unit tests is that they don't test integrations, which are a crucial part of any product; still, they test the code thoroughly and are reliable.
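To make the points above concrete, here is a minimal sketch of two unit tests, using hypothetical `format_name` and `fetch_user_name` functions invented for illustration. The first shows the precise failure message a unit test gives (expected "Lassi" but was "lassi", plus the exact line); the second shows error handling that a UI test can rarely force, by simulating a failing REST client with a mock:

```python
import unittest
from unittest.mock import Mock

# Hypothetical units under test, for illustration only.
def format_name(name: str) -> str:
    return name.strip().capitalize()

def fetch_user_name(client) -> str:
    # Error handling is easy to cover here: instead of waiting for a
    # real outage, the test injects a client that raises an error.
    try:
        return format_name(client.get_name())
    except ConnectionError:
        return "unknown"

class NameTests(unittest.TestCase):
    def test_capitalizes_name(self):
        # On failure the runner reports both values and the failing line,
        # e.g. expected 'Lassi' but was 'lassi'.
        self.assertEqual("Lassi", format_name("lassi"))

    def test_returns_fallback_when_rest_call_fails(self):
        client = Mock()
        client.get_name.side_effect = ConnectionError("API down")
        self.assertEqual("unknown", fetch_user_name(client))
```

Hundreds of tests like these run in seconds with `python -m unittest`, which is what makes running them after every small change feasible.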
One big advantage of unit tests over UI tests is that we can run them after any small change to the code: when we rename a function, remove unused code (the tests will tell us if it wasn't actually unused), add or remove a parameter, and so on. It would simply be impossible to run UI tests after each little change, but unit tests are so fast that we can run them constantly while coding (live unit testing) and get quick feedback on whether our code is OK.
Complement with Service-Level Tests
If we had only UI and unit tests, we wouldn't have a test automation pyramid: we would have an hourglass, which is considered an anti-pattern. Service-level tests, often called integration tests, complete our pyramid. Their pros and cons fall between those of UI and unit tests:
- Test integrations (unit tests don't).
- Are faster to run than UI tests, but slower than unit tests.
- Produce fewer false-negative results than UI tests.
- Can tell the reason for a failure with some accuracy.
- Test bigger parts of the product than unit tests, but smaller than UI tests.
- Are less fragile than UI tests.
- Can test some error handling.
I have found service-level tests to be a powerful way to test integrations and configurations in an installed application. With them I can test many things that I can't test with unit tests or UI tests. In particular, I can test dependencies and dependency injection with service-level tests. And I still have some contact with the code (as with unit tests), which makes me feel more comfortable with the service-level tests I write.
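As a sketch of what testing dependency injection can look like, here is a hypothetical `GreetingService` whose repository is injected. Instead of mocking the repository (as a unit test would), the service-level test wires in a real, in-memory implementation and exercises both units together; all names here are invented for illustration:

```python
class InMemoryGreetingRepository:
    """Stands in for a real database-backed repository."""
    def __init__(self):
        self._greetings = {"en": "Hello", "fi": "Hei"}

    def get(self, language: str) -> str:
        return self._greetings[language]

class GreetingService:
    # The repository is injected, so tests choose what to wire in.
    def __init__(self, repository):
        self._repository = repository

    def greet(self, name: str, language: str = "en") -> str:
        return f"{self._repository.get(language)}, {name}!"

def test_service_and_repository_together():
    # Real wiring: the service talks to a real repository implementation,
    # so the test covers the integration between the two units.
    service = GreetingService(InMemoryGreetingRepository())
    assert service.greet("Lassi") == "Hello, Lassi!"
    assert service.greet("Lassi", "fi") == "Hei, Lassi!"
```

If the two units stop fitting together (say, the repository's `get` method is renamed), this test fails, while isolated unit tests with mocks would still pass.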
One good example of service-level tests is API tests: tests that call a real API and check that it returned or did what was expected. Especially with a microservice architecture, where there can be many APIs, it is good to write API tests.
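The shape of such an API test can be sketched like this. To keep the example self-contained, it starts a hypothetical tiny HTTP service in-process and then tests it exactly the way a real API test would test a deployed microservice: call the endpoint, assert on the status code and the response body. The `/health` endpoint and its payload are assumptions for the sketch:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Hypothetical tiny service standing in for a real microservice."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

def test_health_endpoint() -> str:
    # Start the service on a free port, then test it over real HTTP.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
            assert resp.status == 200
            assert json.loads(resp.read()) == {"status": "ok"}
    finally:
        server.shutdown()
    return "passed"
```

Against a deployed service, only the URL would change; the assertions on status and payload stay the same.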
Often we should even replace some of our UI tests with service-level tests:
> If two units do not integrate properly, why write an end-to-end test when you can write a much smaller, more focused integration test that will detect the same bug?
>
> – Mike Wacker (https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html)
I love unit tests, and in my opinion no one should write code without them. They are the crucial base of the test automation pyramid. Unit tests give us fast feedback on whether our code works, and they tell us the exact place to fix when a test doesn't pass. Higher-level tests can't serve us that precisely.
But unit tests alone aren't enough to make test automation work. We also need at least service-level tests to verify that the pieces of our puzzle work together, and finally some UI/end-to-end tests. Otherwise, we won't know whether our application runs as expected.
> I always argue that high-level tests are there as a second line of test defense. If you get a failure in a high level test, not just do you have a bug in your functional code, you also have a missing or incorrect unit test. Thus I advise that before fixing a bug exposed by a high level test, you should replicate the bug with a unit test. Then the unit test ensures the bug stays dead.
>
> – Martin Fowler (https://martinfowler.com/bliki/TestPyramid.html)
Sources
- Just Say No to More End-to-End Tests by Mike Wacker on testing.googleblog.com (the main source and influence for this blog post).
- TestPyramid by Martin Fowler on his web page.
- The Forgotten Layer of the Test Automation Pyramid by Mike Cohn on mountaingoatsoftware.com.