In the software engineering industry, microservices are a hot topic of conversation right now. There are entire conference tracks dedicated to them and an array of new tools and products being built to manage them. There are numerous benefits to building your software from microservices, but there is no question that they introduce a new set of complex challenges, including how best to test them. There is no simple set of rules for testing microservices, yet effective testing is becoming increasingly important as more and more applications are built on a microservices architecture. Let's take a look at the most common microservice testing methods.
By providing input and inspecting the produced output, we can test the behaviour of a microservice from the client's perspective. When a microservice depends on other services, we can replace them with mocks. This lets us test a single microservice in isolation and check whether it is working as it should. Testing components in isolation has real benefits: these tests are cheap to maintain and fast to execute, so they identify problems early in the development cycle. However, there are disadvantages that cannot be ignored.
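As a minimal sketch of testing in isolation, the example below replaces a downstream service client with a mock using Python's standard `unittest.mock`. The service, its methods, and the response shape are all hypothetical names chosen for illustration, not from any real system.

```python
from unittest.mock import Mock

# Hypothetical service under test (illustrative names): it looks up a
# user via a downstream "accounts" service and formats a greeting.
class GreetingService:
    def __init__(self, accounts_client):
        self.accounts_client = accounts_client

    def greet(self, user_id):
        user = self.accounts_client.get_user(user_id)
        return f"Hello, {user['name']}!"

# Isolated test: the real accounts service is replaced by a mock that
# encodes our assumption about the shape of its response.
accounts = Mock()
accounts.get_user.return_value = {"name": "Alice"}

service = GreetingService(accounts)
assert service.greet(42) == "Hello, Alice!"
accounts.get_user.assert_called_once_with(42)
```

Note that if the real accounts service changed its response shape (say, renaming `name` to `full_name`), this test would still pass; nothing here verifies that the mocked response matches reality.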
The problem with tests that mock dependencies is that they make assumptions about how the real counterpart behaves, and nothing verifies those assumptions. Once a dependency changes, the assumption is no longer valid: we are testing against a mock that no longer represents its real-life counterpart. Likewise, when we change the API of a microservice, we cannot tell whether we have broken any of its consumers. We can think of testing microservices in isolation as unit testing, where the microservice is the unit under test. By definition, unit testing only covers the functionality of the units themselves, so it misses integration errors and broader system-level errors.
If we cannot verify the correctness of a single microservice in isolation, the next obvious step is integration testing. With integration testing, you exercise the communication paths and interactions between microservices to detect issues. The tests check for incorrect assumptions each service holds about how to interact with its peers: essentially, whether they talk to each other correctly.
Integration testing is slower and more expensive to maintain than testing microservices in isolation, and integration testing environments must be set up and managed. When a microservice talks to multiple services, we need to test every communication path, which increases both the complexity of the test suite and the time it takes to run.
End-to-end testing can be a difficult task. Because its scope is larger by definition, it takes more time and is more error-prone. The more moving parts there are in a test, the flakier it becomes: there is a higher chance it will fail not because of broken functionality, but because of something like a network glitch.
Flaky tests are the enemy. When they fail, they don't tell us much. We re-run our CI builds in the hope that they will pass later, only to see check-ins pile up, and suddenly we find ourselves with a load of broken functionality. When we detect flaky tests, it is essential that we do our best to remove them. Otherwise, we start to lose faith in a test suite that "always fails like that." A test suite with flaky tests can become a victim of what Diane Vaughan calls the normalisation of deviance: the idea that, over time, we become so accustomed to things being wrong that we start to accept them as normal and not a problem.
Vaughan famously coined the term “normalisation of deviance” in her book The Challenger Launch Decision, where she analysed the series of events leading up to NASA’s Space Shuttle Challenger Disaster. She points out that over time, seemingly minor unsafe practices grew into something that was considered normal, as the faults did not cause an immediate catastrophe. However, ultimately this built up, resulting in the project quite literally combusting in spectacular fashion.
From this analogy, we see that unstable, untrustworthy tests coupled with a blasé mindset can lead to disaster. Once accepting flaky tests becomes the norm, we lose trust in the process and start ignoring tests that sometimes pass and sometimes fail. Eventually, this can result in a major problem.
Consumer Driven Contract Testing (CDCT) is an alternative to traditional integration testing that gives you tests which are quicker to execute and more maintainable at scale. It is a method of verifying that all microservices speak the same language. The method works the way it sounds: you test the contract agreed between API consumers and API providers. You essentially create what look like unit tests that validate your APIs are functioning according to the contract at any given time.
A set of expectations forms a contract that is produced by the consumer and shared with the provider. Contract obligations are verified by the provider with tests that can be run in isolation, without having to set up integration testing environments. This lets providers evolve independently while getting immediate feedback when they've broken any of their API consumers.
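To make the pattern concrete, here is a minimal sketch of the idea (not Pact itself; all names, shapes, and handlers are illustrative): the consumer records its expectations as data, and the provider replays each expected request against its own handler in isolation.

```python
# A contract is just data: the consumer's expectations of the provider.
# The service names and payload shapes here are hypothetical.
contract = {
    "consumer": "web-frontend",
    "provider": "user-service",
    "interactions": [
        {
            "description": "a request for user 42",
            "request": {"method": "GET", "path": "/users/42"},
            "response": {"status": 200, "body": {"id": 42, "name": "Alice"}},
        }
    ],
}

def handle_request(method, path):
    # Stand-in for the provider's real routing/handler logic.
    if method == "GET" and path == "/users/42":
        return 200, {"id": 42, "name": "Alice"}
    return 404, {}

def verify(contract):
    # Provider-side verification: replay every expected request against
    # the provider's handler -- no running consumer, no shared
    # integration environment needed.
    for interaction in contract["interactions"]:
        req, expected = interaction["request"], interaction["response"]
        status, body = handle_request(req["method"], req["path"])
        assert status == expected["status"], interaction["description"]
        assert body == expected["body"], interaction["description"]
    return True

assert verify(contract)
```

Because the contract is plain data, the consumer can generate it from its own tests and hand it to the provider, which verifies it on every build.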
Contract testing can be used anywhere you have two services that need to communicate with each other. However, it is especially useful in environments with many services, such as a microservice architecture.
This type of testing isn't new; however, it has recently seen a resurgence as microservice-based applications have grown in popularity. While there are a number of ways to create consumer-driven contracts, the rest of this post looks at an open-source framework called Pact.
Pact is a contract testing tool. In Pact terminology, a pact is a contract made between a consumer of an API and the API provider, with each pact being a JSON document containing a collection of interactions. These interactions describe what the consumer is expected to send to the provider and the minimal response the consumer expects the provider to return. Pact ensures that services are communicating with each other as described in the contract.
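For illustration, a pact file follows roughly this shape (the service names and payload values below are hypothetical; the overall structure follows the Pact specification):

```json
{
  "consumer": { "name": "web-frontend" },
  "provider": { "name": "user-service" },
  "interactions": [
    {
      "description": "a request for user 42",
      "request": {
        "method": "GET",
        "path": "/users/42"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 42, "name": "Alice" }
      }
    }
  ],
  "metadata": {
    "pactSpecification": { "version": "2.0.0" }
  }
}
```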
With Pact, the contract is created by the consumer and verified by the provider. A major advantage of this pattern is that only the parts of the API actually used by consumers get tested. This means that provider behaviour not used by any current consumer can be changed without breaking tests.
One downside you'll quickly discover is that you now have to think about how to share contracts between builds and how to manage their versions. Fortunately, the Pact Broker is a tool that can help with that.
To use Pact successfully, everybody must be on board: both the consumer and the provider project teams have to agree to use it. The consumer has to set expectations and generate the contract, and the provider has to verify it. If any of these steps are missing, the approach will not work.
If both the consumer and the provider microservice are managed by the same team, adopting Pact should be relatively straightforward. If the services belong to different teams within the same organisation, both teams need to agree to practise Consumer Driven Contract Testing.
On the other hand, if you are providing a public API, meaning that you don’t know who your consumers are, CDCT won’t work. Since the consumer defines how the provider behaves, if we don’t know who the consumers are, the provider has nothing to verify.
To choose the best way of testing your microservices, you have to identify what best fits the needs of your application and your team. Investing time in learning about the available alternatives and how best to execute them will pay dividends in the end.
This article is based on my talk “Consumer Driven Contract Testing with Pact”.
For a more in-depth introduction to consumer driven contract testing with Pact, I’ve put together the following blog posts.