- Using DDD terminology, which layer would you choose to test against, the domain, the application, or the infrastructure?
- If you regard your tests as clients (impersonating real clients) of your application, then what would be the boundaries of your SUT (System Under Test)?
- Where and what are your test doubles?
- To which ports of the application does the test code bind itself to drive the test cases and verify its assertions?
These four questions basically ask the same thing in different words.
Where are the boundaries?
Testing against the Application layer
The BDD approach of testing directly against the Domain rarely makes sense, in my opinion. The Use Cases are implemented in the application layer; without its orchestration the Domain is pretty useless. Testing against the application layer, on the other hand, is a very attractive approach. All the logic needed to fulfill the requirements of the application is there, and you don't get bogged down in infrastructure details. You can simply stub out the external dependencies, like the database/messaging/web-service/... configuration and implementation. Borrowing from the Ports and Adapters terminology, you hang test-stub adapters on your ports and get away with it quickly and elegantly. And the tests, unburdened by IO or network latencies, run very fast. Thus if at some point in the application's life you decide to change the type of the DB, or to use JMS instead of REST, you don't have to change a single line of the test code. But...
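To illustrate the idea of hanging a test-stub adapter on a port, here is a minimal sketch; the class and method names (Decrypter, FakeDecrypter, canBreak) are my own for illustration, not from the actual codebase:

```java
import java.util.HashMap;
import java.util.Map;

// The port: the application layer depends only on this interface.
interface Decrypter {
    String decrypt(String encryptedMessage);
}

// A test-stub adapter hung on the port: no network, no latency.
class FakeDecrypter implements Decrypter {
    private final Map<String, String> codebook = new HashMap<>();

    // The test primes the stub with the messages it can "break".
    void canBreak(String encrypted, String decrypted) {
        codebook.put(encrypted, decrypted);
    }

    @Override
    public String decrypt(String encryptedMessage) {
        return codebook.get(encryptedMessage);
    }
}
```

In production the same port would be implemented by an adapter calling the remote decrypting component; the application layer never knows the difference.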
End-to-end testing
But those infrastructure details must be there in production. Without real end-to-end tests, where for example your test code actually calls the web-service endpoint of your component and verifies its expectations by querying the database, how can you be sure that the DB really works the way you intended? What if your Camel configuration has a typo, rendering your whole messaging layer useless? You'll never find out until manual testing. With black-box-like end-to-end tests, after a successful "mvn clean install" you can sleep in peace, knowing that whatever you've done, it hasn't broken any existing functionality. The price you pay is that your test suite runs much slower and the test code is tied to the Adapters' implementations.
Choosing between the two approaches is a difficult decision, and I've long been thinking about how we could have the best of both worlds. Maybe we can postpone the decision.
Best of both worlds - demonstration by an example
Let's see how a very simple Cucumber test would look against the very simple application from the previous post. In a nutshell, our app receives encrypted messages through a SOAP-based web service, asks another component via REST to break the encryption, then stores the result in an Oracle DB. The words in italics are implementation details that shouldn't appear in the test or domain vocabulary. The test code comprises a feature file describing a test scenario and a Java class containing the implementations of the step definitions.
The feature definition
---------------------------------------------------------------------------
Given the decrypting component can break the following secret messages
| encrypted message | decrypted message |
| Alucard | Dracula |
| Donlon | London |
When the message 'Alucard' arrives
Then the decrypted messages repository contains
| decrypted message |
| Dracula           |
---------------------------------------------------------------------------
The step definitions
class StepDefinitions {

    @Given("^the decrypting component can break the following secret messages$")
    public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
        ... // store it in the Decrypter Test Double
    }

    @When("^the message '(\\w+)' arrives$")
    public void whenTheEncryptedMessageArrives(String encryptedMessage) {
        ... // somehow trigger the use case
    }

    @Then("^the decrypted messages repository contains$")
    public void thenTheDecryptedMessagesRepositoryContains(List messages) {
        ... // assert the expected result against the DB Test Double
    }
}
Introducing the TestAgent metaphor
The idea is that instead of putting the test code directly into the class containing the step definitions, we introduce a thin layer of abstraction between the step definitions and their implementations: a so-called TestAgent. Regardless of the name (it could just as well be TestClient, FakeClient, ...), the TestAgent is the explicit manifestation of the concept that the test code is actually a client of your application.
interface TestAgent {
    void givenTheDecryptionCodebookContains(List messagePairDTOs);
    void whenTheEncryptedMessageArrives(String encryptedMessage);
    void thenTheDecryptedMessagesRepositoryContains(List messages);
}
The TestAgent here actually represents three real clients of the application (one method for each), but that's irrelevant for the example. In more complex cases we might consider one TestAgent per client. The updated step definition class would look like this:
class StepDefinitions {

    @Given("^the decrypting component can break the following secret messages$")
    public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
        testAgent.givenTheDecryptionCodebookContains(messagePairDTOs);
    }

    @When("^the message '(\\w+)' arrives$")
    public void whenTheEncryptedMessageArrives(String encryptedMessage) {
        testAgent.whenTheEncryptedMessageArrives(encryptedMessage);
    }

    @Then("^the decrypted messages repository contains$")
    public void thenTheDecryptedMessagesRepositoryContains(List messages) {
        testAgent.thenTheDecryptedMessagesRepositoryContains(messages);
    }
}
Here comes the interesting part. We can create a different implementation of the TestAgent for each layer we want to test.
// testing against the app service
class AppLevelTestAgent implements TestAgent {

    void givenTheDecryptionCodebookContains(List messagePairDTOs) {
        fakeDecrypter.storeForFutureVerification(messagePairDTOs);
    }

    void whenTheEncryptedMessageArrives(String encryptedMessage) {
        EncryptedMessage msg = build(encryptedMessage);
        codeBreakerAppService.breakAndStore(msg);
    }

    void thenTheDecryptedMessagesRepositoryContains(List messages) {
        DecryptedMessage decryptedMessage = inMemoryDecryptedMessageRepository.find(messages.get(0));
        assertEquals(decryptedMessage, ...);
    }
}

// testing against the "black box"
class EndToEndTestAgent implements TestAgent {

    void givenTheDecryptionCodebookContains(List messagePairDTOs) {
        fakeDecrypterBehindTestWSEndpoint.storeForFutureVerification(messagePairDTOs);
    }

    void whenTheEncryptedMessageArrives(String encryptedMessage) {
        WSTransferMessage wsMessage = convertToWSDTOMessage(encryptedMessage);
        wsClient.send(wsMessage);
    }

    void thenTheDecryptedMessagesRepositoryContains(List messages) {
        DecryptedMessage decryptedMessage = realDecryptedMessageRepository.find(messages.get(0));
        assertEquals(decryptedMessage, ...);
    }
}
Ports and Adapters for the test
The test agents should also be responsible for initializing their test doubles, which are the same in role but different in nature depending on the scope.
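As a sketch of what "same in role, different in nature" means for the repository double, here is a minimal in-memory implementation; the interface and its methods (store, contains) are illustrative placeholders, not the real repository port:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical repository port; in production an Oracle-backed adapter implements it.
interface DecryptedMessageRepository {
    void store(String decryptedMessage);
    boolean contains(String decryptedMessage);
}

// The AppLevelTestAgent's double: same role, in-memory nature.
class InMemoryDecryptedMessageRepository implements DecryptedMessageRepository {
    private final List<String> messages = new ArrayList<>();

    @Override
    public void store(String decryptedMessage) {
        messages.add(decryptedMessage);
    }

    @Override
    public boolean contains(String decryptedMessage) {
        return messages.contains(decryptedMessage);
    }
}
```

The EndToEndTestAgent would instead initialize (and clean up) the real DB-backed implementation against the same port.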
|                   | DecryptedMessageRepository   | Decrypter                                                                | Way to trigger the use case   |
| AppLevelTestAgent | in-memory implementation     | fake implementation of the interface                                     | call the app service directly |
| EndToEndTestAgent | real DB-using implementation | a fake service behind a WS endpoint started up by the test configuration | make a web service call       |
That's it, folks. The implementation of the TestAgent can be chosen based on a property (like -Dbddtestscope=applevel if you use Maven), or you can configure your build to run the test suite with each. Since the application-level implementation fakes out all the external dependencies, it's very quick, adding little overhead to the build on top of the end-to-end tests.
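A minimal sketch of how such property-based selection could work; the factory and the two empty agent classes below are placeholders standing in for the real AppLevelTestAgent and EndToEndTestAgent:

```java
// Placeholder agents standing in for the real implementations.
interface TestAgent { /* step implementation methods elided */ }

class AppLevelAgent implements TestAgent { }

class EndToEndAgent implements TestAgent { }

// Picks the TestAgent implementation from a system property,
// e.g. mvn clean install -Dbddtestscope=applevel
class TestAgentFactory {
    static TestAgent create() {
        String scope = System.getProperty("bddtestscope", "endtoend");
        return "applevel".equals(scope) ? new AppLevelAgent() : new EndToEndAgent();
    }
}
```

The step definition class would ask the factory for its TestAgent once, and the rest of the test code stays identical for both scopes.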
Pros and cons
The main argument I see against this approach is that introducing another layer is too much effort. Some even think that using Cucumber already adds unnecessary extra complexity. I disagree. Separating the definition and the implementation of the steps is a good idea in itself, yielding a cleaner code base. The test code is no longer tied to Cucumber: should you choose to switch to, for example, a simple JUnit-based approach, it can be reused without any change. The Cucumber part is simply a layer above it.
Then some may say that we have to write the test code twice. That's not entirely true either. The feature files, the step definitions and the "smart part" of the test code are common. The implementations are the simpler, more mechanical part of writing the tests.
Possible extensions
After we discussed this idea, a colleague pointed out that we might reuse the test code (feature files, step definition files and the TestAgent interface/abstract class) as a base on which to build tests for the front end. It would require a new implementation of the TestAgent, one which uses e.g. Selenium to drive the tests. I don't see any obstacle to packaging the test code in its own jar file, then letting the project using it provide the implementation. I'm eager to see it in practice.