Showing posts with label Hexagonal architecture.

Saturday, 7 December 2013

Domain Services - representing external dependencies in the Domain

For most of us who start learning DDD, Domain Services seem a strange beast at first. The general definition says that logic that "falls between" entities should find its place in a Domain Service. Simple as it sounds, in concrete situations it can be difficult to decide where to put a piece of logic: in the entity, or in a new service? In this post I'd like to show a different, but very frequent use of domain services: the manifestation of external dependencies in the domain. Let's look at an example to make this less abstract. Imagine we are developing a military application comprising multiple, distributed components. Our task is to develop the component that receives an encrypted message from the enemy, decodes it, then sends the decrypted message to the headquarters. Physically there are 3 components in the system: one that processes the decrypted message (headquarters), one that actually does the decrypting, and our component in between. They communicate via web services. Our domain could look like this:

interface EncryptedMessage { ... }
interface DecryptedMessage { ... }

// domain service representing the HeadQuarter component in the system
interface HeadQuarter {
  void send(DecryptedMessage decryptedMessage);
}

// domain service representing the CodeBreaker component in the system
interface CodeBreaker {
  DecryptedMessage breakIt(EncryptedMessage encryptedMessage);
}

// the heart of our simple domain
class EnemyMessageCatcher {
  private HeadQuarter headQuarter;
  private CodeBreaker codeBreaker;

  void captureDecryptAndForward(EncryptedMessage encryptedMessage) {
    DecryptedMessage decryptedMessage = codeBreaker.breakIt(encryptedMessage);
    headQuarter.send(decryptedMessage);
  }
}

// infrastructure layer
class WSBasedHeadQuarter implements HeadQuarter { /* calling some web service */ }
class WSBasedCodeBreaker implements CodeBreaker { /* calling some web service */ }


Both the HeadQuarter and the CodeBreaker are domain services. Although our domain knows nothing about how these functionalities are implemented (whether in other physical components, or just in simple objects), it still knows about the concepts: there is a HeadQuarter that needs to be notified, and there is a CodeBreaker that can decrypt the enemy's messages. That's why the interface (the concept) lives in the domain and the implementation (the details) lives in the infrastructure. In Hexagonal Architecture terminology, the domain service interface is the port, and the implementation is the adapter.
The DDD-savvy reader may notice that the Repository pattern is actually an ordinary domain service. It represents the concept of storing and retrieving objects in the domain, and hides away how exactly that is done. I suppose the only reason for it being a separate pattern is simply that most applications have to deal with persistence.
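To make the parallel concrete, here is a minimal sketch (the interface and names are made up for illustration, and the signatures are simplified to plain strings): the repository is just another port defined in the domain, and an in-memory adapter can stand in for the persistent one.

```java
// Domain layer: the repository is an ordinary domain service interface (a port).
// Names and the String-based signature are illustrative only.
interface MessageRepository {
    void archive(String decryptedMessage);
    boolean contains(String decryptedMessage);
}

// Infrastructure layer: an in-memory adapter; a Mongo- or JDBC-backed one
// would implement the very same interface.
class InMemoryMessageRepository implements MessageRepository {
    private final java.util.Set<String> stored = new java.util.HashSet<>();

    public void archive(String decryptedMessage) {
        stored.add(decryptedMessage);
    }

    public boolean contains(String decryptedMessage) {
        return stored.contains(decryptedMessage);
    }
}
```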

Sunday, 10 November 2013

BDD - choosing the scope of testing

When, at the beginning of our project, we decided to use BDD, with Cucumber as the tool, we had a little debate about the scope of the testing. Questions like these arose:

  • Using DDD terminology, which layer would you choose to test against, the domain, the application, or the infrastructure?
  • If you regard your tests as clients (impersonating real clients) of your application, then what are the boundaries of your SUT (System Under Test)?
  • Where and what are your test doubles?
  • To which ports of the application does the test code bind itself to drive the test cases and verify its assertions?

These four questions basically ask the same thing in different words.

Where are the boundaries?

Testing against the Application layer

The BDD approach of testing directly against the Domain, I think, doesn't make much sense most of the time. The use cases are implemented in the application layer; without its orchestration the domain is pretty useless. Testing against the application layer, on the other hand, is a very attractive approach. All the logic needed to fulfill the requirements of the application is there, and you don't get bogged down in infrastructure details. You can simply stub out the external dependencies like database/messaging/web-service configuration and implementation. Borrowing from the Ports and Adapters terminology, you hang test-stub adapters on your ports and get away with it quickly and elegantly. And the tests, unburdened by IO or network latencies, run very fast. Thus if at one point in the application's life you decide to change the type of the DB, or to use JMS instead of REST, you don't have to change a single line of the test code. But...

End-to-end testing

But those infrastructure details must be there in production. Without real end-to-end tests, where for example your test code actually calls the web-service endpoint of your component and verifies its expectations by querying the database, how can you be sure that the DB really works the way you intended? What if your Camel configuration has a typo, rendering your whole messaging layer useless? You'll never find out until manual testing. With black-box-like end-to-end tests, after a successful "mvn clean install" you can sleep in peace, knowing that whatever you've done, it hasn't broken any existing functionality. The price you pay is that your test suite runs much slower and the test code is tied to the adapters' implementation.

Choosing between the two approaches is a difficult decision, and I've been thinking for a long time about how we could have the best of both worlds. Maybe we can postpone the decision.

Best of both worlds - demonstration by an example

Let's see how a very simple Cucumber test would look against the very simple application from the previous post. In a nutshell, our app receives encrypted messages through a SOAP-based web service, asks another component via REST to break the encryption, then stores the result in an Oracle DB. The words in italics are implementation details and shouldn't appear in the test or domain vocabulary. The test code comprises a feature file describing a test scenario and a Java class containing the implementations of the step definitions.

The feature definition
---------------------------------------------------------------------------
Given the decrypting component can break the following secret messages
| encrypted message | decrypted message |
| Alucard           | Dracula           |
| Donlon            | London            |

When the message 'Alucard' arrives

Then the decrypted messages repository contains
| decrypted message |
| Dracula           |
---------------------------------------------------------------------------

The step definitions

class StepDefinitions {
   @Given("^the decrypting component can break the following secret messages")
   public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
      ... //store it in the Decrypter Test Double
   }
   @When("^the message '(\\w+)' arrives")
   public void whenTheEncryptedMessageArrives(String encryptedMessage) {
      ... // somehow trigger the use case
   }
   @Then("the decrypted messages repository contains") 
   public void thenTheDecryptedMessagesRepositoryContains(List messages) {
      ... // assert the expected result against the DB Test Double
   }
}

Introducing the TestAgent metaphor

The idea is that instead of putting the test code directly into the class containing the step definitions, we introduce a thin layer of abstraction between the step definitions and their implementations: the so-called TestAgent. Regardless of the name (I guess it could be TestClient, FakeClient, ...), the TestAgent is the explicit manifestation of the concept that the test code is actually a client of your application.

interface TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs);
  void whenTheEncryptedMessageArrives(String encryptedMessage);
  void thenTheDecryptedMessagesRepositoryContains(List messages);
}

The TestAgent actually represents 3 real clients of the application (one method for each), but that's irrelevant for the example. In more complex cases we might consider one agent per client. So the updated step definition class would look like this:

class StepDefinitions {
   @Given("^the decrypting component can break the following secret messages")
   public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
      testAgent.givenTheDecryptionCodebookContains(messagePairDTOs);
   }
   @When("^the message '(\\w+)' arrives")
   public void whenTheEncryptedMessageArrives(String encryptedMessage) {
      testAgent.whenTheEncryptedMessageArrives(encryptedMessage);
   }
   @Then("the decrypted messages repository contains") 
   public void thenTheDecryptedMessagesRepositoryContains(List messages) {
      testAgent.thenTheDecryptedMessagesRepositoryContains(messages);
   }
}

Here comes the interesting part. We can create different implementations of the TestAgent, one for each layer we want to test.

// testing against the app service
class AppLevelTestAgent implements TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs) {
     fakeDecrypter.storeForFutureVerification(messagePairDTOs);
  }
  void whenTheEncryptedMessageArrives(String encryptedMessage) {  
     EncryptedMessage msg = build(encryptedMessage);
     codeBreakerAppService.breakAndStore(msg);
  }
  void thenTheDecryptedMessagesRepositoryContains(List messages) {
       DecryptedMessage decryptedMessage = inMemoryDecryptedMessageRepository.find(messages.get(0));
        assertEquals(decryptedMessage, ...);
  }
}
// testing against the "black box"
class EndToEndTestAgent implements TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs) {
       fakeDecrypterBehindTestWSEndpoint.storeForFutureVerification(messagePairDTOs);
  }
  void whenTheEncryptedMessageArrives(String encryptedMessage) {
       WSTransferMessage wsMessage = convertToWSDTOMessage(encryptedMessage);
       wsClient.send(wsMessage); 
  }
  void thenTheDecryptedMessagesRepositoryContains(List messages) {
       DecryptedMessage decryptedMessage = realDecryptedMessageRepository.find(messages.get(0));
        assertEquals(decryptedMessage, ...);
  }
}

Ports and Adapters for the test

The test agents should also be responsible for initializing their test doubles, which are the same in role but different in nature depending on the scope.

|                   | DecryptedMessageRepository   | Decrypter                                                                | Way to trigger the use case   |
| AppLevelTestAgent | in-memory implementation     | fake implementation of the interface                                     | call the app service directly |
| EndToEndTestAgent | real DB-using implementation | a fake service behind a WS endpoint started up by the test configuration | make a web service call       |
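As an illustration, the Decrypter test double could be as simple as the sketch below (simplified to plain strings; the real interface works on EncryptedMessage/DecryptedMessage objects, and the method names here are assumptions). Its codebook is filled by the Given step and consulted instead of calling the real decrypting component.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the Decrypter port; the production adapter would call
// the remote decrypting component instead.
interface Decrypter {
    String decrypt(String encryptedMessage);
}

// The test double: answers from a codebook filled by the Given step.
class FakeDecrypter implements Decrypter {
    private final Map<String, String> codebook = new HashMap<>();

    // called from givenTheDecryptionCodebookContains(...)
    void storeForFutureVerification(String encrypted, String decrypted) {
        codebook.put(encrypted, decrypted);
    }

    public String decrypt(String encryptedMessage) {
        return codebook.get(encryptedMessage);
    }
}
```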

That's it, folks. The implementation of the TestAgent can be chosen based on a property (like -Dbddtestscope=applevel if you use Maven), or you can configure your build to run the test suite once for each. Since the application-level implementation fakes out all the external dependencies, it's very quick, adding little overhead to the build on top of the end-to-end tests.
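A minimal sketch of that property-based selection (the property name bddtestscope and its default value are assumptions; the empty stand-in classes are only there to make the snippet self-contained):

```java
// Stand-ins so the sketch compiles on its own; in the real test code these
// are the TestAgent interface and implementations shown above.
interface TestAgent { }
class AppLevelTestAgent implements TestAgent { }
class EndToEndTestAgent implements TestAgent { }

class TestAgentFactory {
    // Selects the implementation from a system property,
    // e.g. mvn verify -Dbddtestscope=endtoend
    static TestAgent create() {
        String scope = System.getProperty("bddtestscope", "applevel");
        return "endtoend".equals(scope) ? new EndToEndTestAgent()
                                        : new AppLevelTestAgent();
    }
}
```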

Pros and contras

The main argument I see against the approach is that introducing another layer is too much effort. Some even think that using Cucumber already adds unnecessary extra complexity. I disagree. Separating the definition and the implementation of the steps is a good idea on its own, yielding a cleaner code base. The test code is no longer tied to Cucumber: should you choose, for example, a simple JUnit-based approach, it can be reused without any change. The Cucumber part is simply a layer above it.
Some may also say that we have to write the test code twice. That's not entirely true either. The feature files, the step definitions and the "smart part" of the test code are common. The implementations are the simpler, more mechanical part of writing the tests.

Possible extensions

After we discussed this idea, a colleague pointed out that we might reuse the test code (feature files, step definition files and the TestAgent interface/abstract class) as a base for building up tests for the front end. It would require a new implementation of the TestAgent, one that uses e.g. Selenium to drive the tests. I don't see any obstacle to packaging the test code in its own jar file, then letting the project that uses it provide the implementation. I'm eager to see it in practice.

Saturday, 2 November 2013

DDD and Hexagonal architecture

Two years ago I had the pleasure of starting to use DDD at my workplace, and since then I can hardly imagine developing software without it (long-haunting experiences from previous projects might have something to do with this). There is a lot to love here: Entities, Value Objects, the Repository pattern, Aggregates, Bounded Contexts, Ubiquitous Language, Anti-corruption Layers, ... But for me the most valuable part is the idea of placing the Domain in the heart of the application and building everything around it, as opposed to the traditional layering theory, where the UI is at the top, the Domain is in the middle, and everything lies on the DB at the bottom.

----------------------------------
UI Layer
----------------------------------
Business Logic Layer
----------------------------------
Persistence Layer
----------------------------------

This idea, placing the database and the data model at the center, has proved to be very harmful, often resulting in anemic objects and procedural code. It also goes against one of the most fundamental concepts in OO design, the Dependency Inversion Principle: high-level modules should not depend on low-level modules. What's so special about databases anyway? What if our application, instead of using a DB directly, has to cooperate with a legacy system, storing its data through web-service calls to that system? And what if, before it "stores" the data, it also communicates with other components through web services? The point is that if we follow the "traditional" layering model, we have to assign the first kind of WS call to the Persistence Layer, but the second to the BLL. The distinction is simply arbitrary and contrived.
What DDD does, using an onion-like layering structure instead of the vertical, one-dimensional one, is what Alistair Cockburn proposed as Hexagonal Architecture, or Ports and Adapters Architecture, even before DDD appeared on the scene.



Please follow the link before reading further. It could completely change the way you think about software. Funnily enough, despite the idea being around for almost 10 years by now, I have yet to see a nice example of it on the net. So I'd like to fill that gap now.

Hexagonal (Ports and Adapters) Architecture example

Instead of the usual and boring Pet Clinic or ordering application, I chose an unlikely, but at least more interesting theme. Let's build an application that receives captured secret messages from the enemy, asks another component to break them, then stores the decrypted messages. Here are the classes:
// infrastructure layer
class MessageListener {
    void handleMessage(String jsonMessage) {
        EncryptedMessage encryptedMessage = getEncryptedMessageBuilder().build(jsonMessage);
        getCodeBreakerAppService().breakAndStore(encryptedMessage);
    }
}

class WSBasedDecrypter implements Decrypter {
    // call a WS to do the work
}

class MongoDecryptedMessageRepository implements DecryptedMessageRepository {
    // store stuff in Mongo
}

// application layer
class CodeBreakerAppService {
    void breakAndStore(EncryptedMessage encryptedMessage) {
        authenticationCheck();
        startTransaction();
        getCodeBreakerAndArchiver().breakAndArchive(encryptedMessage);
        endTransaction();
    }
}

// domain layer
class CodeBreakerAndArchiver {
    private Decrypter decrypter;
    private DecryptedMessageRepository decryptedMessageRepository;

    void breakAndArchive(EncryptedMessage encryptedMessage) {
        DecryptedMessage decryptedMessage = decrypter.decrypt(encryptedMessage);
        decryptedMessageRepository.archive(decryptedMessage);
    }
}

// ("break" is a reserved word in Java, hence "decrypt")
interface Decrypter {
    DecryptedMessage decrypt(EncryptedMessage encryptedMessage);
}

interface DecryptedMessageRepository {
    void archive(DecryptedMessage decryptedMessage);
}

Imagine the app is listening to a message broker, like ActiveMQ. We have a MessageListener instance configured to listen on a JMS queue. For the sake of simplicity, imagine that the incoming messages are simple JSON strings. So, assuming you've read the article, you can surely identify the ports and adapters of this app. As a reminder, a port is where our application interacts with the external world, sitting right on the boundary of the application. We have 3 in our app.

1. The breakAndStore public method on CodeBreakerAppService. This is a driving (primary) port, called (indirectly) by some external actor: the entry point of our app. The adapter that transforms the message into something the port can understand is the MessageListener. I said indirectly, because the message has to go through some integration layer (the JMS listener mechanism here), then through the adapter, which eventually passes it to the port.

2. The Decrypter interface. This is a driven (secondary) port that is called by the domain, triggering some effect on the external world. It is implemented by an adapter, in this case the WSBasedDecrypter.

3. The DecryptedMessageRepository interface, a similar driven port, implemented by the MongoDecryptedMessageRepository as the adapter.

Now let's imagine that another team in our company wants to use our app to spy on their own enemies, but they abhor NoSQL (shame on them) and want to store the decrypted messages in Oracle. No problem at all. We only have to create a new implementation of the DecryptedMessageRepository interface, say a JDBCDecryptedMessageRepository, and we can plug it into the domain at runtime if needed. Or if you don't want to store the messages, but rather spit them out on the fly to the screen, you can create another implementation that sends messages to the UI (although Repository may not be the most appropriate name for it anymore). The point is, there is no up (UI) and down (DB) here. Just an outer layer of the onion wrapping around the core. The adapters are responsible for the details.
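A sketch of what that Oracle-friendly adapter could look like with plain JDBC (the table and column names are made up for illustration, and the domain types are simplified stand-ins):

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Simplified domain types so the sketch is self-contained
class DecryptedMessage {
    final String text;
    DecryptedMessage(String text) { this.text = text; }
}

interface DecryptedMessageRepository {
    void archive(DecryptedMessage decryptedMessage);
}

// Same port, different adapter: the domain never learns whether it talks
// to Mongo or to Oracle. Table and column names are hypothetical.
class JDBCDecryptedMessageRepository implements DecryptedMessageRepository {
    private final DataSource dataSource;

    JDBCDecryptedMessageRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void archive(DecryptedMessage decryptedMessage) {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(
                     "INSERT INTO DECRYPTED_MESSAGES (TEXT) VALUES (?)")) {
            statement.setString(1, decryptedMessage.text);
            statement.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException("Could not archive message", e);
        }
    }
}
```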
Then the Product Owner says we need to open up the app not only to JMS but to REST as well. So let's configure the MessageListener as a REST endpoint too, or create a new class for that. Our domain (and application layer) is intact; it remains independent of all these details, and not a single line of it needs to change.
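Such a second driving adapter could look like the sketch below (framework annotations omitted; in practice this would be e.g. a JAX-RS or Spring MVC controller, and the app service is reduced to a stand-in to keep the snippet self-contained). The point is that it calls the very same breakAndStore entry point the JMS listener calls:

```java
// Stand-in for the real application service; breakAndStore just records
// the message here so the sketch can be exercised.
class CodeBreakerAppService {
    String lastHandled;
    void breakAndStore(String encryptedMessage) { lastHandled = encryptedMessage; }
}

// A second driving adapter: the web framework would map an HTTP request
// (e.g. POST /messages) to post(), and the adapter hands the payload to
// the same port the JMS-based MessageListener uses.
class RestMessageEndpoint {
    private final CodeBreakerAppService appService;

    RestMessageEndpoint(CodeBreakerAppService appService) {
        this.appService = appService;
    }

    void post(String jsonMessage) {
        appService.breakAndStore(jsonMessage);
    }
}
```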

This architecture style is so simple and elegant that I can't understand why Hexagonal Architecture hasn't become a household name by now. And it hasn't. Most often, when I mention it to other developers, I meet blank faces. Sometimes it rings a bell, but I have yet to meet anyone saying, "yeah, it's cool and we use it all the time".

Another interesting thing is that, I think, if you follow DIP, you can't help ending up with this. It just grows out of a very simple principle. That's it for now.