Tuesday, 26 November 2013

Persisting the Domain Objects - 2

Before continuing from the previous post, let's have a quick reminder of our example. We have a Domain Object, and for persistence we've decided to create a DTO and a DO<->DTO mapper class in the ACL to gain some flexibility and make our Domain more independent from the underlying data structure and persistence solution. Persistence Independence is a good thing, anyway.

//the domain object
public class Bonus {
    private DateTime collectedAt;
    // type can be "EXTRA_CHIPS" or "JAZZ_NIGHT"
    private final String type;
    private final long recurrencyIntervalInMilliseconds;
    public Bonus(String type, long recurrencyIntervalInMilliseconds) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = recurrencyIntervalInMilliseconds;
    }
    public boolean isCollectable() {
        return DateTime.now()
          .isAfter(collectedAt.plus(recurrencyIntervalInMilliseconds));
    }
    public void collect() {
        if (!isCollectable()) { throw new BonusPrematureCollectionAttemptException(); }
        collectedAt = DateTime.now(); //record the collection time
        DomainEventDispatcher.dispatch(new BonusCollectedEvent(this));
    }
}
//the persistent object (DTO) in the ACL (infrastructure layer)
@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus implements Serializable {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
//the mapper between the DO and DTO in the ACL
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus bonus) { ... }  
}

Earlier we saw that in some specific cases, like rolling releases, this separation between our DOs and persistence DTOs comes in very handy. Having said that, let's see how the mapping really works.

The trick - encapsulation gets in the way?

The mapping between PersistentBonus and Bonus takes place in the anti-corruption layer. The trick is: since Bonus, as a well-behaving rich Domain Object, doesn't expose its internal structure, how can we do the mapping at all? No other class knows about its private fields, let alone accesses them. In what follows I explore some possible ways, from trivial and not-so-ideal solutions to some more sophisticated ones.

Solution 1 - Add getters and setters to the DO and get it over with

Plainly exposing the internal structure of an object is a major no-no in DDD and OO in general. It breaks encapsulation, increases coupling, hurts extensibility, and so on. We've taken pains to hide the internals of Bonus; we shouldn't nullify that effort by making them accessible. Forget it if possible.

Solution 2 - Separating interface and implementation of the Domain Object

Using an interface is usually a good idea regardless of our current problem. Let's do that and add an additional Factory class.

public interface Bonus {
    boolean isCollectable();
    void collect(); 
}
public class BonusImpl implements Bonus {
   //all the stuff from before
   // getters-setters
}
//the only class in the domain directly seeing BonusImpl
public class BonusFactory {
    Bonus createRecurrentBonus(String type, long recurrencyIntervalInMilliseconds) { ... }
}
// the mapper class in the anti-corruption layer
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus bonus) { ... }
}

Now we can arrange our code so that the rest of the Domain (everything but the BonusFactory) accesses the object strictly through the interface, while BonusImpl is used only by the BonusFactory and the BonusPersistenceAssembler. Although BonusImpl has getters and setters, they are still shielded quite well from the rest of the Domain. I have two minor problems with this solution. One is the explicit cast from Bonus to BonusImpl needed in BonusPersistenceAssembler.toPersistence; the other is that, at the end of the day, refraining from using BonusImpl directly in the Domain is entirely up to the discipline of the developers. Sometimes casting Bonus to BonusImpl might seem the easy way to achieve something, and a lazy fellow may fail to resist the temptation. Let's see what we can do about it.
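To make the first problem concrete, here is a minimal sketch of what that unavoidable downcast looks like inside the assembler (the getter names and the stubbed-out domain logic are my assumptions, not code from the project):

```java
// Solution 2 in miniature: the DTO can only be filled by downcasting
// the Bonus interface to the implementation class, because only the
// implementation exposes the getters.
interface Bonus {
    boolean isCollectable();
    void collect();
}

class BonusImpl implements Bonus {
    private final String type;
    private final long recurrencyIntervalInMilliseconds;

    BonusImpl(String type, long recurrencyIntervalInMilliseconds) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = recurrencyIntervalInMilliseconds;
    }
    public boolean isCollectable() { return true; } // domain logic elided
    public void collect() {}
    String getType() { return type; }
    long getRecurrencyIntervalInMilliseconds() { return recurrencyIntervalInMilliseconds; }
}

class PersistentBonus {
    String type;
    long recurrencyIntervalInMilliseconds;
}

class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) {
        BonusImpl impl = (BonusImpl) bonus; // the cast we'd like to avoid
        PersistentBonus dto = new PersistentBonus();
        dto.type = impl.getType();
        dto.recurrencyIntervalInMilliseconds = impl.getRecurrencyIntervalInMilliseconds();
        return dto;
    }
}
```

Note that the cast throws a ClassCastException at runtime if someone ever hands the assembler a different Bonus implementation, which is exactly the kind of fragility discussed above.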

Solution 3 - "Normal" interface + View interface + Factory + package-private implementation class

The first idea is to put BonusImpl and BonusFactory in a new package, and change the visibility of BonusImpl to package-private. Now it's hidden even from the rest of the Domain, but alas, from the BonusPersistenceAssembler, too. Then comes the next idea. Create a new interface, BonusView, which captures only the view of the "data" in Bonus, and pass a BonusView to the BonusPersistenceAssembler. Only this class should use the BonusView. In code:

public interface Bonus {
    boolean isCollectable();
    void collect(); 
}
public interface BonusView {
    long getRecurrencyIntervalInMilliseconds();
    String getType(); 
    DateTime collectedAt(); 
}
//package-private
class BonusImpl implements Bonus, BonusView {
   //all the stuff from before
   // getters-setters
}
//the only class in the domain directly seeing BonusImpl. They are in the same package
public class BonusFactory {
    Bonus createRecurrentBonus(String type, long recurrencyIntervalInMilliseconds) { ... }
}
// the mapper class in the anti-corruption layer
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(BonusView bonusView) { ... }
    //uses the BonusFactory
    Bonus toDomain(PersistentBonus bonus) { ... }
}

Better. We still have to cast from Bonus to BonusView before we pass the reference to the BonusPersistenceAssembler, but the ugly getters and setters are nicely confined to a small nook of the code.

Solution 3.1 - What about the invariants? - static factory method

However, there is still one possible problem to address. If Bonus has invariants (not in our simple example, but we are discussing the general idea), having an individual setter for each field might be undesirable, since that leaves room for putting the object into a state that violates its invariants. In this case the BonusFactory can use a static factory method, passing in all the fields together (or at least the ones that have to be set together to preserve an invariant).

class BonusImpl {
    public static Bonus reinstantiate(String type, long recurrencyIntervalInMilliseconds, DateTime collectedAt) { ... }
    // other stuff
}

Finally we got rid of the setters. The method is called reinstantiate, suggesting to the developer that it recreates the DO from something; conceptually that's very different from simply providing the means to set its fields individually. The drawback is that the factory method gets bloated if the class has more than a couple of fields.
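A quick sketch of how such a reinstantiate method can validate all fields together before the object comes to life. Epoch milliseconds stand in for the post's DateTime to keep the example self-contained, and the positive-interval rule is a made-up invariant for illustration:

```java
// A static factory method restores the whole state in one call, so the
// invariant check can see all the fields at once - something individual
// setters cannot guarantee.
class Bonus {
    private final String type;
    private final long recurrencyIntervalInMilliseconds;
    private long collectedAtMillis; // epoch millis, standing in for DateTime

    private Bonus(String type, long interval, long collectedAtMillis) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = interval;
        this.collectedAtMillis = collectedAtMillis;
    }

    public static Bonus reinstantiate(String type, long interval, long collectedAtMillis) {
        // hypothetical invariant: the interval must be positive
        if (interval <= 0) {
            throw new IllegalArgumentException("recurrency interval must be positive");
        }
        return new Bonus(type, interval, collectedAtMillis);
    }

    public boolean isCollectable() {
        return System.currentTimeMillis() > collectedAtMillis + recurrencyIntervalInMilliseconds;
    }
}
```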

Moral of the story: getters and setters are troublemakers

It looks like we can work around the problem in all kinds of smart ways, but the result is always a compromise, and we wouldn't need to do anything at all if not for those getters and setters. Let's see how far we've come to mitigate the problem: we started with a simple class and ended up with two additional interfaces, a factory class, a static factory method and package-private visibility. I'm starting to doubt whether it's worth the effort. Let's see a different approach inspired by the static factory method.

Solution 4 - Memento

Instead of passing all the fields to the reinstantiate method, we can create a new class to wrap them up. Let's call it BonusMemento, because it's like a footprint of the Bonus object. It doesn't have any logic, only the data, like the PersistentBonus, just without the JSON annotations, the Serializable interface and the other technology-related stuff. Unlike the PersistentBonus, the BonusMemento is part of the Domain, even if it's only used to make persistence easier. You can think of it as the skeleton of the Bonus. Just the bones, no brain.

//the original Bonus class, not the interface
public class Bonus {
    public static Bonus reinstantiate(BonusMemento bonusMemento) { ... }
    // other stuff
}
public class BonusMemento {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;

    //getters-setters for each field
}

No more bloated factory method. No setters either, only getters. But wait! Why can't we use the BonusMemento to replace the getters, too? Instead of exposing its fields one by one through getters, the domain object can be responsible for creating its own Memento.

public class Bonus {
    public static Bonus reinstantiate(BonusMemento bonusMemento) { ... }
    public BonusMemento toMemento() { ... } 
    //all the stuff from before
}
// the mapper class in the anti-corruption layer
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(BonusMemento bonusMemento) { ... }
    Bonus toDomain(PersistentBonus bonus) { 
        BonusMemento bonusMemento = convertFrom(bonus);
        return Bonus.reinstantiate(bonusMemento); 
    }
}

Not bad. We are back to the original Bonus class + a static factory method + a Memento class. Encapsulation preserved, no bothersome compromises. Well, maybe one. The BonusMemento and the PersistentBonus are almost the same, which is a bit of code duplication. We cannot use PersistentBonus directly in the Bonus class, because the former belongs to the infrastructure layer. But the PersistentBonus can extend BonusMemento, inheriting its content and keeping only the infrastructure-specific part of its former self.

public class BonusMemento {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus extends BonusMemento implements Serializable {}

//so what is inside the mapper
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(BonusMemento memento) {
       PersistentBonus ps = new PersistentBonus();
       ps.setType(memento.getType());
       //set values for the other fields
       return ps;
    }
    Bonus toDomain(PersistentBonus persistentBonus) {
        return Bonus.reinstantiate(persistentBonus);
    }
}

The toPersistence method of BonusPersistenceAssembler simply copies the content of the Memento to the PersistentBonus field by field. The toDomain is even simpler: since PersistentBonus is a subclass of BonusMemento, we can simply pass it to Bonus.reinstantiate. Awesome.
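Filled in, the Memento round trip might look like the following sketch. Epoch milliseconds stand in for the post's DateTime so the example is self-contained; the field names follow the post's Bonus:

```java
// The Bonus creates its own Memento and can be recreated from one, so
// no getters or setters leak into the Domain Object's public interface.
class BonusMemento {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private long collectedAtMillis;

    public String getType() { return type; }
    public void setType(String type) { this.type = type; }
    public long getRecurrencyIntervalInMilliseconds() { return recurrencyIntervalInMilliseconds; }
    public void setRecurrencyIntervalInMilliseconds(long v) { recurrencyIntervalInMilliseconds = v; }
    public long getCollectedAtMillis() { return collectedAtMillis; }
    public void setCollectedAtMillis(long v) { collectedAtMillis = v; }
}

class Bonus {
    private final String type;
    private final long recurrencyIntervalInMilliseconds;
    private long collectedAtMillis;

    private Bonus(String type, long interval, long collectedAt) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = interval;
        this.collectedAtMillis = collectedAt;
    }

    public static Bonus reinstantiate(BonusMemento m) {
        return new Bonus(m.getType(), m.getRecurrencyIntervalInMilliseconds(), m.getCollectedAtMillis());
    }

    public BonusMemento toMemento() {
        BonusMemento m = new BonusMemento();
        m.setType(type);
        m.setRecurrencyIntervalInMilliseconds(recurrencyIntervalInMilliseconds);
        m.setCollectedAtMillis(collectedAtMillis);
        return m;
    }
}
```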

Solution 5 - Visitor

I've been playing with the idea of using the Visitor pattern to retrieve the values of the fields from the Domain Object.

public class Bonus {
    public void buildView(BonusVisitor visitor) { 
        visitor.setType(this.type);
        visitor.setRecurrencyIntervalInMilliseconds(this.recurrencyIntervalInMilliseconds);
        visitor.setCollectedAt(this.collectedAt);
    } 
    //all the stuff from before
}
//in the Domain
public interface BonusVisitor {
   //setters
}
//in the infrastructure
public class PersistenceBonusVisitor implements BonusVisitor {
    private final PersistentBonus persistentBonus;
    public PersistenceBonusVisitor(PersistentBonus persistentBonus) { this.persistentBonus = persistentBonus; }
    // in the setters, call the setters of PersistentBonus
}

The Visitor pattern suits situations like this, where we want an object to keep control over what it shares about its internals. But at the end of the day the Mementos already give us that in a simpler form, so I've decided to stick with them.

Solution 6 - Using reflection

To be frank, I have never tried using reflection for this. There are some tools available, like Dozer, which promise painless mapping between DOs and DTOs. A colleague of mine told me about his team's experience with Dozer: they spared themselves the time of writing mapper classes, but they still needed to create the DTOs, the DOs had to have getters and setters, and they still wanted to test that the mapping was correct, so they couldn't spare the tests for the mappers. And performance-wise, reflection-based solutions are always heavier. Having said that, I still want to explore Dozer (or something alike) one day.
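To give a feel for the idea behind such tools (without claiming this is how Dozer works internally), here is a toy reflection-based mapper in plain java.lang.reflect that copies same-named fields between two objects. The BonusData/PersistentBonusData classes are made up for the demonstration:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

// A toy reflection-based mapper: instantiate the target class and copy
// every field of the source into the same-named field of the target.
// Real tools add type conversion, nesting and configuration; this
// sketch only shows why no hand-written mapper code is needed.
class ReflectionMapper {
    static <T> T map(Object source, Class<T> targetClass) {
        try {
            Constructor<T> ctor = targetClass.getDeclaredConstructor();
            ctor.setAccessible(true);
            T target = ctor.newInstance();
            for (Field sourceField : source.getClass().getDeclaredFields()) {
                sourceField.setAccessible(true);
                Field targetField = targetClass.getDeclaredField(sourceField.getName());
                targetField.setAccessible(true);
                targetField.set(target, sourceField.get(source));
            }
            return target;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("mapping failed", e);
        }
    }
}

// illustration classes, matching field names on purpose
class BonusData { String type = "EXTRA_CHIPS"; long recurrencyIntervalInMilliseconds = 1000L; }
class PersistentBonusData { String type; long recurrencyIntervalInMilliseconds; }
```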

Summary

In this post we started from a simple example and explored a couple of solutions to the problem. In the end I would stick with the Memento one. Basically it's derived from the idea of using a static factory method instead of setters. Since we don't like long parameter lists, we introduced a new class, the Memento, and then realized we can use it instead of getters, too. As an additional bonus (no pun intended), we can even derive the persistence class from it. That's it.

Sunday, 17 November 2013

Persisting the Domain Objects - 1

This post is about the not-so-trivial, but omnipresent chore of persistence in a DDD project. Let's plunge right into the middle with an example. We have to develop a submodule in our imaginary New York-based online Casino application to reward players with bonuses. The Product Owner's wish is that players can collect tickets to Jazz concerts in the local Royal Albert Hall once a fortnight (first bonus type) and 50 extra chips once a day (second bonus type). In the architecture the client and the server have an agreement that the client can check the "collectability" of a bonus, and can collect it. After a bit of pondering we come up with the following class to represent the bonus.

public class Bonus {
    private DateTime collectedAt;
    // type can be "EXTRA_CHIPS" or "JAZZ_NIGHT"
    private final String type;
    private final long recurrencyIntervalInMilliseconds;
    public Bonus(String type, long recurrencyIntervalInMilliseconds) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = recurrencyIntervalInMilliseconds;
    }
    public boolean isCollectable() {
        return DateTime.now()
          .isAfter(collectedAt.plus(recurrencyIntervalInMilliseconds));
    }
    public void collect() {
        if (!isCollectable()) { throw new BonusPrematureCollectionAttemptException(); }
        collectedAt = DateTime.now(); //record the collection time
        DomainEventDispatcher.dispatch(new BonusCollectedEvent(this));
    }
}

Although the code could be improved (for the sake of brevity I've completely skipped how a Bonus is tied to a Player, as it's irrelevant for us now), we are quite satisfied with it. It's very object-oriented. It completely hides what the conditions of collectability are. The PO can come up with totally different kinds of collectable bonuses (ones that depend on some previous achievement of the player, or ones that apply only to Irish players, whatever), and the interface of the class won't change. So, next step: how to persist it?

Do we need a PersistentBonus class?

Let's assume we use Mongo, which stores its data in JSON format. In this case the object to be persisted has to have getters and setters for each field and has to implement the Serializable interface (a usual requirement for a JSON-serializing library, like Jackson). It might even need some library-specific annotations. Definitely the kind of thing we don't want to press into our Domain Object. It would mean letting infrastructure details creep into the Domain. The solution is to create Data Transfer Objects for our Domain Objects to capture all the needs of the chosen persistence solution, transform the DO to a DTO when persisting it, and go the other way around when it's reinstantiated from the DB. The DTO and the mapper in the anti-corruption layer would look something like

@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus implements Serializable {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus bonus) { ... }  
}

If we use an ORM solution, like Hibernate, we might not even need the mandatory getters and setters or the Serializable interface; a couple of annotations or some XML configuration can do the trick. It's tempting not to bother with DTOs at all. But separating the persistence code from the domain has one big advantage.

Can you do a rolling release if there are data structure changes in DB?

It makes data structure changes in the DB possible without downtime. Imagine we've chosen to go without DTOs, our application went live a while ago in New York, and we already have thousands of Bonus entries in our DB. Then, as the company grows and gains territory, it decides to launch in other states, and the PO decides that the Extra Chips and the Jazz Night could mean different things in different states. From now on, type and state together characterize a Bonus. After a bit of thinking we figure out that the slightly changed Domain would be better served by a slightly changed Bonus class

public class Bonus {
    //instead of String type;
    private final BonusCategory category;
    //old stuff
}
public class BonusCategory {
    private final String type;
    private final State state;
    // constructor and getters
}

Very neat, but it's no longer compatible with the entries in the DB. We could execute a DB patch converting the data to the new format (changing type to bonusCategory and using NEW_YORK as the state for every row), then deploy the new version of the Casino, but we can't do that without downtime. If the patch runs first, the new data breaks the old Casino before we deploy the new one (unknown field bonusCategory, and the expected type is missing); if the app is deployed first, it breaks immediately on the old format (unknown field type, and the expected bonusCategory is missing). The management wants a rolling release. No downtime. If we have a separate class for persistence and a mapper in the ACL, it's possible, even easy. We make some minor changes to the DTO and the mapper to accommodate both data formats.

@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus implements Serializable {
    private String type;
    //new field
    private String state;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus persistentBonus) { 
       BonusCategory bonusCategory = getBonusCategory(persistentBonus);
       // inject bonusCategory into the Bonus and the other stuff        
    } 
    private static BonusCategory getBonusCategory(PersistentBonus persistentBonus) {
       if (persistentBonus.getState() == null) {
          //old data, belongs to New York
          return new BonusCategory(persistentBonus.getType(), State.NEW_YORK);
       } else {
          return new BonusCategory(persistentBonus.getType(), State.valueOf(persistentBonus.getState()));
       }
    }
}

Done. Now our app can handle both the new and the old format of the data. We can deploy it, then execute the DB patch. In the next release we can remove the persistentBonus.getState() == null check from the code entirely. This is called intermediate code.
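For illustration, once the patch has converted every row, the next release's mapper shrinks to a one-liner. This is a self-contained sketch: the state is kept as a plain String and the BonusCategoryMapper name is mine; the real code would use the State type and the assembler from the post:

```java
// DTO as in the post, reduced to the two fields that matter here
class PersistentBonus {
    private String type;
    private String state;

    public String getType() { return type; }
    public void setType(String type) { this.type = type; }
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }
}

class BonusCategory {
    private final String type;
    private final String state;
    BonusCategory(String type, String state) { this.type = type; this.state = state; }
    String getType() { return type; }
    String getState() { return state; }
}

// After the intermediate release every DB entry carries a state,
// so the null check can simply be deleted.
class BonusCategoryMapper {
    static BonusCategory getBonusCategory(PersistentBonus persistentBonus) {
        return new BonusCategory(persistentBonus.getType(), persistentBonus.getState());
    }
}
```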

Pros and cons

Hopefully I've managed to make the point of why separating persistence DTOs and DOs can be very useful, even if it requires a bit more code. I know using the Domain POJOs directly in persistence (as Hibernate offers) and in messages between the server and the client is very tempting, and yields a clean and lean codebase with a relatively small number of classes. Writing an ACL with all the DTOs and mappers is usually tedious monkey-work. As with almost everything in software development, the decision is a trade-off. Does your application need this level of independence of the Domain from the data so much that you are willing to go the extra mile?

What's next

If the answer is yes, then you still need to spend some time contemplating how to implement the idea. Unfortunately the DO<->DTO mapping is not as trivial as it seems at first. In the next post I'll explore what difficulties this approach brings and how we can overcome them. Stay tuned.

Sunday, 10 November 2013

BDD - choosing the scope of testing

When at the beginning of our project we decided to use BDD, with Cucumber as the tool, we had a little debate about the scope of the testing. Similar questions arose:

  • Using DDD terminology, which layer would you choose to test against, the domain, the application, or the infrastructure?
  • If you regard your tests as clients (impersonating real clients) of your application, then what would be the boundaries of your SUT (System Under Test)?
  • Where and what are your test doubles?
  • To which ports of the application does the test code bind itself to drive the test cases and verify its assertions?

These four questions basically ask the same thing in different words.

Where are the boundaries?

Testing against the Application layer

The BDD approach of testing directly against the Domain, I think, doesn't make much sense most of the time. The Use Cases are implemented in the application layer; without its orchestration the Domain is pretty useless. Testing against the application layer, on the other hand, is a very attractive approach. All the logic to fulfill the requirements of the application is there, and you aren't bogged down in infrastructure details. You can simply stub out the external dependencies like the database/messaging/web-service configuration and implementation. Borrowing from the Ports and Adapters terminology, you hang test-stub adapters on your ports and get away with it quickly and elegantly. And the tests, unburdened by IO or network latencies, run very fast. Thus if at some point in the application's life you decide to change the type of the DB, or to use JMS instead of REST, you don't have to change a single line of the test code. But...

End-to-end testing

But those infrastructure details must be there in production. Without real end-to-end tests, where for example your test code actually calls the web-service endpoint of your component and verifies its expectations by querying the database, how can you be sure that the DB really works the way you intended? What if your Camel configuration has a typo, rendering your whole messaging layer useless? You'll never find out until manual testing. With black-box-like end-to-end tests, after a successful "mvn clean install" you can sleep in peace, knowing that whatever you've done, it hasn't broken any existing functionality. The price you pay is that your test suite runs much slower and the test code is tied to the Adapters' implementation.

Choosing between the two approaches is a difficult decision, and I've been thinking for a long time about how we could have the best of both worlds. Maybe we can postpone the decision.

Best of both worlds - demonstration by an example

Let's see how a very simple Cucumber test would look against the very simple application from the previous post. In a nutshell, our app receives encrypted messages through a SOAP-based web service, asks another component via REST to break the encryption, then stores the result in an Oracle DB. The words in italics are implementation details and shouldn't appear in the test or domain vocabulary. The test code comprises a feature file describing a test scenario and a Java class containing the implementations of the step definitions.

The feature definition
---------------------------------------------------------------------------
Given the decrypting component can break the following secret messages
| encrypted message | decrypted message |
| Alucard           | Dracula           |
| Donlon            | London            |

When the message 'Alucard' arrives

Then the decrypted messages repository contains 'Dracula'
---------------------------------------------------------------------------

The step definitions

class StepDefinitions {
   @Given("^the decrypting component can break the following secret messages")
   public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
      ... //store it in the Decrypter Test Double
   }
   @When("^the message '(\\w+)' arrives")
   public void whenTheEncryptedMessageArrives(String encryptedMessage) {
      ... // somehow trigger the use case
   }
   @Then("the decrypted messages repository contains") 
   public void thenTheDecryptedMessagesRepositoryContains(List messages) {
      ... // assert the expected result against the DB Test Double
   }
}

Introducing the TestAgent metaphor

The idea is that instead of putting the test code directly into the class containing the step definitions, we introduce a thin layer of abstraction between the step definitions and their implementations with a so-called TestAgent. Regardless of the name (I guess it could be TestClient, FakeClient, ...), the TestAgent is the explicit manifestation of the concept that the test code is actually a client of your application.

interface TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs);
  void whenTheEncryptedMessageArrives(String encryptedMessage);
  void thenTheDecryptedMessagesRepositoryContains(List messages);
}

The TestAgent actually represents three real clients of the application (one method for each), but that's irrelevant for the example. In more complex cases we might consider one TestAgent per client. So the updated step definition class would look like

class StepDefinitions {
   @Given("^the decrypting component can break the following secret messages")
   public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
      testAgent.givenTheDecryptionCodebookContains(messagePairDTOs);
   }
   @When("^the message '(\\w+)' arrives")
   public void whenTheEncryptedMessageArrives(String encryptedMessage) {
      testAgent.whenTheEncryptedMessageArrives(encryptedMessage);
   }
   @Then("the decrypted messages repository contains") 
   public void thenTheDecryptedMessagesRepositoryContains(List messages) {
      testAgent.thenTheDecryptedMessagesRepositoryContains(messages);
   }
}

Here comes the interesting part. We can create different implementations of the TestAgent for each layer we want to test.

// testing against the app service
class AppLevelTestAgent implements TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs) {
     fakeDecrypter.storeForFutureVerification(messagePairDTOs);
  }
  void whenTheEncryptedMessageArrives(String encryptedMessage) {  
     EncryptedMessage msg = build(encryptedMessage);
     codeBreakerAppService.breakAndStore(msg);
  }
  void thenTheDecryptedMessagesRepositoryContains(List messages) {
       DecryptedMessage decryptedMessage = inMemoryDecryptedMessageRepository.find(messages.get(0));
       assertEquals(decryptedMessage ,...);  
  }
}
// testing against the "black box"
class EndToEndTestAgent implements TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs) {
       fakeDecrypterBehindTestWSEndpoint.storeForFutureVerification(messagePairDTOs);
  }
  void whenTheEncryptedMessageArrives(String encryptedMessage) {
       WSTransferMessage wsMessage = convertToWSDTOMessage(encryptedMessage);
       wsClient.send(wsMessage); 
  }
  void thenTheDecryptedMessagesRepositoryContains(List messages) {
       DecryptedMessage decryptedMessage = realDecryptedMessageRepository.find(messages.get(0));
       assertEquals(decryptedMessage ,...);  
  }
}

Ports and Adapters for the test

The test agents are also responsible for initializing their test doubles, which are the same in role but different in nature, depending on the scope.

|                   | DecryptedMessageRepository   | Decrypter                                                                | Way to trigger the use case   |
| AppLevelTestAgent | in-memory implementation     | fake implementation of the interface                                     | call the app service directly |
| EndToEndTestAgent | real DB-using implementation | a fake service behind a WS endpoint started up by the test configuration | make a web service call       |

That's it, folks. The implementation of TestAgent can be chosen based on a property (like -Dbddtestscope=applevel if you use Maven), or you can configure your build to run the test suite for each. Since the application-level implementation fakes out all the external dependencies, it's very quick, adding little overhead to the build on top of the end-to-end tests.
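The property-based selection can be sketched in a few lines. The TestAgentFactory name is mine, and the two agents from the post are reduced to stubs here:

```java
// Pick the TestAgent implementation from a system property,
// e.g. -Dbddtestscope=applevel or -Dbddtestscope=endtoend.
interface TestAgent { String describeScope(); }

class AppLevelTestAgent implements TestAgent {
    public String describeScope() { return "applevel"; }
}

class EndToEndTestAgent implements TestAgent {
    public String describeScope() { return "endtoend"; }
}

class TestAgentFactory {
    static TestAgent fromSystemProperty() {
        // default to the fast, app-level suite when nothing is specified
        String scope = System.getProperty("bddtestscope", "applevel");
        return "endtoend".equals(scope) ? new EndToEndTestAgent() : new AppLevelTestAgent();
    }
}
```

The step definition class only holds a TestAgent reference, so it never finds out which scope it is exercising.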

Pros and cons

I see the main argument against this approach being that introducing another layer is too much effort. Some even think that using Cucumber already adds unnecessary extra complexity. I disagree. Separating the definition and the implementation of the steps is a good idea on its own, yielding a cleaner code base. The test code is no longer tied to Cucumber; should you choose, for example, a simple JUnit-based approach, it can be reused without any change. The Cucumber part is simply a layer above it.
Some might say that we have to write the test code twice. That's not entirely true either. The feature files, the step definitions and the "smart part" of the test code are common. The implementations are the simpler, more mechanical part of writing the tests.

Possible extensions

After we'd discussed this idea, a colleague pointed out that we might reuse the test code (feature files, step definition files and the TestAgent interface/abstract class) as a base for building tests for the front end. It would require a new implementation of the TestAgent, one which uses e.g. Selenium to drive the tests. I don't see any obstacle to packaging the test code in its own jar file, then letting the project that uses it provide the implementation. I'm eager to see it in practice.

Saturday, 2 November 2013

DDD and Hexagonal architecture

Two years ago I had the pleasure of starting to use DDD at my workplace, and ever since I can hardly imagine developing software without it (long-haunting experiences from previous projects might have something to do with this). There is a lot to love here: Entities, Value Objects, the Repository pattern, Aggregates, Bounded Contexts, the Ubiquitous Language, Anti-corruption Layers, ... But for me the most valuable part is the idea of placing the Domain in the heart of the application and building everything around it, as opposed to the traditional layering theory, where the UI is on top, the Domain is in the middle, and everything lies on the DB at the bottom.

----------------------------------
UI Layer
----------------------------------
Business Logic Layer
----------------------------------
Persistence Layer
----------------------------------

This idea, placing the database and the data model at the center, has proved to be very harmful, often resulting in anemic objects and procedural code. It also goes against one of the most fundamental concepts in OO design, the Dependency Inversion Principle: high-level modules should not depend on low-level modules. What's so special about databases anyway? What if our application, instead of using a DB directly, has to cooperate with a legacy system, storing the data through it, with the communication based on web-service calls? And what if, before it "stores" the data, it also communicates with other components through web services? The point is that if we follow the "traditional" layering model, we have to assign the first kind of WS call to the Persistence Layer, but the second to the BLL. The distinction is simply arbitrary and contrived.
What DDD does, using an onion-like layering structure instead of the vertical, one-dimensional one, is what Alistair Cockburn proposed as the Hexagonal Architecture, or Ports and Adapters Architecture, even before DDD appeared on the scene.



Please follow the link before reading further. It could completely change the way you think about software. Funnily enough, in spite of the idea being around for almost 10 years by now, I have yet to see a nice example of it on the net. So I'd like to fill that gap now.

Hexagonal (Ports and Adapters) Architecture example

Instead of the usual and boring Pet Clinic or Ordering application, I chose an unlikely, but at least more interesting theme. Let's build an application that receives captured secret messages from the enemy, asks another component to break them, then stores the decrypted messages. Here are the classes:
//infrastructure layer
class MessageListener {
    void handleMessage(String jsonMessage) {
        EncryptedMessage encryptedMessage = getEncryptedMessageBuilder().build(jsonMessage);
        getCodeBreakerAppService().breakAndStore(encryptedMessage);
    }
}
class WSBasedDecrypter implements Decrypter {
    // calls a WS to do the work
}
class MongoDecryptedMessageRepository implements DecryptedMessageRepository {
    // stores stuff in Mongo
}
//app layer
class CodeBreakerAppService {
    void breakAndStore(EncryptedMessage encryptedMessage) {
        authenticationCheck();
        startTransaction();
        getCodeBreakerAndArchiver().breakAndArchive(encryptedMessage);
        endTransaction();
    }
}
//domain layer
class CodeBreakerAndArchiver {
    private Decrypter decrypter;
    private DecryptedMessageRepository decryptedMessageRepository;
    void breakAndArchive(EncryptedMessage encryptedMessage) {
        // "break" is a reserved word in Java, so the port method is named decrypt
        DecryptedMessage decryptedMessage = decrypter.decrypt(encryptedMessage);
        decryptedMessageRepository.archive(decryptedMessage);
    }
}
interface Decrypter {
    DecryptedMessage decrypt(EncryptedMessage encryptedMessage);
}
interface DecryptedMessageRepository {
    void archive(DecryptedMessage decryptedMessage);
}

Imagine the app is listening to a message broker, like ActiveMQ. We have a MessageListener instance, which is configured to listen to a JMS queue. For the sake of simplicity, imagine that the messages coming in are simple JSON strings. So, assuming you've read the article, you can surely identify the ports and adapters of this app. As a reminder, a port is where our application interacts with the external world, sitting right on the boundaries of the application. We have 3 in our app.

1. The breakAndStore public method on CodeBreakerAppService. This is a driving (primary) port, called (indirectly) by some external entity; it is the entry point of our app. The adapter that transforms the message into something the port can understand is the MessageListener. I said indirectly, because the message has to go through some integration layer (the JMS listener mechanism here), then the adapter, which eventually passes it to the port.

2. The Decrypter interface. This is a driven (secondary) port, called by the domain and triggering some effect on the external world. It is implemented by an adapter, in this case the WSBasedDecrypter.

3. The DecryptedMessageRepository interface, a similar driven port, implemented by the MongoDecryptedMessageRepository as its adapter.

Now let's imagine another team in our company wants to use our app to spy on their own enemies, but they abhor NoSQL (shame on them) and want to store the decrypted messages in Oracle. No problem at all. We only have to create a new implementation of the DecryptedMessageRepository interface, say a JDBCDecryptedMessageRepository, and we can plug it into the Domain at runtime if needed. Or if you don't want to store the messages but spit them out on-the-fly to the screen, you can create another implementation sending the messages to the UI (although Repository may not be the most appropriate name for it anymore). The point is, there is no up (UI) and down (DB) here. Just an outer layer of the onion wrapping around the core. The adapters are responsible for the details.
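To see how little the domain cares about which adapter is plugged in, here is a minimal, self-contained sketch. The InMemoryDecryptedMessageRepository and the stripped-down message and archiver classes are illustrative stand-ins for the Mongo/JDBC adapters and the fuller domain classes above:

```java
import java.util.ArrayList;
import java.util.List;

class DecryptedMessage {
    final String text;
    DecryptedMessage(String text) { this.text = text; }
}

// The driven port, owned by the domain
interface DecryptedMessageRepository {
    void archive(DecryptedMessage message);
}

// One adapter among many: this one keeps messages in memory
// (a Mongo or JDBC adapter would implement the same interface)
class InMemoryDecryptedMessageRepository implements DecryptedMessageRepository {
    final List<DecryptedMessage> archived = new ArrayList<>();
    public void archive(DecryptedMessage message) { archived.add(message); }
}

// The domain only sees the port, never the concrete adapter
class Archiver {
    private final DecryptedMessageRepository repository;
    Archiver(DecryptedMessageRepository repository) { this.repository = repository; }
    void archive(DecryptedMessage message) { repository.archive(message); }
}
```

Swapping Mongo for Oracle means constructing the Archiver with a different repository instance; the Archiver itself never changes.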
Then the Product Owner says we need to open the app up not only for JMS, but for REST as well. So let's configure the MessageListener as a REST endpoint too, or create a new class for that. Our Domain (and application layer) remains intact. The Domain is independent of all these details; not a single line of it needs to change.

This architecture style is so simple and elegant, I can't understand why Hexagonal Architecture hasn't become a household name by now. And it hasn't. Most often when I mention it to other developers I meet blank faces. Sometimes it rings a bell, but I've yet to meet anyone saying, "yeah, it's cool and we use it all the time".

Another interesting thing is that, I think, if you follow DIP, you can't help but end up with this. It just grows out of a very simple principle. That's it for now.

Where does the Application Layer end and the Domain start?


After finishing the previous post it occurred to me that there are a couple of other things I'd planned to say about the topic. For one, I didn't give any example of what the implementation of an app service might look like. So here it goes...
You have a good old Ordering application. The Product Owner figures out he needs a new function, namely displaying the user information (name, address, ...) and her last 10 orders on one page. That's a Use Case. The client could make two calls, getting the user info and then the orders, but you decide to merge them into one. The app service will then get the User from the User Subdomain and the Orders from the Order Subdomain, and build a transfer object from them. The app service's responsibility is the delegation and the assembling of the transfer object (which belongs to the app layer, not the domain!).
class UnlikelyOrderHistoryViewService {
  public UnlikelyOrderHistoryView getUnlikelyOrderHistoryView(CustomerId customerId) {
        authenticationCheck();
        Customer customer = getCustomerRepository().find(customerId);
        List<Order> lastXOrders = getOrderRepository().findLast(customerId, 10);
        return getUnlikelyOrderHistoryViewAssembler().assemble(customer, lastXOrders);
  }
}

Of course the distinction between the domain and the application layer still needs consideration in concrete situations. For example, you start with a small domain. You can create, update and search for users. The domain might consist of only a User entity and a UserRepository. Your app service only delegates to the repository and does the boilerplate stuff. Then the Product Owner starts coming up with more and more new types of queries. Search by name, age, profession. All users with the first name Amanda, under 26 and not an accountant, or every Martha who has a dog. At first you put extra methods and new parameters into the app service(s), but as the query criteria become more and more complex, the logic starts to form a new layer with its own vocabulary (that's the breaking point) and earns the right to be called domain. By its own weight it sinks down to the Domain, leaving the app service lean again. Evolution. The point is, it's always a matter of balance, and the demarcation lines are susceptible to shifting over time. And to say something practical at the end, I've found a simple rule of thumb that often helps me decide whether the logic in the app layer has grown too fat.
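One common shape this "new vocabulary" takes is a Specification-style object. The following sketch is my own illustration of the idea, not the original code; all class and method names (CustomerSpecification, Specs, and so on) are hypothetical:

```java
import java.util.List;
import java.util.stream.Collectors;

class Customer {
    final String firstName;
    final int age;
    Customer(String firstName, int age) { this.firstName = firstName; this.age = age; }
}

// The query criteria, once grown complex enough, become a first-class
// domain concept with their own composable vocabulary
interface CustomerSpecification {
    boolean isSatisfiedBy(Customer customer);

    default CustomerSpecification and(CustomerSpecification other) {
        return c -> this.isSatisfiedBy(c) && other.isSatisfiedBy(c);
    }
}

class Specs {
    static CustomerSpecification firstNameIs(String name) {
        return c -> c.firstName.equals(name);
    }
    static CustomerSpecification youngerThan(int age) {
        return c -> c.age < age;
    }
    // A repository would typically accept the specification; filtering
    // an in-memory list here keeps the sketch self-contained
    static List<Customer> select(List<Customer> all, CustomerSpecification spec) {
        return all.stream().filter(spec::isSatisfiedBy).collect(Collectors.toList());
    }
}
```

"All users named Amanda under 26" then reads as `Specs.firstNameIs("Amanda").and(Specs.youngerThan(26))`, while the app service merely passes the specification along.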

Thou shalt not suffer a decision in the app layer!

Nor in the infrastructure, for that matter. If you see a control structure there, a foreach or especially an if { ... } else { ... }, treat it with suspicion! Always ask yourself why this Yes-No decision needs to be here instead of in the Domain. Of course they have the right to be there sometimes, but rarely.
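A tiny before-and-after sketch of what pushing such a decision into the domain can look like. The Order, the free-shipping rule, and the 5000-cent threshold are all hypothetical illustrations of mine, not taken from the post:

```java
class Order {
    private final int totalInCents;
    Order(int totalInCents) { this.totalInCents = totalInCents; }

    // The Yes-No decision lives in the domain, named in domain vocabulary
    boolean qualifiesForFreeShipping() {
        return totalInCents >= 5000;
    }
}

class ShippingAppService {
    // Before (suspicious): if (order.getTotalInCents() >= 5000) { ... } else { ... }
    // After: no control structure here; the app service merely delegates
    boolean isShippingFree(Order order) {
        return order.qualifiesForFreeShipping();
    }
}
```

The rule itself now has a name, lives next to the rest of the Order's behaviour, and can be tested without the app layer.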

Friday, 1 November 2013

Application service(s) as the implementor of use-cases

For a long time the role of application layer services puzzled me. Usually they say app services must be thin, doing orchestration, transaction handling, logging, etc. But that is still a rather abstract description, pretty vague, and I haven't found it very constructive guidance on a day-to-day basis. It doesn't say anything about how many app services we need, for example. Just one big facade, or many smaller ones? Then last week I stumbled across a video of Robert C. Martin (where he tells some very good stuff if you can put up with his sometimes infantile style of performance long enough) and got enlightened. Application layer services are the implementors of use-cases. Using this as the guiding principle, organising the app layer becomes more straightforward. A simple Use Case can be implemented by a single public method on an app service. I tend to group simple and related Use Cases into one app service, with a public method for each. And I create a new app service class for each complex Use Case (one requiring multiple interactions with the server, so multiple public methods).
So let's see it in an example with the evergreen Ordering application:

//simple use cases for managing Customers
interface CustomerAppService {
   void registerCustomer(Customer customer);
   Customer findCustomer(Query query);
}
//simple use cases for retrieving information about Orders
interface OrderHistoryAppService {
   OrderHistory getLastXOrders(int num);
   Integer getNumberOfOrdersOf(Customer customer);
}
//one complex use case managing the multi-step order process
interface OrderCompletionAppService {
   Order initiateOrder(Customer customer, Item item);
   void processPayment(Order order);
   void confirmAndCompleteOrder(Order order);
}

A complementary organisational principle could be creating app services per subdomain (if you have any; if you don't, it's always worth checking whether you could). And of course you may need app services overarching multiple subdomains.

So, to say something very wise-sounding: application services capture what the application DOES, as opposed to the domain, which defines what it IS.