Saturday 7 December 2013

Domain Services - representing external dependencies in the Domain

For most of us who start learning DDD, Domain Services seem to be a strange beast at first. The general definition says that logic that "falls between" entities should find its place in a Domain Service. Simple as it sounds, in concrete situations it can be difficult to decide where to put a piece of logic. In the entity, or in a new service? In this post I'd like to show a different, but very frequent use of domain services: the manifestation of external dependencies in the Domain. Let's look at an example to make this less abstract. Imagine we are developing a military application comprising multiple, distributed components. Our task is to develop the component that receives an encrypted message from the enemy, decodes it, then sends the decrypted message to the headquarters. Physically there are 3 components in the system: one that processes the decrypted message (headquarters), one that actually does the decrypting, and our component in between. They communicate via web services. Our domain could look like

interface EncryptedMessage  { ... }
interface DecryptedMessage  { ... }
// domain service representing the HeadQuarter component in the system
interface HeadQuarter {
   void send(DecryptedMessage decryptedMessage);
}
// domain service representing the CodeBreaker component in the system
interface CodeBreaker {
  DecryptedMessage breakIt(EncryptedMessage encryptedMessage);   
}
//the heart of our simple domain
class EnemyMessageCatcher {
  private HeadQuarter headQuarter;
  private CodeBreaker codeBreaker;
  void captureDecryptAndForward(EncryptedMessage encryptedMessage) {
    DecryptedMessage decryptedMessage = codeBreaker.breakIt(encryptedMessage);
    headQuarter.send(decryptedMessage);
  }
}
//infrastructure layer
class WSBasedHeadQuarter implements HeadQuarter { /* calls some web service */ }
class WSBasedCodeBreaker implements CodeBreaker { /* calls some web service */ }


Both the HeadQuarter and the CodeBreaker are domain services. Although our domain knows nothing about how these functionalities are implemented (whether they live in other physical components or just in simple objects), it still knows about the concepts: that there is a HeadQuarter that needs to be notified, and there is a CodeBreaker that can decrypt the enemy's messages. That's why the interface (= the concept) is in the Domain and the implementation (= the details) is in the infrastructure. In Hexagonal Architecture terminology, the domain service (interface) is the port, and the implementation is the adapter.
The DDD-savvy reader may notice that the Repository pattern is actually an ordinary domain service. It represents the concept of storing and retrieving objects in the domain, and hides away how exactly it's done. I suppose the only reason for it being a separate pattern is simply that most applications have to deal with persistence.
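To see the parallel, this is how a repository port and its adapter would look, exactly like the services above (a minimal sketch with made-up names):

//domain service (port) in the Domain
interface CapturedMessageRepository {
    void archive(DecryptedMessage decryptedMessage);
}
//infrastructure layer (adapter)
class MongoCapturedMessageRepository implements CapturedMessageRepository {
    public void archive(DecryptedMessage decryptedMessage) {
        //map the domain object to a document and insert it into MongoDB
    }
}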

Tuesday 26 November 2013

Persisting the Domain Objects - 2

Before continuing from the previous post, here's a quick reminder of our example. We have a Domain Object, and for persistence we've decided to create a DTO and a DO<->DTO mapper class in the ACL, to gain some flexibility and make our Domain more independent from the underlying data structure and persistence solution. Persistence Independence is a good thing anyway.

//the domain object
public class Bonus {
    private DateTime collectedAt;
    // type can be "EXTRA_CHIPS" or "JAZZ_NIGHT"
    private final String type;
    private final long recurrencyIntervalInMilliseconds;    
    public Bonus(String type, long recurrencyIntervalInMilliseconds) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = recurrencyIntervalInMilliseconds;
    }
    public boolean isCollectable() {
        return DateTime.now()
          .isAfter(collectedAt.plus(recurrencyIntervalInMilliseconds)); //Joda-Time: plus(long) adds milliseconds
    }
    public void collect() {
       if (!isCollectable()) { throw new BonusPrematureCollectionAttemptException(); }
       collectedAt = DateTime.now(); //record the collection time for the next recurrence check
       DomainEventDispatcher.dispatch(new BonusCollectedEvent(this));
    }
}
//the persistent object (DTO) in the ACL (infrastructure layer)
@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus implements Serializable {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
//the mapper between the DO and DTO in the ACL
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus bonus) { ... }  
}

Earlier we saw that in some specific cases, like rolling releases, this separation between our DOs and persistence DTOs comes in very handy. Having said that, let's see how the mapping really works.

The trick - encapsulation gets in the way?

The mapping between PersistentBonus and Bonus takes place in the anti-corruption layer. The trick is: since Bonus, as a well-behaved rich Domain Object, doesn't expose its internal structure, how can we do the mapping at all? No other class knows about its private fields, let alone accesses them. In what follows I explore some possible ways, from trivial and not-so-ideal solutions to more sophisticated ones.

Solution 1 - Add Getters-Setters to the DO and get over with it

Plainly exposing the internal structure of an object is a major no-no in DDD and OO in general. It violates loose coupling, encapsulation, extensibility, bla-bla. We've taken pains to hide the internals of Bonus; we shouldn't nullify the effort by making the internals accessible. Forget it if possible.

Solution 2 - Separating interface and implementation of the Domain Object

Using an interface is usually a good idea regardless of our current problem. Let's do that and add an additional Factory class.

public interface Bonus {
    boolean isCollectable();
    void collect(); 
}
public class BonusImpl implements Bonus {
   //all the stuff from before
   // getters-setters
}
//the only class in the domain directly seeing BonusImpl
public class BonusFactory {
    Bonus createRecurrentBonus(String type, long recurrencyIntervalInMilliseconds) { ... }
}
// the mapper class in the anti-corruption layer
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus bonus) { ... }
}

Now we can arrange our code so that the rest of the Domain (everything but the BonusFactory) accesses the object strictly through the interface, and BonusImpl is only used by the BonusFactory and the BonusPersistenceAssembler. Although BonusImpl has getters and setters, they are still shielded quite well from the rest of the Domain. I have two minor problems with this solution. One is the necessity of explicitly casting Bonus to BonusImpl in BonusPersistenceAssembler.toPersistence; the other is that, at the end of the day, refraining from the direct use of BonusImpl in the Domain is entirely up to the discipline of the developers. In some cases casting Bonus to BonusImpl might seem the easy way to achieve something, and a lazy fellow may fail to resist the temptation. Let's see what we can do about it.
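To make the first problem visible, here is a sketch of what toPersistence could look like in this solution (assuming BonusImpl exposes getters for its fields):

public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) {
        BonusImpl impl = (BonusImpl) bonus; //the unpleasant explicit cast
        PersistentBonus persistentBonus = new PersistentBonus();
        persistentBonus.setType(impl.getType());
        persistentBonus.setRecurrencyIntervalInMilliseconds(impl.getRecurrencyIntervalInMilliseconds());
        persistentBonus.setCollectedAt(impl.getCollectedAt());
        return persistentBonus;
    }
}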

Solution 3 - "Normal" interface + View interface + Factory + package protected implementation class

The first idea is to put BonusImpl and the BonusFactory together in a new package, and change the visibility of BonusImpl to package protected. Now it's hidden even from the rest of the Domain, but alas, from the BonusPersistenceAssembler too. Then comes the next idea. Create a new interface, BonusView, which captures only the view of the "data" in Bonus, and pass a BonusView to the BonusPersistenceAssembler. Only this class should use the BonusView. In code:

public interface Bonus {
    boolean isCollectable();
    void collect(); 
}
public interface BonusView {
    long getRecurrencyIntervalInMilliseconds();
    String getType(); 
    DateTime getCollectedAt(); 
}
//package protected 
class BonusImpl implements Bonus, BonusView {
   //all the stuff from before
   // getters-setters
}
//the only class in the domain directly seeing BonusImpl.They are in the same package
public class BonusFactory {
    Bonus createRecurrentBonus(String type, long recurrencyIntervalInMilliseconds) { ... }
}
// the mapper class in the anti-corruption layer
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(BonusView bonusView) { ... }
    //uses the BonusFactory
    Bonus toDomain(PersistentBonus bonus) { ... }
}

Better. We still have to cast Bonus to BonusView before we pass the reference to the BonusPersistenceAssembler, but the ugly getters-setters are nicely confined to a small nook of the code.

Solution 3.1 What about the invariants? - static factory method

However, there is still one possible problem to address. If the Bonus has some invariants (not in our simple example, but we are discussing the general idea), having an individual setter for each field might be undesirable, since this approach leaves room for putting the object into a state that violates its invariants. In this case the BonusFactory can use a static factory method, passing all the fields together (or maybe only the ones that have to be set together to preserve an invariant).

class BonusImpl {
    public static Bonus reinstantiate(String type, long recurrencyIntervalInMilliseconds, DateTime collectedAt) { ... }
    // other stuff
}

Finally we've gotten rid of the setters. The method is called reinstantiate, suggesting to the developer that it exists to recreate the DO from something. Conceptually that is very different from simply providing the means to set the fields individually. The drawback is that the factory method gets bloated if the class has more than a couple of fields.
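A possible body for that factory method, assuming the fields and constructor from before:

class BonusImpl implements Bonus, BonusView {
    public static Bonus reinstantiate(String type, long recurrencyIntervalInMilliseconds, DateTime collectedAt) {
        BonusImpl bonus = new BonusImpl(type, recurrencyIntervalInMilliseconds);
        bonus.collectedAt = collectedAt; //restore the persisted state without a public setter
        return bonus;
    }
    // other stuff
}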

Moral of the story: Setters-getters are troublemakers

It looks like however we tweak the problem in all kinds of smart ways, the result is always a compromise, and we wouldn't need to do anything at all if not for those setters-getters. Let's see how far we've come to mitigate the problem: we started with a simple class and ended up with two additional interfaces, a factory class, a static factory method and package protection. I'm starting to doubt whether it's worth the effort. Let's see a different approach, inspired by the static factory method.

Solution 4 - Memento

Instead of passing all the fields to the reinstantiate method, we can create a new class to wrap them up. Let's call it BonusMemento, because it's like a footprint of the Bonus object. It doesn't have any logic, only the data, like the PersistentBonus, just without the JSON annotations, the Serializable interface and the other technology-related stuff. Unlike the PersistentBonus, the BonusMemento is part of the Domain, even if it's only used to make persistence easier. You can think of it as the skeleton of the Bonus. Just the bones, no brain.

//the original Bonus class, not the interface
public class Bonus {
    public static Bonus reinstantiate(BonusMemento bonusMemento) { ... }
    // other stuff
}
public class BonusMemento {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;

    //getters-setters for each field
}

No more bloated factory method. No setters either, only getters. But wait! Why can't we use the BonusMemento to replace the getters, too? Instead of exposing its fields one by one through getters, the domain object can be responsible for creating its own Memento.

public class Bonus {
    public static Bonus reinstantiate(BonusMemento bonusMemento) { ... }
    public BonusMemento toMemento() { ... } 
    //all the stuff from before
}
// the mapper class in the anti-corruption layer
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(BonusMemento bonusMemento) { ... }
    Bonus toDomain(PersistentBonus bonus) { 
        BonusMemento bonusMemento = convertFrom(bonus);
        return Bonus.reinstantiate(bonusMemento); 
    }
}

Not bad. We are back to the original Bonus class + a static factory method + a Memento class. Encapsulation preserved, no bothersome compromises. Well, maybe one. The BonusMemento and the PersistentBonus are almost the same, which is a bit of code duplication. We cannot use PersistentBonus directly in the Bonus class, because it belongs to the infrastructure layer. But PersistentBonus can extend BonusMemento, inheriting its content and keeping only the infrastructure-specific part of its former self.

public class BonusMemento {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus extends BonusMemento implements Serializable {}

//so what is inside the mapper
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(BonusMemento memento) { 
       PersistentBonus persistentBonus = new PersistentBonus();
       persistentBonus.setType(memento.getType());
       //set values for the other fields
       return persistentBonus;   
    }
    Bonus toDomain(PersistentBonus persistentBonus) { 
        return Bonus.reinstantiate(persistentBonus); 
    }
    }
}

The toPersistence method of BonusPersistenceAssembler simply copies the content of the Memento to the PersistentBonus field by field. The toDomain is even simpler: since PersistentBonus is a subclass of BonusMemento, we can simply pass it into Bonus.reinstantiate. Awesome.
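For completeness, toMemento inside Bonus can be as simple as this (assuming BonusMemento keeps the setters shown above):

public BonusMemento toMemento() {
    BonusMemento memento = new BonusMemento();
    memento.setType(type);
    memento.setRecurrencyIntervalInMilliseconds(recurrencyIntervalInMilliseconds);
    memento.setCollectedAt(collectedAt);
    return memento;
}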

Solution 5 - Visitor

I've been playing with the idea of using the Visitor pattern to retrieve the values of the fields from the Domain Object.

public class Bonus {
    public void buildView(BonusVisitor visitor) { 
        visitor.setType(this.type);
        visitor.setRecurrencyIntervalInMilliseconds(this.recurrencyIntervalInMilliseconds);
        visitor.setCollectedAt(this.collectedAt);
    } 
    //all the stuff from before
}
//in the Domain
public interface BonusVisitor {
   //setters
}
//in the infrastructure
public class PersistenceBonusVisitor implements BonusVisitor {
    private final PersistentBonus persistentBonus;
    public PersistenceBonusVisitor(PersistentBonus persistentBonus) { this.persistentBonus = persistentBonus; }
    // in the setters call the setters of PersistentBonus 
}

The Visitor pattern is suitable for situations like this, when we want an object to keep control over what it shares about its internals. But at the end of the day we already have that with the Memento, and in a simpler form, so I've decided to stick to that.

Solution 6 - Using reflection

To be frank, I have never tried using reflection for this. There are tools available, like Dozer, which promise painless mapping between DOs and DTOs. A colleague of mine told me about his team's experience with Dozer. They could spare themselves the time of writing mapper classes, but they still needed to create the DTOs, the DOs had to have getters-setters, and they still wanted to test that the mapping was correct, so they couldn't spare writing the tests for the mappers. And performance-wise, reflection-based solutions are always heavier. Having said this, I still want to explore Dozer (or something alike) one day.
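For the record, judging by its documentation (I haven't verified this myself), basic Dozer usage would look something like this:

import org.dozer.DozerBeanMapper;

DozerBeanMapper mapper = new DozerBeanMapper();
//copies type, recurrencyIntervalInMilliseconds and collectedAt by matching field names via reflection
PersistentBonus persistentBonus = mapper.map(bonus, PersistentBonus.class);
Bonus restoredBonus = mapper.map(persistentBonus, Bonus.class); //needs a no-arg constructor and setters on Bonus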

Summary

In this post we started from a simple example and explored a couple of solutions for the problem. In the end I would stick with the Memento one. Basically it's derived from the idea of using a static factory method instead of setters. Since we don't like long parameter lists, we introduced a new class, the Memento, and then realized we can use it instead of the getters, too. As an additional bonus (no pun intended), we can even derive the persistence class from it. That's it.

Sunday 17 November 2013

Persisting the Domain Objects - 1

This post is about the not-so-trivial, but omnipresent chore of persistence in a DDD project. Let's plunge right into the middle with an example. We have to develop a submodule in our imaginary New York-based online Casino application to reward players with bonuses. The Product Owner's wish is that players can collect tickets to Jazz concerts in the local Royal Albert Hall once a fortnight (first bonus type) and 50 extra chips once a day (second bonus type). In the architecture the client and the server have an agreement that the client can check the "collectability" of a bonus, and can collect it. After a bit of pondering we come up with the following class to represent the bonus.

public class Bonus {
    private DateTime collectedAt;
    // type can be "EXTRA_CHIPS" or "JAZZ_NIGHT"
    private final String type;
    private final long recurrencyIntervalInMilliseconds;    
    public Bonus(String type, long recurrencyIntervalInMilliseconds) {
        this.type = type;
        this.recurrencyIntervalInMilliseconds = recurrencyIntervalInMilliseconds;
    }
    public boolean isCollectable() {
        return DateTime.now()
          .isAfter(collectedAt.plus(recurrencyIntervalInMilliseconds)); //Joda-Time: plus(long) adds milliseconds
    }
    public void collect() {
       if (!isCollectable()) { throw new BonusPrematureCollectionAttemptException(); }
       collectedAt = DateTime.now(); //record the collection time for the next recurrence check
       DomainEventDispatcher.dispatch(new BonusCollectedEvent(this));
    }
}

Although the code could be improved (for the sake of brevity I've completely skipped how a Bonus is tied to a Player, as it's irrelevant for us now), we are quite satisfied with it. It's very object-oriented. It completely hides what the conditions of collectability are. The PO can come up with totally different kinds of collectable bonuses (ones that depend on some previous achievement of the player, or one which applies only to Irish players, whatever), and the interface of the class won't change. So, next step. How to persist it?

Do we need a PersistentBonus class?

Let's assume we use Mongo, which stores its data in a JSON-like format. In this case the object to be persisted has to have getters-setters for each field, and may have to implement the Serializable interface (such requirements are typical for serialization libraries like Jackson). It might even need some library-specific annotations. Definitely the kinds of things we don't want to press into our Domain Object. It would mean letting Infrastructure details creep into the Domain. The solution is to create Data Transfer Objects for our Domain Objects to capture all the needs of the chosen persistence solution, and transform the DO to DTO when persisting it, and the other way around when it's reinstantiated from the DB. The DTO and the mapper in the anti-corruption layer would look something like

@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus implements Serializable {
    private String type;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus bonus) { ... }  
}

If we use an ORM solution, like Hibernate, we might not even need the mandatory getters-setters, or to implement Serializable. A couple of annotations or some XML configuration can do the trick. It's tempting not to bother with DTOs at all. But separating the persistence code from the domain has one big advantage.
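Just to show the temptation, mapping the Domain Object directly with JPA-style annotations would look roughly like this (a hypothetical sketch, and note the compromises creeping in):

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity //infrastructure detail sitting right inside the Domain Object
public class Bonus {
    @Id
    private Long id; //the ORM forces a surrogate id on us
    private String type; //can't stay final: JPA needs to populate it
    private long recurrencyIntervalInMilliseconds;
    protected Bonus() {} //JPA requires a no-arg constructor, another compromise
    //the business methods from before
}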

Can you do a rolling release if there are data structure changes in DB?

It makes data structure changes in the DB possible without downtime. Imagine we've chosen to go without DTOs, our application went live a while ago in New York, and we already have thousands of Bonus entries in our DB. Then, as the company grows and gains territory, it decides to launch in other states, and the PO decides that the Extra Chips and the Jazz Night could mean different things in different states. From now on, the type and the state together should characterize the Bonus. After a bit of thinking we figure out that the slightly changed Domain would be served better by a slightly changed Bonus class

public class Bonus {
    //instead of String type;
    private final BonusCategory category;
    //old stuff
}
public class BonusCategory {
    private final String type;
    private final State state;
    // constructor and getters
}

Very neat, but it's not compatible anymore with the entries in the DB. We can execute a DB patch to convert the data to the new format, changing type to bonusCategory and using NEW_YORK as the state for all, then deploy the new version of the Casino, but we can't do it without downtime. If the patch is executed first, the new data breaks the old Casino before we deploy the new one (unknown field bonusCategory, and the expected type is missing); if the app is deployed first, it breaks immediately because of the old format of the data (unknown field type, and the expected bonusCategory is missing). Management wants a rolling release. No downtime. If we have a separate class for persistence and a mapper in the ACL, it's possible, even easy. We do some minor changes in the DTO and the mapper to accommodate both data formats.

@JsonIgnoreProperties(ignoreUnknown=true)
public class PersistentBonus implements Serializable {
    private String type;
    //new field
    private String state;
    private long recurrencyIntervalInMilliseconds;
    private DateTime collectedAt;
    //getters-setters for each field
}
public class BonusPersistenceAssembler {
    PersistentBonus toPersistence(Bonus bonus) { ... }
    Bonus toDomain(PersistentBonus persistentBonus) { 
       BonusCategory bonusCategory = getBonusCategory(persistentBonus);
       // inject bonusCategory into the Bonus and the other stuff        
    } 
    private static BonusCategory getBonusCategory(PersistentBonus persistentBonus) {
       if (persistentBonus.getState() == null) {
          //old data, belongs to New York
          return new BonusCategory("NEW_YORK",persistentBonus.getType());  
       } else {
          return new BonusCategory(persistentBonus.getType(), persistentBonus.getState()); 
       }  
   } 
}

Done. Now our app can handle both the new and the old format of the data. We can deploy it, then execute the DB patch. In the next release we can remove the persistentBonus.getState() == null check from the code entirely. This is called intermediate code.

Pros and cons

Hopefully I've managed to make a point about why separating persistence DTOs and DOs can be very useful, even if it requires a bit more code. I know using the Domain POJOs directly in persistence (like Hibernate offers) and in messages between the server and the client is very tempting, and yields a clean and lean codebase with a relatively small number of classes. Writing an ACL with all the DTOs and mappers is usually tedious monkey-work. As with almost everything in software development, the decision is a trade-off. Does your application need this level of independence of the Domain from the data so much that you are willing to go the extra mile?

What's next

If the answer is yes, then you still need to spend some time contemplating how to implement the idea. Unfortunately the DO<->DTO mapping is not as trivial as it seems at first. In the next post I'll explore what difficulties we face with this approach and how we can overcome them. Stay tuned.

Sunday 10 November 2013

BDD - choosing the scope of testing

When at the beginning of our project we decided to use BDD, with Cucumber as the tool, we had a little debate about the scope of the testing. Questions like these arose:

  • Using DDD terminology, which layer would you choose to test against, the domain, the application, or the infrastructure?
  • If you regard your tests as clients (impersonating real clients) of your application, then what would be the boundaries of your SUT (System Under Test)?
  • Where and what are your test doubles?
  • To which ports of the application does the test code bind itself to drive the test cases and verify its assertions?

These 4 questions basically ask the same thing in different words.

Where are the boundaries?

Testing against the Application layer

The BDD approach of testing directly against the Domain, I think, doesn't make much sense most of the time. The Use Cases are implemented in the application layer; without its orchestration the Domain is pretty useless. Testing against the application layer, on the other hand, is a very attractive approach. All the logic to fulfill the requirements of the application is there, and you don't get bogged down in infrastructure details. You can simply stub out the external dependencies like database/messaging/web-service configuration and implementation. Borrowing from the Ports and Adapters terminology, you hang test-stub adapters on your ports and get away with it quickly and elegantly. And the tests, unburdened by IO or network latencies, run very fast. Thus if at some point in the application's life you decide to change the type of the DB, or to use JMS instead of REST, you don't have to change a single line of the test code. But...

End-to-end testing

But those infrastructure details must be there in production. Without real end-to-end tests, where for example your test code actually calls the web-service endpoint of your component and verifies its expectations by querying the database, how can you be sure that the DB really works the way you intended? What if your Camel configuration has a typo, rendering your whole messaging layer useless? You won't find out until manual testing. With black-box-like end-to-end tests, after a successful "mvn clean install" you can sleep in peace, knowing that whatever you've done, it hasn't broken any existing functionality. The price you pay is that your test suite runs much slower and the test code is tied to the Adapters' implementation.

Choosing between the two approaches is a difficult decision, and I've been thinking for a long time about how we could have the best of both worlds. Maybe we can postpone the decision.

Best of both worlds - demonstration by an example

Let's see how a very simple Cucumber test would look against the very simple application from the previous post. In a nutshell, our app receives encrypted messages through a SOAP-based web service, asks another component via REST to break the encryption, then stores the result in an Oracle DB. The words in italics are implementation details and shouldn't appear in the test or domain vocabulary. The test code comprises a feature file describing a test scenario and a Java class containing the implementations of the step definitions.

The feature definition
---------------------------------------------------------------------------
Given the decrypting component can break the following secret messages
| encrypted message | decrypted message |
| Alucard           | Dracula           |
| Donlon            | London            |

When the message 'Alucard' arrives

Then the decrypted messages repository contains
| decrypted message |
| Dracula           |
---------------------------------------------------------------------------

The step definitions

class StepDefinitions {
   @Given("^the decrypting component can break the following secret messages")
   public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
      ... //store it in the Decrypter Test Double
   }
   @When("^the message '(\\w+)' arrives")
   public void whenTheEncryptedMessageArrives(String encryptedMessage) {
      ... // somehow trigger the use case
   }
   @Then("the decrypted messages repository contains") 
   public void thenTheDecryptedMessagesRepositoryContains(List messages) {
      ... // assert the expected result against the DB Test Double
   }
}

Introducing the TestAgent metaphor

The idea is that instead of putting the test code directly into the class containing the step definitions, we introduce a thin layer of abstraction between the step definitions and their implementations, a so-called TestAgent. Regardless of the name (I guess it could be TestClient, FakeClient, ...), the TestAgent is the explicit manifestation of the concept that the test code is actually a client of your application.

interface TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs);
  void whenTheEncryptedMessageArrives(String encryptedMessage);
  void thenTheDecryptedMessagesRepositoryContains(List messages);
}

The TestAgent actually represents 3 real clients of the application (one method for each), but that's irrelevant for the example. In more complex cases we might consider one per client. So the updated step definition class would look like

class StepDefinitions {
   @Given("^the decrypting component can break the following secret messages")
   public void givenTheDecryptionCodebookContains(List messagePairDTOs) {
      testAgent.givenTheDecryptionCodebookContains(messagePairDTOs);
   }
   @When("^the message '(\\w+)' arrives")
   public void whenTheEncryptedMessageArrives(String encryptedMessage) {
      testAgent.whenTheEncryptedMessageArrives(encryptedMessage);
   }
   @Then("the decrypted messages repository contains") 
   public void thenTheDecryptedMessagesRepositoryContains(List messages) {
      testAgent.thenTheDecryptedMessagesRepositoryContains(messages);
   }
}

Here comes the interesting part. We can create different implementations of the TestAgent for each layer we want to test.

// testing against the app service
class AppLevelTestAgent implements TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs) {
     fakeDecrypter.storeForFutureVerification(messagePairDTOs);
  }
  void whenTheEncryptedMessageArrives(String encryptedMessage) {  
     EncryptedMessage msg = build(encryptedMessage);
     codeBreakerAppService.breakAndStore(msg);
  }
  void thenTheDecryptedMessagesRepositoryContains(List messages) {
       DecryptedMessage decryptedMessage = inMemoryDecryptedMessageRepository.find(messages.get(0));
        assertEquals(decryptedMessage, ...);  
  }
}
// testing against the "black box"
class EndToEndTestAgent implements TestAgent {
  void givenTheDecryptionCodebookContains(List messagePairDTOs) {
       fakeDecrypterBehindTestWSEndpoint.storeForFutureVerification(messagePairDTOs);
  }
  void whenTheEncryptedMessageArrives(String encryptedMessage) {
       WSTransferMessage wsMessage = convertToWSDTOMessage(encryptedMessage);
       wsClient.send(wsMessage); 
  }
  void thenTheDecryptedMessagesRepositoryContains(List messages) {
       DecryptedMessage decryptedMessage = realDecryptedMessageRepository.find(messages.get(0));
        assertEquals(decryptedMessage, ...);  
  }
}

Ports and Adapters for the test

The test agents should also be responsible for initializing their test doubles, which are the same in role but different in nature depending on the scope.

|                   | DecryptedMessageRepository   | Decrypter                                                              | Way to trigger the use case   |
| AppLevelTestAgent | in-memory implementation     | fake implementation of the interface                                   | call the app service directly |
| EndToEndTestAgent | real DB-using implementation | a fake service behind a WS endpoint started by the test configuration  | make a web service call       |

That's it folks. The implementation of the TestAgent can be chosen based on a property (like -Dbddtestscope=applevel if you use Maven), or you can configure your build to run the test suite for each. Since the application-level implementation fakes out all the external dependencies, it's very quick, adding little overhead to the build on top of the end-to-end tests.
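A sketch of how that selection could work (TestAgentFactory is my own made-up helper, not part of any framework):

class TestAgentFactory {
   static TestAgent create() {
      // -Dbddtestscope=applevel (the default here) or -Dbddtestscope=endtoend
      String scope = System.getProperty("bddtestscope", "applevel");
      return "endtoend".equals(scope) ? new EndToEndTestAgent() : new AppLevelTestAgent();
   }
}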

Pros and cons

I see the main argument against the approach being that introducing another layer is too much effort. Some even think that using Cucumber already adds unnecessary extra complexity. I disagree. Separating the definition and the implementation of the steps alone is a good idea, yielding a cleaner code base. The test code is no longer tied to Cucumber; should you choose to use, for example, a simple JUnit-based approach, it can be reused without any change. The Cucumber part is simply a layer above it.
Then some may say that we have to write the test code twice. That's not entirely true either. The feature files, the step definitions and the "smart part" of the test code are common. The implementations are the simpler, more mechanical part of writing the tests.

Possible extensions

After we'd discussed this idea, a colleague pointed out that we might reuse the test code (feature files, step definition files and the TestAgent interface/abstract class) as a base to build up tests for the front end. It would require a new implementation of the TestAgent which uses e.g. Selenium to drive the tests. I don't see any obstacle to packaging the test code in its own jar file, then letting the project that uses it provide the implementation. I'm eager to see it in practice.

Saturday 2 November 2013

DDD and Hexagonal architecture

Two years ago I had the pleasure of starting to use DDD at my workplace, and since then I can hardly imagine developing software without it (long-haunting experiences from previous projects might have something to do with this). There is a lot to love here: Entities, Value Objects, the Repository pattern, Aggregates, Bounded Contexts, Ubiquitous Language, Anti-corruption layers, ... But for me the most valuable part of it is the idea of placing the Domain in the heart of the application and building everything around it, as opposed to the traditional layering theory, where the UI is on the top, the Domain is in the middle, and everything lies on the DB at the bottom.

----------------------------------
UI Layer
----------------------------------
Business Logic Layer
----------------------------------
Persistence Layer
----------------------------------

This idea, placing the database and the data model at the center, has proved to be very harmful, often resulting in anemic objects and procedural code. It also goes against one of the most fundamental concepts in OO design, the Dependency Inversion Principle: high level modules should not depend on low level modules. What's so special about databases anyway? What if our application, instead of using a DB directly, has to cooperate with a legacy system, storing the data through it, with the communication based on web-service calls? And what if before it "stores" the data it communicates with other components through web services too? What I'd like to point out is that if we follow the "traditional" layering model, we have to assign the first kind of WS call to the Persistence Layer, but the second to the BLL. The distinction is simply arbitrary and contrived.
What DDD does, using an onion-like layering structure instead of the vertical, one-dimensional one, is what Alistair Cockburn proposed as the Hexagonal Architecture, or Ports and Adapters Architecture, even before DDD appeared on the scene.



Please follow the link before reading further. It could completely change the way you think about software. Funnily enough, in spite of being around for almost 10 years by now, I have yet to see a nice example on the net. So I'd like to fill this gap now.

Hexagonal (Ports and Adapters) Architecture example

Instead of the usual and boring Pet Clinic or Ordering application, I chose an unlikely, but at least more interesting theme. Let's build an application that receives captured secret messages from the enemy, asks another component to break them, then stores the decrypted messages. Here are the classes:
//infrastructure layer
class MessageListener {
     void handleMessage(String jsonMessage) {
         EncryptedMessage encryptedMessage = getEncryptedMessageBuilder().build(jsonMessage);
         getCodeBreakerAppService().breakAndStore(encryptedMessage); 
     }
}
class WSBasedDecrypter implements Decrypter {
   // call a WS to do the work
}
class MongoDecryptedMessageRepository implements DecryptedMessageRepository {
   // store stuff in Mongo
}
//app layer
class CodeBreakerAppService {
   void breakAndStore(EncryptedMessage encryptedMessage) {
        authenticationCheck();
        startTransaction();
        getCodeBreakerAndArchiver().breakAndArchive(encryptedMessage); 
        endTransaction();
   }
}
//domain layer
class CodeBreakerAndArchiver {
   private Decrypter decrypter;
   private DecryptedMessageRepository decryptedMessageRepository;
   void breakAndArchive(EncryptedMessage encryptedMessage) {
        DecryptedMessage decryptedMessage = decrypter.breakIt(encryptedMessage);
        decryptedMessageRepository.archive(decryptedMessage); 
   }
}
interface Decrypter {
    DecryptedMessage breakIt(EncryptedMessage encryptedMessage); //"break" is a reserved word in Java
}
interface DecryptedMessageRepository {
    void archive(DecryptedMessage decryptedMessage);
}

Imagine the app is listening to a message broker, like ActiveMQ. We have a MessageListener instance, which is configured to listen to a JMS queue. For the sake of simplicity, imagine that the incoming messages are simple JSON strings. So, assuming you've read the article, you can surely identify the ports and adapters of this app. As a reminder, a port is where our application interacts with the external world, sitting right on the boundary of the application. We have 3 in our app.

1. The breakAndStore public method on CodeBreakerAppService. This is a driving (primary) port: the application is driven through it, called (indirectly) by some external entity. The entry point of our app. The adapter that transforms the message into something the port can understand is the MessageListener. I said indirectly, because the message has to go through some integration layer (the JMS listener mechanism here), then the adapter, which eventually passes it to the port.

2. The Decrypter interface. This is a driven (secondary) port, called by the domain and triggering some effect on the external world. It is implemented by an adapter, in this case the WSBasedDecrypter.

3. The DecryptedMessageRepository interface, a similar driven port, implemented by the MongoDecryptedMessageRepository as the adapter.

Now let's imagine another team in our company wants to use our app to spy on their own enemies, but they abhor NoSQL (shame on them) and want to store the decrypted messages in Oracle. No problem at all. We only have to create a new implementation of the DecryptedMessageRepository interface, let's say JDBCDecryptedMessageRepository, and we can plug it into the Domain, even at runtime if needed. Or if you don't want to store the messages but spit them out on-the-fly to the screen, you can create another implementation sending messages to the UI (although Repository may not be the most appropriate name for it anymore). The point is, there is no up (UI) and down (DB) here. Just an outer layer of the onion wrapping around the core. The adapters are responsible for the details.
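A minimal sketch of what that hypothetical JDBC adapter could look like (the table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

class JDBCDecryptedMessageRepository implements DecryptedMessageRepository {
    private final DataSource dataSource;
    JDBCDecryptedMessageRepository(DataSource dataSource) { this.dataSource = dataSource; }
    public void archive(DecryptedMessage decryptedMessage) {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement =
                 connection.prepareStatement("INSERT INTO decrypted_message (content) VALUES (?)")) {
            statement.setString(1, decryptedMessage.toString()); //assuming a meaningful toString
            statement.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException("Archiving the decrypted message failed", e);
        }
    }
}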
Then the Product Owner says we need to open the app not only to JMS, but to REST as well. So let's configure the MessageListener as a REST endpoint too, or create a new class for that. Our Domain (and application layer) is intact. The Domain is independent of all these details; not a single line needs to change.

This architecture style is so simple and elegant, I can't understand why Hexagonal Architecture hasn't become a household name by now. And it hasn't. Most often when I mention it to other developers I meet blank faces. Sometimes it rings a bell, but I've yet to meet anyone saying, "yeah, it's cool and we use it all the time".

Another interesting thing is that, I think, if you follow the DIP, you can't help ending up with this. It just grows out of a very simple principle. That's it for now.

Where does the Application Layer end and the Domain start?


After finishing the previous post it occurred to me that there are a couple of other things I'd planned to say about the topic. For one, I didn't give any example of how the implementation of an app service might look. So here it goes...
You have a good old Ordering application. The Product Owner figures out he needs a new function, namely displaying the user information (name, address, ...) and her last 10 orders on one page. That's a Use Case. The client could do two calls, getting the user info then the orders, but you decide to merge them into one. The app service will get the Customer from the User Subdomain, the Orders from the Order Subdomain and build a transfer object from them. The app service's responsibility is the delegation and the assembling of the transfer object (which belongs to the app layer, not the domain!).
class UnlikelyOrderHistoryViewService {
  public UnlikelyOrderHistoryView getUnlikelyOrderHistoryView(CustomerId customerId) {
        authenticationCheck(); 
        Customer customer = getCustomerRepository().find(customerId);
        List lastXOrders = getOrderRepository().findLast(customerId, 10);
        return getUnlikelyOrderHistoryViewAssembler().assemble(customer, lastXOrders);
  }
}

Of course the distinction between the domain and the application layer still needs consideration in concrete situations. For example, you start with a small domain. You can create, update and search for users. The domain might consist of only a User entity and a UserRepository. Your app service only delegates to the repository and does the boilerplate stuff. Then the Product Owner starts coming up with more and more new types of queries. Search by name, age, profession. All users with first name Amanda, under 26 and not an accountant, or every Martha who has a dog. At first you put extra methods and new parameters into the app service(s), but as the query criteria become more and more complex, the logic starts to form a new layer with its own vocabulary (that's the breaking point) and earns the right to be called domain. By its own weight it sinks down into the Domain, leaving the app service lean again. Evolution. The point is, it's always a matter of balance, and the demarcation lines tend to shift over time. And to say something practical at the end, I've found a simple rule of thumb that often helps me decide whether the logic in the app layer has grown too fat.

Thou shalt not suffer a decision in the app layer!

Nor in the infrastructure, for that matter. If you see a control structure there, a foreach or especially an if { ... } else { ... }, treat it with suspicion! Always ask yourself: why does this yes-no decision need to be here instead of in the Domain? Of course they have the right to be there sometimes, but rarely.
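To illustrate with a hypothetical snippet from the Ordering application (the names are made up):

//suspicious: a yes-no business decision sitting in the app layer
class OrderAppServiceWithDecision {
    public void completeOrder(Order order) {
        if (order.isPaid()) { //domain knowledge leaked out of the Domain
            order.complete();
        } else {
            throw new OrderNotPaidException();
        }
    }
}
//better: the app service only orchestrates, the Domain Object decides for itself
class OrderAppServiceLean {
    public void completeOrder(Order order) {
        order.complete(); //complete() itself throws OrderNotPaidException if the order is unpaid
    }
}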

Friday 1 November 2013

Application service(s) as the implementor of use-cases

For a long time the role of application layer services puzzled me. Usually they say app services must be thin, doing orchestration, transaction handling, logging, etc. But it's still a bit of an abstract description, pretty vague, and I haven't found it very constructive guidance on a day-to-day basis. It doesn't say anything about how many app services we need, for example. Just one big facade, or many smaller ones? Then last week I stumbled across a video of Robert C. Martin (where he tells some very good stuff, if you can put up with his sometimes infantile style of performance long enough) and got enlightened. Application layer services are the implementors of use cases. Using that as the guiding principle, organising the app layer becomes more straightforward. A simple Use Case can be implemented by a single public method on an app service. I tend to group simple and related Use Cases into one app service, with a public method for each. And I create a new app service class for each complex Use Case (requiring multiple interactions with the server, so multiple public methods).
So let's see it in an example with the evergreen Ordering application

//simple use cases for managing Customers
interface CustomerAppService {
   void registerCustomer(Customer customer);
   Customer findCustomer(Query query);
}
//simple use cases for retrieving information about Orders
interface OrderHistoryAppService {
   OrderHistory getLastXOrders(int num);
   Integer getNumberOfOrdersOf(Customer customer);
}
//one complex use case of managing the multi-step order process
interface OrderCompletionAppService {
   Order initiateOrder(Customer customer, Item item);
   void processPayment(Order order);
   void confirmAndCompleteOrder(Order order);
}

Another, complementary organisational principle could be creating app services per subdomain (if you have any; if you don't, it's always worth checking whether you could). And of course you may need app services spanning multiple subdomains.

So to say something very wise-sounding: application services capture what the application DOES, as opposed to the domain, which defines what it IS.

Thursday 31 October 2013

Searching for packaging guidelines - 1

Defining a satisfying package structure always gives me a headache. Sometimes I feel it's "right", but until recently I had no formal principle to rely on to confirm my hunch. A couple of weeks ago I started developing a plugin that can calculate design quality metrics, so I've dug into the topic, and I think I've found some useful stuff.

Let's see the problem in a simple example, which works with very few components to keep it concise, but enough of them to demonstrate the point. So we have the following classes:
interface Compressor {
  CompressedContent compress(UncompressedContent uncompressedContent);
}
interface CompressedContent { ... } 
class UncompressedContent { ... } 
class ContentDownloader {
  void downloadCompressAndSave(Compressor compressor) {
       UncompressedContent uncompressedContent = downloadContent();
       CompressedContent compressedContent = compressor.compress(uncompressedContent);
       save(compressedContent);
  }
}
class ZipCompressor implements Compressor { ... }
class RARCompressor implements Compressor { ... }

Our mini application downloads contents from God knows where, compresses and saves them. Different compressing algorithms can be plugged in easily (Strategy pattern). The ContentDownloader uses other components too, but they are not relevant for our example, so let's just say ContentDownloader from now on represents a bunch of classes. Similarly, there can be several other implementations of Compressor. The question is, how would you package these components?
In the following, lacking both a UML visualisation plugin and the commitment to get one, I will use a simple notation. Dependencies between packages are represented by '->' and packages by ( <class1>, <class2>, ...). So the structure where the package containing classA and classB depends on the package containing classC and classD, and both are in the same superpackage, is represented by

( (classA, classB) -> (classC, classD) )

Solution 0

(ContentDownloader, Compressor, ZipCompressor, RarCompressor)

Everything in one bag. Obviously it's not a good solution. The packaging should, one way or another, represent the structure of the application and tell the developer something about it.

Solution 1

 (ContentDownloader) -> (Compressor, ZipCompressor, RarCompressor)

Seems better. We've confined all the compressor code (interface and implementations) into one package. However, I like having classes belonging to the same abstraction level in a package; having interfaces and their implementations in the same place, I feel, violates this.

Solution 2

( ContentDownloader -> (Compressor <- (ZipCompressor, RarCompressor) ) )

This is something I see quite frequently. For example, put the facade interface under the package org.something.app and the implementation under org.something.app.impl.

Solution 3 

(ContentDownloader) -> ( (Compressor) <- (ZipCompressor, RarCompressor) )

A variation where the interface is under org.something.app.client and the implementation under org.something.app.impl.

The common problem with the last two approaches is that (ContentDownloader) depends on a package containing both the interface it uses and the implementations it has no knowledge of. Why is it a problem? Let's stop here for a short intermezzo and ponder what we actually want from packages. What I want, for one, is that if I need to make a change in functionality, I have to touch as few packages as possible, and in the packages I do have to touch, I want as few classes as possible that are unrelated to the change (this is called the Common Closure Principle). Why? Because unrelated components located close to the place of change are noise.
Or to view it from another perspective, let's run a small hypothetical experiment. Let's assume the packages are units of release (even if they are not, the general idea is worth considering). Adding another implementation would require recompiling the full package ( (Compressor) <- (ZipCompressor, RarCompressor) ) and (ContentDownloader) too. This is bad, since no code change has happened in (ContentDownloader). And it wouldn't be enough to release ( (Compressor) <- (ZipCompressor, RarCompressor) ); we would have to release the bigger package containing everything. Experiment ends.
Again, this is only hypothetical, because in Java packages are not units of release. But I like to find general principles behind things on different scales, and I think structuring your deployable components (projects) or libraries has a lot in common with package structuring. After all, you can always decide that a submodule in your component has grown big enough to earn its place as a separate library (or embedded component). It's always a possibility, so keeping it in mind (until it goes against other considerations deemed more important) while designing the package structure can make further refactoring and maintenance easier.

I hope these unorganised ramblings have made some sense. Now we can try another approach.

Solution 4

 (ContentDownloader) -> (Compressor) <- (ZipCompressor, RarCompressor)

Seems much better. Adding another implementation only requires recompiling (and releasing) (ZipCompressor, RarCompressor) and nothing else. We've reached the point I've been heading toward, but there is actually another interesting variation.

Solution 5

 (ContentDownloader,Compressor) <- (ZipCompressor, RarCompressor)

This structure can be justified by pointing out that the ContentDownloader directly depends on the Compressor, so for the sake of package cohesion it might be a good idea to put them together. This is what Martin Fowler calls Separated Interface. After all, for the developer of the ContentDownloader the implementation of the Compressor is an irrelevant detail he doesn't even want to know about. But a code change in the ContentDownloader would lead to recompiling (ZipCompressor, RarCompressor), so for now I drop this solution.

Drawing conclusions

What happened here is we started with an initial state of (if we don't count solution 0)

(ContentDownloader) -> (Compressor, ZipCompressor, RarCompressor)

and transformed it to

(ContentDownloader) -> (Compressor) <- (ZipCompressor, RarCompressor).

We had a depender and a dependee package. We extracted the visible part of the dependee package into a new package, and now both the depender package and the remainder of the original dependee package depend on it.
Doesn't it resemble something? If we replace the word "package" with "class" and "visible part" with "interface", what we get is an application of the DIP (Dependency Inversion Principle). An example with classes and interfaces:

UserService ------uses------> UserRepository <----implements----- MongoUserRepository

And (not so) surprisingly there is an equivalent principle for packages, called the SDP (Stable Dependencies Principle), coined by the same Robert C. Martin who came up with the DIP (and SOLID). If you haven't heard of them yet, I strongly advise following the links above.

Back to the game. After reinventing the wheel, I'm quite content with the result. We have found a general principle, not only to justify the "rightness" (from at least one point of view) of the package structure we ended up with, but one that might even give us some practical advice on how to achieve it. To put it in one sentence: if we have the package structure

(A) -> (B)

then move all classes in B that are not referenced directly from A to a new package B2, rename B to B1, and the dependencies will look like

(A) -> (B1) <- (B2)

Of course this is a simplistic example with only 2 packages involved, but I think the general idea shines through. And of course there are a lot of other important factors, package cohesion for example, that we haven't really taken into consideration. In the following posts I want to explore those and investigate the "usefulness" of some package coupling metrics.



Friday 25 October 2013

Represent your IDs as Value Objects

I'm a big fan of creating Value Object classes for ids. It might seem like over-engineering at first, so if you think so, please suspend your disbelief for a short time and let me try to show you three benefits.

Consider the following example. In your RPG game a Player can choose an Avatar for herself from a bunch. Let's assume first that you haven't followed my advice and simply used primitives (or their autoboxed versions) for the ids. It's easy to imagine that the code for this use case includes an application service (or facade if you like) which does everything application services are supposed to do, and delegates to a domain service.

In the Domain all kinds of checks and fiddling could ensue, with the playerId and the avatarId being passed on and on to various objects and methods.

//application service - orchestrates between domain services and objects,
//authenticates, handles transactions, ....
class PlayerAccessoriesService {
 
        public void assignAvatarToPlayer(Integer playerId, Integer avatarId) {
                authenticationCheck();
                logging();
                transactionStart();
                playerService.assignAvatarToPlayer(playerId, avatarId);
                transactionEnd();
        }      
}
//domain service
class PlayerService {
        
        public void assignAvatarToPlayer(Integer playerId, Integer avatarId) {
                avatarRepository.checkAvatarIdIsValid(avatarId);
                Player player = playerRepository.find(playerId);
                player.setAvatar(avatarId);
                playerRepository.update(player);
        }             
}
 
interface PlayerRepository {
      Player find(Integer playerId);  
}
interface AvatarRepository {
      void checkAvatarIdIsValid(Integer avatarId);  
}
So let's see the various problems with this approach!

Problem 1

Now, since both ids are Integers, imagine how easy it is to make a small mistake and pass the playerId instead of the avatarId somewhere. I assure you, in a fairly complex codebase with many moving parts, hours of debugging can follow before you detect this mistake. Introducing id classes like


class PlayerId {
        private final Integer value;
        public PlayerId(Integer value) { this.value = value; }
        public Integer getValue() { return this.value; }
}
eliminates this problem: passing a PlayerId where an AvatarId is expected no longer even compiles. The new method signatures would look like

class PlayerService {
        public void assignAvatarToPlayer(PlayerId playerId, AvatarId avatarId) { ... }             
}
The difference is somewhat similar to the one between static and dynamic typing. The transformation from primitives to Value Objects happens in the Anti-corruption layer, wherever you draw it (between the application and domain layers, or outside of the application layer).
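For example, the app service from above could do the wrapping itself (a sketch, assuming the client still sends raw Integers):

class PlayerAccessoriesService {
        public void assignAvatarToPlayer(Integer rawPlayerId, Integer rawAvatarId) {
                //wrap the primitives the moment they enter our world
                playerService.assignAvatarToPlayer(new PlayerId(rawPlayerId), new AvatarId(rawAvatarId));
        }
}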

Problem 2

At some point it turns out that Integer is too limited to hold the playerIds, and the decision is made to convert to Long. Without a wrapper class, every single method that takes it as an Integer argument must change (see the code above). That can mean a pretty large number of changes. If it's wrapped in a Value Object, the only places we need to change are the ones where it's actually evaluated (at the boundaries of the anti-corruption layers, where it is transformed to/from the primitive).

Problem 3

Let's assume the client was sloppy and sent us null as the playerId. The code passes it around for a while before something tries to actually use it (like the database), then most probably blows up with some strange exception. If it happens deep enough (a long way from the entry point, the app service), it might prove tricky to locate the source of the problem. But if we add some validation to the Value Object's constructor and wrap the primitive value at once when it steps inside our domain, like this
class PlayerId {
        private final Integer value;
        public PlayerId(Integer value) { 
                Validate.notNull("Player Id must be non-null", value)
                this.value = value; 
        }
        public Integer getValue() { return this.value; }
}
it will blow up immediately, making the problem very easy to locate. What we actually did here is apply the Fail-fast principle.

Naming of interfaces and abstract classes

This is gonna be a short one. What naming conventions (if any) do you follow for interfaces, abstract classes and implementations? Sometimes I come across the C# way of prefixing interfaces with 'I' and feel an instant dislike. I think by referring to the type of the reference we are breaching encapsulation and going against polymorphism as well. Let's see the following example.

class UserLifecycleService {
        private IUserRepository userRepository;
        public void createUser(User user) {
                //do some stuff, logging, authentication,...
                userRepository.create(user);
                //do some stuff, send out an event or whatever
        }
}
So the repository is an interface. What if later the need arises to make it an abstract class instead? Or a concrete one? Or the other way around: it was concrete, then we realised that we are supposed to depend on abstractions? What's happening here is that we've exposed an implementation detail to the client of the repository, creating an unwanted coupling. If we change the type of our component, then we need to make a code change in its client (here the UserLifecycleService), too.
But since it seems to be a default convention in C# circles (and I've come across some cases in Java projects too), I'd be interested in any counter-arguments.

Thursday 24 October 2013

Testing different implementations of the Domain Repository interface

This post is about a simple but useful way of writing tests against different implementations of a Repository interface. Assume you have a PlayerRepository in your Domain (this is just an interface) and two implementations in the Infrastructure layer, InMemoryPlayerRepository and MongoPlayerRepository. The latter is used in production, the former mainly in tests.

interface PlayerRepository {
        boolean exists(PlayerId playerId); 
}
class InMemoryPlayerRepository implements PlayerRepository {
        @Override public boolean exists(PlayerId playerId) {  ...  }
}
class MongoPlayerRepository implements PlayerRepository {
        @Override public boolean exists(PlayerId playerId) {  ...  }
}
I used to maintain separate test suites for them, since MongoPlayerRepository required, for example, a running Mongo process and a running Spring application context, while the in-memory version did not. But most tests were identical, so it was simply duplicated code. The solution I'm using nowadays is extracting the tests into an abstract test template class and doing the implementation-specific test configuration in the subclasses.

abstract class AbstractPlayerRepositoryTestTemplate {
        protected PlayerRepository testObj;
        @Test
        public void exists() { 
                // setup
                ...
                //assert
                assertTrue(testObj.exists(playerId));
        }
}
 
class InMemoryPlayerRepositoryTest extends AbstractPlayerRepositoryTestTemplate {
        @Before
        public void before() {
                testObj = new InMemoryPlayerRepository();
        }
}
 
@ContextConfiguration("classpath:applicationContext-test.xml")
@RunWith(SpringJUnit4ClassRunner.class)
class MongoPlayerRepositoryITest extends AbstractPlayerRepositoryTestTemplate {
        @Resource
        private MongoPlayerRepository mongoRepository;
        @Before
        public void before() {
                testObj = mongoRepository;
                mongoRepository.clear(); //wipe the collection so each test starts from a clean slate
        }
}
Of course you can use this approach for anything where there are multiple implementations of an interface; I just come across the need most frequently with Repositories.

Share your open source project with Sonatype

Two weeks ago the idea struck me that having a Maven plugin to check DDD layer violations and optionally break the build would prove very useful in our projects. Then, thanks to Kaloz, the idea grew more ambitious, so I've decided to

1. put a few more design checks in the plugin
2. take it open source.

After some googling I ended up configuring Sonatype Open Source Project Repository Hosting. This guide is a very good starting point; I'd only like to fill in a few holes I felt the guidance left.

1. Before creating a JIRA ticket, you have to own a domain, like tindalos.org, and your groupId will reflect it. A simple way to get a domain is through Go Daddy.  

2. As the Guide points out, you should install GPG on your computer. It went very smoothly on Mac (as opposed to Windows 8).

3. Put the following snippets into your settings.xml
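Something along these lines (the server ids come from the Sonatype OSS parent pom; use your JIRA credentials, and note the gpg profile matching the -Pgpg switch used below):

<settings>
  <servers>
    <server>
      <id>sonatype-nexus-snapshots</id>
      <username>your-jira-username</username>
      <password>your-jira-password</password>
    </server>
    <server>
      <id>sonatype-nexus-staging</id>
      <username>your-jira-username</username>
      <password>your-jira-password</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>gpg</id>
      <properties>
        <gpg.passphrase>your-gpg-passphrase</gpg.passphrase>
      </properties>
    </profile>
  </profiles>
</settings>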


4. The pom should look like
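Roughly like this, assuming you inherit from the Sonatype OSS parent pom (version 7 at the time of writing), which configures the distribution management and the gpg signing for you:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.sonatype.oss</groupId>
    <artifactId>oss-parent</artifactId>
    <version>7</version>
  </parent>
  <groupId>org.tindalos</groupId>
  <artifactId>your-artifact</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- plus the scm, licenses, developers and description sections required by the Central sync rules -->
</project>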


The groupId should match the one you specified when creating the JIRA ticket, and should resemble the domain you own.

5. Run the following commands on the command line:

mvn release:clean
mvn release:prepare -Pgpg
mvn release:perform -Pgpg

6. Once you've released your artifact with mvn release:perform, log in to Sonatype OSS. Find your staging repository, close it, then release it (the first time, I think, you also have to Promote it and comment on your JIRA ticket). 

That's it. In a short while you should see your artifacts in the Central Repo.