Sunday, May 25, 2008

Transformations using Domain Adapters

Actually, this entry started as a reply on the Mule-dev mailing list.

...

What caught my attention was your transformation challenge. Specifically, how you decided to have a less anemic domain model and move transformations there instead of dedicated transformers (hope I didn't misinterpret it). Could you shed more light on this move? This could be an interesting pattern for some cases.

Andrew

Well, the idea is that your domain uses adapters to wrap the source data and the accessors perform the transformation in place. Since it's usually a straight mapping, we haven't found the need to cache the values.
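
To make that concrete, here is a minimal sketch of such an adapter; the bean and field names (LegacyOrder, OrderAdapter, customerFirstName and so on) are made up for the example and are not from our code:

    import java.math.BigDecimal;

    // Hypothetical source bean as it arrives from the upstream system.
    class LegacyOrder {
        String customerFirstName;
        String customerLastName;
        String amountInCents;
    }

    // Domain adapter: wraps the source bean and maps values in the accessors.
    public class OrderAdapter {
        private final LegacyOrder source;

        public OrderAdapter(LegacyOrder source) {
            this.source = source;
        }

        // Straight mapping done on access; nothing is cached.
        public String getCustomerName() {
            return source.customerFirstName + " " + source.customerLastName;
        }

        public BigDecimal getAmount() {
            return new BigDecimal(source.amountInCents).movePointLeft(2);
        }
    }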

The mutators also store directly to the underlying source bean, except in the cases where the value is derived from multiple input fields and updating them would break some consistency rules (in such a case it is usually better not to provide a mutator at all, if you can avoid it). A third approach is to keep a map of changed properties: all mutators write to it, and all accessors check it first before falling back to the source bean. This way you don't have to do a deep copy when you move the message over the VM transport.
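
A rough sketch of that third approach, reusing the hypothetical LegacyOrder bean from the previous snippet (again, all names are illustrative only):

    import java.util.HashMap;
    import java.util.Map;

    // Mutators write to an overlay map, accessors check it first,
    // and the wrapped source bean is never modified.
    public class OverlayOrderAdapter {
        private final LegacyOrder source;
        private final Map<String, Object> changed = new HashMap<String, Object>();

        public OverlayOrderAdapter(LegacyOrder source) {
            this.source = source;
        }

        // Accessor: overlay first, then fall back to the straight mapping.
        public String getCustomerName() {
            if (changed.containsKey("customerName")) {
                return (String) changed.get("customerName");
            }
            return source.customerFirstName + " " + source.customerLastName;
        }

        // Mutator: record the change only; the source bean stays untouched.
        public void setCustomerName(String name) {
            changed.put("customerName", name);
        }
    }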

The technical part is that there is a transformer whose source and output classes are configured in the Mule configuration (I've had to add a custom setter for the source class). During transformer initialization, it resolves a constructor of the output class that takes a single instance of the source class as its argument. The transformation itself is just invoking that constructor with the payload. Note that the specified output class has to be a concrete class in this case. Perhaps I could have done something similar using expressions, but I like the type safety of this approach (if one of the classes is missing, it blows up as soon as the transformer initializes instead of going unnoticed).
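
The core of that transformer is just reflection. Roughly something like the following, stripped of the Mule base class and lifecycle plumbing, and with made-up class and method names:

    import java.lang.reflect.Constructor;

    // Resolves a constructor of the output class that takes the source class,
    // then invokes it per message.
    public class AdapterConstructorTransformer {
        private Class<?> sourceClass;   // injected from the Mule configuration
        private Class<?> outputClass;   // has to be a concrete class
        private Constructor<?> constructor;

        public void setSourceClass(Class<?> sourceClass) { this.sourceClass = sourceClass; }
        public void setOutputClass(Class<?> outputClass) { this.outputClass = outputClass; }

        // Run once at initialization: fail fast if the constructor is missing.
        public void initialise() throws NoSuchMethodException {
            constructor = outputClass.getConstructor(sourceClass);
        }

        // Per message: wrap the payload by invoking the resolved constructor.
        public Object transform(Object payload) throws Exception {
            return constructor.newInstance(payload);
        }
    }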

Pros:
  • You can easily trace why the data is the way it is.
  • Adding a new field requires changes to only one class (the adapter).
Cons:
  • At least the first layer of adapters is coupled to your source objects (if you use regular transformers, the transformer clearly decouples the source and output models). I would advise putting these adapters in separate packages.
  • Needs better regression testing. Usually one catches a good number of breaking data changes in the transformation step. Since we transform on demand, this means that you either need bigger unit tests or problems might go unnoticed until integration testing.
  • You lug a lot of data around, and I can imagine that the serialization and cloning overhead could become prohibitive. In such cases you can have a method like Adapter.pruneStuffIDontNeed() that removes the parts of the input message that have not been used so far (you also need to track which parts were used); there is a sketch of this after the list.
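
A sketch of that usage tracking plus pruning, again on the hypothetical LegacyOrder bean; which fields exist and how coarsely you prune depends entirely on your message:

    import java.util.HashSet;
    import java.util.Set;

    // Tracks which source fields have been read, so the rest can be dropped
    // before the message goes over an expensive (serializing) transport.
    public class TrackingOrderAdapter {
        private final LegacyOrder source;
        private final Set<String> usedFields = new HashSet<String>();

        public TrackingOrderAdapter(LegacyOrder source) {
            this.source = source;
        }

        public String getCustomerName() {
            usedFields.add("customerFirstName");
            usedFields.add("customerLastName");
            return source.customerFirstName + " " + source.customerLastName;
        }

        // Remove the parts of the input message that nobody has asked for so far.
        public void pruneStuffIDontNeed() {
            if (!usedFields.contains("amountInCents")) {
                source.amountInCents = null;
            }
            // ... repeat for the remaining fields
        }
    }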
