In my last post I went back to basics, describing how integration systems are built using messaging and explaining the motivation for the architectural changes in Mule 3.0.  Now that the stage is set and the “Why?” is clearly explained, we are ready to talk about the “What?”: the architectural changes themselves.

In this post we’ll zoom in and look more concretely at how the Pipes and Filters style is used for message processing in Mule 3, with some API snippets to really bring things down to earth.

From Architectural Style to API

Let’s start by revisiting the definition of the Pipes and Filters architectural style:

  • A Filter can be thought of as a black-box processing unit with a single entry point that receives a message, processes it, and then makes the result of that processing available.
  • A Pipe simply represents the connection between Filters, such that the result of Filter n is passed on and used as the input of Filter n + 1.

This architectural style can be used in a distributed fashion, where each Filter can potentially be located in a different place, on different hardware or operating systems.  It can also be used within a single system as a way of achieving a very flexible and modular design, and this is the model we have used for Mule’s internal architecture.

Mule’s architecture needs to allow you to chain and compose Filters in order to implement and expose message processing flows.  In its simplest form the chaining of Filters is all done in a single virtual machine and no explicit pipes need to exist.  The way I chose to implement this was to use the Decorator variant of the Intercepting Filter pattern, where each filter has a reference to the next filter.  The reason for using this variant is that it better supports the forking of message flows and runtime resolution of the next Filter than the variant that uses a Chain object and iterates over the Filters.

[Diagram: the Intercepting Filter pattern]

While pipes don’t explicitly exist, they can very easily be added as required by simply implementing the same Filter interface.  This technique could easily be used to transparently add message queuing between filters or, more explicitly, to distribute a flow.
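
To make this concrete, here is a minimal sketch of the Decorator variant using a hypothetical Filter interface (illustration only, not Mule’s API).  Each filter holds a reference to the next, and a queuing “pipe” could be slotted in simply by implementing the same interface:

    // Hypothetical Filter interface, for illustration only; not the Mule API.
    interface Filter
    {
        String process(String message);
    }

    // Decorator variant of Intercepting Filter: each filter references the
    // next filter directly rather than being iterated over by a Chain object.
    class UpperCaseFilter implements Filter
    {
        private final Filter next;

        UpperCaseFilter(Filter next)
        {
            this.next = next;
        }

        public String process(String message)
        {
            // Do this filter's work, then pass the result to the next filter.
            return next.process(message.toUpperCase());
        }
    }

    // The end of the chain simply returns the result.
    class TerminalFilter implements Filter
    {
        public String process(String message)
        {
            return message;
        }
    }

    public class ChainDemo
    {
        public static void main(String[] args)
        {
            Filter chain = new UpperCaseFilter(new TerminalFilter());
            System.out.println(chain.process("hello")); // prints HELLO
        }
    }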

The MessageProcessor

We chose to call our implementation of the Filter a “Message Processor” for two reasons: firstly, because it more accurately describes the application of Filters in the context of Mule, and secondly, to prevent any confusion with message filtering or the interfaces we currently have for filtering messages.

As you can see below, the MessageProcessor interface is very simple:
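
Modulo Javadoc, it boils down to this (the interface lives in Mule’s org.mule.api.processor package):

    package org.mule.api.processor;

    import org.mule.api.MuleEvent;
    import org.mule.api.MuleException;

    public interface MessageProcessor
    {
        // Processes the given event and returns the resulting event.
        MuleEvent process(MuleEvent event) throws MuleException;
    }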

This is the single, identical filter interface I talked about in the last blog post.  It defines a single process method that takes a MuleEvent and returns a MuleEvent, the result of the invocation.  A MuleEvent encapsulates the message being processed along with additional context information such as the inbound endpoint, security credentials, and so on.  In Mule 3 you’ll notice that almost everything that does anything with a message now implements this interface.

On the Building Blocks page of the Mule 3 User Guide you’ll find information about the different types of message processor that can be used with Mule, organized by category.

Where do messages come from? The MessageSource!

I said that almost everything in Mule 3 now implements MessageProcessor. One of the few things that do not are inbound endpoints. Why not? Because they do not process messages but rather act as sources of messages. We could potentially have other types of message source in the future too. A message source may receive or generate messages in different ways depending on its implementation, but from the perspective of the rest of the flow it is just a source.

The MessageSource interface simply allows a MessageProcessor listener to be set, as you can see below.  This listener may be a single MessageProcessor or a chain of MessageProcessors; it doesn’t matter, and the source doesn’t need to know. That’s the beauty of this model.
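
Again modulo Javadoc, it looks essentially like this (from org.mule.api.source):

    package org.mule.api.source;

    import org.mule.api.processor.MessageProcessor;

    public interface MessageSource
    {
        // The listener is invoked with every event the source receives or generates.
        void setListener(MessageProcessor listener);
    }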

Let’s create a chain…

Ok, so we now have the MessageProcessor and the MessageSource, and a MessageProcessor can be set as a MessageSource’s listener, connecting them.  This is all great, but it doesn’t allow us to chain two or more message processors as described above.  The chosen approach does not rely on iterating through a list of MessageProcessors but rather on each MessageProcessor knowing which MessageProcessor comes next.  What this comes down to is that each MessageProcessor needs to be a MessageSource too.  So that’s what we have: the InterceptingMessageProcessor, which implements both MessageProcessor and MessageSource.
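
The interface itself is essentially just the combination of the two:

    package org.mule.api.processor;

    import org.mule.api.source.MessageSource;

    // Both a MessageProcessor and a MessageSource: it processes an event and
    // then passes the result on to its listener, the next processor in the chain.
    public interface InterceptingMessageProcessor extends MessageProcessor, MessageSource
    {
    }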

An InterceptingMessageProcessor is used where the next item in the chain is known when the chain is created; otherwise, alternative MessageProcessor implementations can be used to determine the next message processor at runtime.  One example of this is message routing.
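
To sketch how this plays out in practice, here is a hypothetical intercepting processor (LoggingProcessor is my own example; only the interfaces are Mule’s) that does its work and then delegates to its listener:

    import org.mule.api.MuleEvent;
    import org.mule.api.MuleException;
    import org.mule.api.processor.InterceptingMessageProcessor;
    import org.mule.api.processor.MessageProcessor;

    public class LoggingProcessor implements InterceptingMessageProcessor
    {
        private MessageProcessor next;

        public void setListener(MessageProcessor listener)
        {
            this.next = listener;
        }

        public MuleEvent process(MuleEvent event) throws MuleException
        {
            System.out.println("About to process: " + event.getMessage());
            // Delegate to the next processor in the chain and return its result.
            return next.process(event);
        }
    }

Building a chain is then just a matter of wiring listeners: source.setListener(first), first.setListener(second), and so on.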

How about routing?

With the above interfaces we can easily chain together as many MessageProcessors as we like, but you’ll notice that an InterceptingMessageProcessor, being a MessageSource, has only a single listener. This allows us to filter the flow or proxy the next MessageProcessor, but not to route to a processor resolved at runtime or to multiple processors.

A MessageRouter, by definition, has multiple possible routes and is responsible for choosing which route should be used to continue processing a particular message. The MessageRouter interface below is a MessageProcessor (of course), but rather than allowing a single listener to be set, it allows multiple routes to be added (or removed).
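
Stripped of Javadoc, it looks roughly like this (also in org.mule.api.processor):

    package org.mule.api.processor;

    import org.mule.api.MuleException;

    public interface MessageRouter extends MessageProcessor
    {
        // Adds the given processor as one of the possible routes.
        void addRoute(MessageProcessor processor) throws MuleException;

        // Removes the given processor from the possible routes.
        void removeRoute(MessageProcessor processor) throws MuleException;
    }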

MessageRouter implementations determine which route(s) should be used.
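
For example, a purely illustrative round-robin router (my own sketch, not something that ships with Mule) could look like this:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    import org.mule.api.MuleEvent;
    import org.mule.api.MuleException;
    import org.mule.api.processor.MessageProcessor;
    import org.mule.api.processor.MessageRouter;

    public class RoundRobinRouter implements MessageRouter
    {
        private final List<MessageProcessor> routes = new CopyOnWriteArrayList<MessageProcessor>();
        private int index;

        public void addRoute(MessageProcessor processor)
        {
            routes.add(processor);
        }

        public void removeRoute(MessageProcessor processor)
        {
            routes.remove(processor);
        }

        public MuleEvent process(MuleEvent event) throws MuleException
        {
            // Pick the next route in turn and continue processing there.
            MessageProcessor route = routes.get(index++ % routes.size());
            return route.process(event);
        }
    }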

Summary

The quickest and easiest way to summarize is with a UML class diagram :-)

[UML class diagram: MessageSource, MessageProcessor, InterceptingMessageProcessor and MessageRouter]

What next?

I don’t have another post planned for this series, but if there is anything you’d like me to cover, I’m quite happy to do so; just let me know via a comment.

Anyhow, now that we’ve learnt about the streamlined architecture and its API, you are probably wondering what this means for you, the user.  We have a couple of blog posts on the way for you:

  • “Configuring Message Processors on Endpoints” will describe how endpoints have been made more extensible and configurable using Message Processors.
  • “Going with the <flow> in Mule 3” will describe and explain how to use the new <flow> construct in Mule 3, which allows you to build flows the way you want.

I’m sure others will also follow…

Responses to “Mule 3 Architecture, Part 2: Introducing the Message Processor”

Suresh December 2nd, 2010, 8:54 am

The MessageSource interface is confusing. The message source should produce a message and a MessageTarget should consume/listen to a message. Setting a MessageProcessor as a listener in the MessageSource interface does not make any sense at all.

Daniel Feist December 2nd, 2010, 9:24 am

The MessageSource does produce messages, and for every new message that is produced the listener is invoked. Listener/MessageProcessor/MessageTarget are synonyms in my mind. Or are you suggesting that rather than the source having a single listener, processors should instead have 0..n sources?

Sanjay Bharatiya May 23rd, 2011, 9:10 pm

This is a wonderful article. Can you please publish an article that explains the way response messages are dealt with in a request-response exchange pattern? Also, how can I find which outbound endpoint was invoked after a response has come back?

Xavier April 17th, 2012, 11:49 pm

Something based on this series should be the starting point of the Mule documentation! Thanks!
The missing link to the followup post is http://blogs.mulesoft.org/go-with-the-flow-with-mule-3-1/
