A frequent issue I come across when writing integration applications with Mule is deciding how to communicate back and forth between my front-end application, typically a web or mobile application, and a flow hosted on Mule.

I could use web services and do something like annotate a component with JAX-RS and expose it over HTTP. This is potentially overkill, particularly if I only want to host a few methods, if the methods are asynchronous, or if I don’t want to deal with the overhead of HTTP. It could also be a lot of extra effort if the only consumers of the API, at least initially, are internal-facing applications.

Another choice is to use “synchronous” JMS with temporary reply queues. While Mule makes this easy to do, particularly with MuleClient, I now have to deal with the overhead of spinning up a JMS infrastructure. I could also be limited to Java-only clients, depending on which JMS broker I choose. The latter is particularly significant, as Java probably isn’t the technology of choice on the web or mobile layer.

ØMQ for RPC

ØMQ, or ZeroMQ, is a networking library designed from the ground up to ease integration between distributed applications. In addition to supporting a variety of messaging patterns, which are enumerated in its extremely well-written guide, the library is written in platform-agnostic C with wrappers for languages like Java, Python and Ruby.

These features make it a good candidate to solve the challenges I introduced above, particularly since a community-contributed module for ØMQ was released recently. Let’s consider a simple service that accepts a request for a range of stock quotes and returns the results, and see how we can host this service with Mule and expose it with the ØMQ Module.

Data Serialization with Protocol Buffers

Data is transported back and forth over ØMQ as byte arrays. As such, we need to decide on a way to serialize our stock quote requests and responses “on the wire.” Before we do that, however, let’s take a look at the canonical Java data model we’re using on the client and server side.
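The important bits of the StockQuote and StockQuoteResponse classes look something like this (a sketch; the exact fields and types are assumptions):

```java
import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;
import java.util.List;

// StockQuote.java: a single quote for a symbol on a given day.
public class StockQuote implements Serializable {
    private String symbol;
    private BigDecimal price;
    private Date date;
    // getters and setters omitted for brevity
}

// StockQuoteResponse.java: wraps the List of quotes the service returns.
// (A static toProtocolBuffer helper used on the flow's response leg is
// assumed and omitted here.)
public class StockQuoteResponse implements Serializable {
    private List<StockQuote> quotes;
    // getters and setters omitted for brevity
}
```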

What follows is the interface for StockDataService.
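Something along these lines, with the request object included for context; the signatures are inferred from how the service is used later on:

```java
import java.io.Serializable;
import java.util.Date;
import java.util.List;

// StockQuoteRequest.java: describes the quotes being asked for.
// (A static fromProtocolBuffer helper used by the flow is assumed
// and omitted here.)
public class StockQuoteRequest implements Serializable {
    private String symbol;
    private Date startDate;
    private Date endDate;
    // getters and setters omitted for brevity
}

// StockDataService.java: the service Mule will host on the ZeroMQ endpoint.
public interface StockDataService {
    List<StockQuote> getStockQuotes(StockQuoteRequest request);
}
```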

We could use Java serialization to get the objects into byte arrays. Ignoring the other deficiencies of default Java serialization, the main drawback is that it limits our clients to ones running on a JVM. XML or JSON provide better alternatives, but for the purposes of this example we’ll assume we want a more compact representation of the data. (This isn’t totally unrealistic: stock quote data can be extremely time sensitive, and we probably want to minimize serialization and deserialization overhead.)

Protocol Buffers provide a good middle ground, and also boast a Mule Module that provides the transformers we need to move back and forth between the byte array representations. Let’s define two .proto files to specify the wire format and generate the intermediary stubs for serialization.
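Here’s a sketch of the two files; the field numbers and types are assumptions, but note the java_outer_classname options, which yield the StockQuoteRequestBuffer and StockQuoteResponseBuffer intermediary classes used below:

```proto
// stock_quote_request.proto
option java_outer_classname = "StockQuoteRequestBuffer";

message StockQuoteRequest {
    required string symbol = 1;
    required int64 start_date = 2;  // millis since the epoch
    required int64 end_date = 3;
}

// stock_quote_response.proto
option java_outer_classname = "StockQuoteResponseBuffer";

message StockQuote {
    required string symbol = 1;
    required double price = 2;
    required int64 date = 3;
}

message StockQuoteResponse {
    repeated StockQuote quote = 1;
}
```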

You would typically run the “protoc” compiler by hand to generate the Java stubs. This is tedious, however, so we’ll instead modify our project’s pom.xml to compile the .proto files during the build:
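One way to do this is sketched below with the exec-maven-plugin, assuming protoc is on the PATH and the .proto files live in src/main/proto; a dedicated protoc plugin would work just as well:

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.2.1</version>
    <executions>
        <execution>
            <!-- run protoc before compilation so the stubs are available -->
            <phase>generate-sources</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>protoc</executable>
                <arguments>
                    <argument>--proto_path=src/main/proto</argument>
                    <argument>--java_out=src/main/java</argument>
                    <argument>src/main/proto/stock_quote_request.proto</argument>
                    <argument>src/main/proto/stock_quote_response.proto</argument>
                </arguments>
            </configuration>
        </execution>
    </executions>
</plugin>
```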

Since we already have a domain model, we’ll add some helper classes to simplify the serialization tasks on the client side.
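Sketched here as a single static helper class; the class name and method bodies are illustrative, but listOfStockQuotesFromBytes is the helper the client code below relies on:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import com.google.protobuf.InvalidProtocolBufferException;

// Client-side helpers bridging the domain model and the generated stubs.
public final class StockQuotes {

    private StockQuotes() {
    }

    // Domain request -> protobuf intermediary, ready for the wire.
    public static StockQuoteRequestBuffer.StockQuoteRequest toProtocolBuffer(StockQuoteRequest request) {
        return StockQuoteRequestBuffer.StockQuoteRequest.newBuilder()
                .setSymbol(request.getSymbol())
                .setStartDate(request.getStartDate().getTime())
                .setEndDate(request.getEndDate().getTime())
                .build();
    }

    // Raw response bytes -> List of domain StockQuotes.
    public static List<StockQuote> listOfStockQuotesFromBytes(byte[] bytes)
            throws InvalidProtocolBufferException {
        StockQuoteResponseBuffer.StockQuoteResponse response =
                StockQuoteResponseBuffer.StockQuoteResponse.parseFrom(bytes);
        List<StockQuote> quotes = new ArrayList<StockQuote>();
        for (StockQuoteResponseBuffer.StockQuote buffer : response.getQuoteList()) {
            StockQuote quote = new StockQuote();
            quote.setSymbol(buffer.getSymbol());
            quote.setPrice(BigDecimal.valueOf(buffer.getPrice()));
            quote.setDate(new Date(buffer.getDate()));
            quotes.add(quote);
        }
        return quotes;
    }
}
```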

Configuring StockDataService

Now that we have a canonical data model and a wire format defined, we’re ready to wire up a Mule flow to expose the service. Note that for this to work you need to have jzmq installed locally on your system. Once it’s installed, the following dependency needs to be added to your pom.xml:
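Something like the following, where the coordinates and version are illustrative since the jar comes from your local jzmq build:

```xml
<dependency>
    <groupId>org.zeromq</groupId>
    <artifactId>zmq</artifactId>
    <version>2.1.0</version>
    <scope>system</scope>
    <systemPath>/usr/local/share/java/zmq.jar</systemPath>
</dependency>
```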

where systemPath is the location of the zmq.jar on your filesystem.

Once that’s out of the way, we can configure the flow, as illustrated below:
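A sketch of the configuration (namespace declarations omitted); the zeromq and protobuf element and attribute names are illustrative of the two community modules rather than lifted from their schemas, and the helper and class names match the sketches above:

```xml
<flow name="stockDataService">
    <!-- bind a request-response ZeroMQ socket on TCP port 9090 -->
    <zeromq:inbound-endpoint exchange-pattern="request-response"
                             socket-operation="bind"
                             address="tcp://*:9090"/>

    <!-- deserialize the raw byte[] into the generated intermediary class -->
    <protobuf:deserialize messageClass="StockQuoteRequestBuffer$StockQuoteRequest"/>

    <!-- MEL: intermediary -> domain model (the helper name is an assumption) -->
    <expression-transformer expression="#[StockQuoteRequest.fromProtocolBuffer(payload)]"/>

    <!-- invoke the service; it returns a List of StockQuotes -->
    <component class="StockDataServiceImpl"/>

    <!-- MEL: domain model -> protobuf via the toProtocolBuffer helper; the
         protobuf module then implicitly serializes the result to a byte[] -->
    <expression-transformer expression="#[StockQuoteResponse.toProtocolBuffer(payload)]"/>
</flow>
```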

The ZeroMQ inbound-endpoint is bound to TCP port 9090 with a request-response exchange pattern. The deserialize message processor from the protobuf module deserializes the byte array into the generated StockQuoteRequestBuffer class. From there we use MEL to invoke the helper method on StockQuoteRequest that transforms the intermediary class into the domain model.

The List of StockQuotes returned from StockDataService is then transformed by the MEL expression using the “toProtocolBuffer” helper method on the domain model. The Protocol Buffer Module is smart enough to implicitly transform the intermediary object into a byte array for the response.

Consuming the Service from the Client Side

Now that the server is ready, we can turn our attention to the client-side code that invokes the remote service. Let’s take a look at how this works:
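A sketch of the client using the jzmq bindings; the endpoint address and timeout are illustrative, and setReceiveTimeOut assumes a reasonably recent jzmq build:

```java
import java.util.Date;
import java.util.List;

import org.zeromq.ZMQ;

public class StockQuoteClient {

    public static void main(String[] args) throws Exception {
        // ask for a week's worth of Facebook quotes
        Date now = new Date();
        StockQuoteRequest request = new StockQuoteRequest();
        request.setSymbol("FB");
        request.setStartDate(new Date(now.getTime() - 7L * 24 * 60 * 60 * 1000));
        request.setEndDate(now);

        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket zmqSocket = context.socket(ZMQ.REQ);
        try {
            zmqSocket.setReceiveTimeOut(5000);          // don't block forever on recv
            zmqSocket.connect("tcp://localhost:9090");  // the flow's endpoint
            zmqSocket.send(StockQuotes.toProtocolBuffer(request).toByteArray(), 0);

            // block (up to the timeout) for the serialized response
            byte[] responseBytes = zmqSocket.recv(0);
            if (responseBytes == null) {
                throw new RuntimeException("Timed out waiting for a response");
            }

            List<StockQuote> quotes = StockQuotes.listOfStockQuotesFromBytes(responseBytes);
            for (StockQuote quote : quotes) {
                System.out.println(quote.getSymbol() + ": " + quote.getPrice());
            }
        } finally {
            zmqSocket.close();
            context.term();
        }
    }
}
```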

We start off by building a StockQuoteRequest that asks for all of Facebook’s quotes from the last week. We then open a ZMQ socket, set the receive timeout, connect to the ZMQ endpoint on the remote Mule instance and send it the byte representation of the StockQuoteRequest.

zmqSocket.recv is then used to receive the bytes back from Mule. From here we can use the listOfStockQuotesFromBytes helper method we wrote above to convert the Protocol Buffer representation into a List of StockQuotes. Despite the fair bit of plumbing we did above, this is a pretty concise bit of client-side code for invoking a remote service.

Conclusion

This blog post has only touched on the features of ØMQ and the ØMQ Mule Module. In addition to request-response, other exchange patterns are supported, like one-way, push and pull. These effectively give you the benefits of a reliable, asynchronous messaging layer without a centralized infrastructure. I hope to cover them in a later post.

Protocol Buffers also seem like a natural fit as a wire format for ØMQ: they echo ØMQ’s principles of being lightweight, fast and platform agnostic. These are also, not coincidentally, principles Mule shares as an integration framework.

The project for this example is available on GitHub.



2 Responses to “Lightweight RPC with ØMQ and Protocol Buffers”

Hassan November 5th, 2012, 2:27 am

Looks like an async alt. to the sync Restful/JAX-RS for distributed RPC-style calls. Not super clear to me how this is more light-weight than the Restful alt.
Actually, having an embedded messaging library seems more like overkill if all you need is a couple of RPC methods on the interface!?
For use cases that may justify this use, it’s a pretty cool thing to have the messaging layer w/o the infrastructure dependency… Next question – does this support persistent transactional messaging?

john.demic November 7th, 2012, 6:49 am

Thanks for the response Hassan. I personally think there’s a fair bit of overhead, both architecturally and operationally, defining an HTTP API for certain operations, but that’s definitely debatable – as you allude to. I think the above approach provides a certain amount of flexibility not offered by similar approaches, namely synchronous JMS. AFAIK there aren’t built-in transactional capabilities with ZeroMQ, but implementing a compensating transaction with exception strategies should be relatively easy.
