Security is an ever-present concern for IT. It can be a rather daunting area when one considers all of the different possible dangers and the large variety of solutions to address them. But the aim of enterprise security really just boils down to establishing and maintaining various levels of access control. Mule itself has always facilitated secure message processing at the level of the transport, the service layer and the message. Mule configurations can include all that Spring Security has to offer, giving, for example, easy access to an LDAP server for authentication and authorisation. On top of that, Mule applications can apply WS-Security, thus facilitating, for example, the validation of incoming SAML messages. But in this post, rather than delve into all the details of the very extensive security feature set, I would rather approach the subject by considering the primary concerns that drive the need for security in a Service Oriented Architecture, how the industry as a whole has addressed those concerns, the consequent emergence of popular technologies based on this industry best practice and, finally, the implementation of these technologies in Mule.
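To make the LDAP point concrete, here is a minimal sketch of the kind of Spring Security LDAP authentication a Mule configuration can delegate to. The server URL, base DN and user DN pattern are placeholder assumptions, not values from any real setup:

```java
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.ldap.DefaultSpringSecurityContextSource;
import org.springframework.security.ldap.authentication.BindAuthenticator;
import org.springframework.security.ldap.authentication.LdapAuthenticationProvider;

public class LdapAuthSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical LDAP server and base DN -- replace with your own.
        DefaultSpringSecurityContextSource contextSource =
            new DefaultSpringSecurityContextSource("ldap://localhost:389/dc=example,dc=com");
        contextSource.afterPropertiesSet();

        // Authenticate by binding as the user; the DN pattern is an assumption.
        BindAuthenticator authenticator = new BindAuthenticator(contextSource);
        authenticator.setUserDnPatterns(new String[] { "uid={0},ou=people" });

        LdapAuthenticationProvider provider = new LdapAuthenticationProvider(authenticator);
        Authentication result =
            provider.authenticate(new UsernamePasswordAuthenticationToken("jdoe", "secret"));
        System.out.println("Authenticated: " + result.isAuthenticated());
    }
}
```

In a Mule application you would wire the same provider into the security manager rather than call it by hand; the point is that anything Spring Security can authenticate against, Mule can too.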

At MuleSoft, we’ve been saying for years that point-to-point integration is evil. With time to market measured in minutes or hours, point-to-point integration projects measured in man-years are headed the way of the Dodo. And the no-software no-hardware model of iPaaS promises to shrink time to market even more.

But how fast can you deploy an enterprise-grade integration from scratch? We’re setting out to break preconceived notions of time to market with the 15-minute integration. Like the 4-minute mile before Roger Bannister, the 15-minute integration sounds like a myth. So is it for real?

As Nicolas pointed out in “7 things you didn’t know about DataMapper”, it’s not a trivial task to map a big file to some other data structure without eating up a lot of memory.

If the input is large enough, you might run out of memory: either while mapping or while processing the data in your flow-ref.

Enabling the “Streaming” function in DataMapper makes this a lot easier (and more efficient!).
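Streaming works record by record instead of materializing the whole input in memory. As a rough illustration of the same principle in plain Java (a conceptual sketch, not DataMapper’s actual implementation), compare reading a large CSV file line by line against loading it fully into a list first:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class StreamingCsvSketch {
    public static void main(String[] args) throws IOException {
        // Streaming: only one record is held in memory at a time,
        // so a multi-gigabyte file never blows up the heap.
        try (BufferedReader reader = new BufferedReader(new FileReader("big-input.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                // Map each record to the target structure here,
                // instead of collecting all records into a List first.
                System.out.println(fields[0]);
            }
        }
    }
}
```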


When we started working on the Mule High Availability (HA) solution we wanted to create the simplest and most complete ESB HA solution out there. With Mule 3.4 we have further enhanced the capabilities of the Mule HA solution. In this blog post we would like to share with you some details about the following highlight HA features of Mule 3.4:

  • Dynamic Scale Out
  • Unicast Cluster Discovery
  • Distributed Locking (sketched in the code example after this list)
  • Concurrent File Processing
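As a taste of the Distributed Locking feature, here is a minimal Java sketch using the lock factory introduced in Mule 3.4, which hands out locks that become cluster-wide when the application runs in a cluster. The lock name and the surrounding component are illustrative assumptions:

```java
import java.util.concurrent.locks.Lock;

import org.mule.api.MuleContext;
import org.mule.api.context.MuleContextAware;

public class CounterUpdater implements MuleContextAware {
    private MuleContext muleContext;

    public void setMuleContext(MuleContext context) {
        this.muleContext = context;
    }

    public void updateSharedCounter() {
        // "counter-lock" is a made-up lock name; any node in the cluster
        // asking for the same name gets the same distributed lock.
        Lock lock = muleContext.getLockFactory().createLock("counter-lock");
        lock.lock();
        try {
            // ... critical section: only one node at a time runs this ...
        } finally {
            lock.unlock();
        }
    }
}
```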

If you think that telemetry should only be dealt with by Mr. Chekov, think again… When the “Internet of things” met publish/subscribe, the need for a lightweight messaging protocol became more acute. And this is when the MQ Telemetry Transport (MQTT, in acronym parlance) came into play. In its own words, this connectivity protocol “was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium”.

In a world where everything will eventually have an IP address, messages will be constantly flowing between devices, message brokers and service providers. And when there are messages to massage, who you gonna call? Mule ESB of course! With this new MQTT Connector, built on Dan Miller’s solid ground work, the Internet of things, which had its rabbit, will now have its Mule!
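For a feel of how lightweight MQTT publishing is, here is a minimal sketch using the Eclipse Paho Java client; the broker URL, client id and topic are placeholder assumptions, and inside a Mule flow the connector does this kind of work for you:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttPublishSketch {
    public static void main(String[] args) throws MqttException {
        // Hypothetical broker address and client id.
        MqttClient client = new MqttClient("tcp://localhost:1883", "booth-scanner-1");
        client.connect();

        MqttMessage message = new MqttMessage("attendee-qr-code-payload".getBytes());
        message.setQos(1); // at-least-once delivery, a good fit for flaky networks

        client.publish("conference/scans", message);
        client.disconnect();
    }
}
```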

In this blog we will look at an example where Mule is used to integrate conference booth QR code scanners with an MQTT broker. Why use MQTT? If you’ve ever been to a technical conference and expo, you’ve surely tasted how abysmal the quality of a Wi-Fi network can be. Besides confirming that the shoemaker’s children always go barefoot, it is also an encouragement to use a messaging protocol that’s both resilient and modest in its network needs. With this said, let’s first start by looking at the overall architecture diagram.

Apache Cassandra is a column-based, distributed database. Until recently, the only way to interact with Cassandra from Mule was to reuse one of the existing Java clients, like Hector or Astyanax, in a component. Mule’s Cassandra DB Module now provides message processors to insert, update, query and delete data in Cassandra.

To show off some of the features of the Cassandra module I’ll show how to implement a simple account management API.  This API will allow clients to perform CRUD operations on accounts, behaving similarly to something like an LDAP directory.
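To appreciate what the module’s message processors save you, here is a rough sketch of the pre-module approach: inserting an account column with the Hector client directly. The cluster address, keyspace and column family names are made up for illustration:

```java
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class HectorInsertSketch {
    public static void main(String[] args) {
        // Hypothetical cluster address, keyspace and column family.
        Cluster cluster = HFactory.getOrCreateCluster("test-cluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("AccountsKeyspace", cluster);

        // Write one column of an account row.
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
        mutator.insert("account-42", "Accounts",
            HFactory.createStringColumn("email", "jdoe@example.com"));
        // The Cassandra DB Module wraps this kind of boilerplate
        // in declarative message processors.
    }
}
```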

Picture an architecture where production data gets painstakingly replicated to a very expensive secondary database, where, eventually, yesterday’s information gets analyzed. What’s the name for this “pattern”? If you answered “Traditional Business Intelligence (BI)”, you’ve won a rubber Mule and a warm handshake at the next Mule Summit!

As the volume of data to analyze kept increasing and the need to react in real-time became more pressing, new approaches to BI came to life: the so-called Big Data problem was recognized and a range of tools to deal with it started to emerge.

Apache Hadoop is one of these tools. It’s “an open-source software framework that supports data-intensive distributed applications. It supports the running of applications on large clusters of commodity hardware. Hadoop was derived from Google’s MapReduce and Google File System (GFS) papers” (Wikipedia). So how do you feed real-time data into Hadoop? There are different ways, but one consists of writing directly to its primary data store, named HDFS (aka Hadoop Distributed File System). Thanks to its Java client, this is very easily done in simple scenarios. If you start throwing in concurrent writes and the need to organize data in specific directory hierarchies, it’s a good time to bring Mule into the equation.
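To ground the simple-scenario claim, here is a minimal sketch of writing a record to HDFS with the plain Java client, organizing files in a date-based directory hierarchy. The NameNode address and path layout are illustrative assumptions:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");
        FileSystem fs = FileSystem.get(conf);

        // Organize time series data by date: /events/yyyy/MM/dd/readings.log
        String dir = new SimpleDateFormat("yyyy/MM/dd").format(new Date());
        Path path = new Path("/events/" + dir + "/readings.log");

        try (FSDataOutputStream out = fs.create(path, true)) {
            out.writeBytes("2013-06-21T10:15:00Z,sensor-7,42.0\n");
        }
        fs.close();
    }
}
```

This is fine for a single writer; it is exactly the concurrent-writes and housekeeping part that the connector takes off your hands.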

In this post we will look at how Mule’s HDFS Connector can help you write time series data in HDFS, ready to be map-reduced to your heart’s content.

In Part 1 of this three-part blog, we created an HTTP REST service that retrieves employee records from an HR database and returns them in JSON format. In Part 2, we took a look at how to easily turn this into a SOAP XML service without any coding by utilizing the SOAP component for top-down web service generation and the DataMapper for transformations. Let’s now publish the Employee Record as a message to MQ, which is a common approach for integrating with legacy on-premise systems. (Note: Setup steps for the necessary software are at the end of each part. Parts 1 and 2 of this blog need to be completed.)
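As a rough illustration of what publishing to a queue involves under the covers, here is a minimal JMS sketch that sends a JSON employee record, assuming an ActiveMQ broker and a queue name invented for the example; in Mule this is a declarative JMS outbound endpoint, not hand-written code:

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsPublishSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL and queue name.
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("employees");
        MessageProducer producer = session.createProducer(queue);

        // The employee record serialized as JSON, as produced in Part 1.
        TextMessage message = session.createTextMessage(
            "{\"id\": 1, \"name\": \"Jane Doe\", \"department\": \"HR\"}");
        producer.send(message);

        connection.close();
    }
}
```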

In Part 1 of this three-part blog, we created a simple message flow in Mule Studio exposed as a basic HTTP service that retrieves employee data from an HR database and returns it in JSON format. JSON is a standard format that is very popular among web and mobile applications. Let’s now take a look at how to easily turn this into a SOAP web service, a standard widely used in internal SOA and on-premise integration projects. We will do this without any coding. We will first generate a SOAP web service using a top-down approach with an existing WSDL and then graphically map the database table structure to the expected message format of the SOAP web service. (Note: Setup steps for the necessary software are at the end of each part. Part 1 of this blog needs to be completed.)
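For context on what top-down generation produces, here is a minimal JAX-WS sketch of a service class like the one the SOAP component wires up for you. The class, operation name and endpoint URL are placeholder assumptions; in a true top-down setup the service interface would be generated from the existing WSDL:

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// In a true top-down setup, wsimport generates a service interface from the
// existing WSDL and this class would declare it via
// @WebService(endpointInterface = "..."); the simpler form below keeps the
// sketch self-contained and runnable.
@WebService
public class EmployeeService {

    // The operation name and message shape would come from the WSDL contract.
    public String getEmployee(int employeeId) {
        return "Jane Doe"; // look up the record in the HR database here
    }

    public static void main(String[] args) {
        // Publish the service for a quick local test.
        Endpoint.publish("http://localhost:8080/employee", new EmployeeService());
    }
}
```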

I made a shift to MuleSoft! After spending most of my career in Big Red and Big Blue, I decided to jump from the walls of the big commercial enterprise technology vendors to the fast-moving world of open-source technologies, SaaS and the Cloud. I’ve worked with several of the traditional on-premise integration tools from both of these vendors, and now I’ll be working with MuleSoft’s latest and greatest integration platform that brings integration to the cloud.