Mule Clustering is the easiest way to transparently synchronize state across Mule applications in a single data center. Mule Clustering, however, assumes that the Mule nodes are “close” to each other in terms of network topology, typically on the same network. This allows Mule applications to be developed independently of the underlying cluster technology, without explicitly accounting for scenarios like network latency or cluster partitioning.
These assumptions aren’t as sound when dealing with multi-data-center deployments. Unless you’re lucky enough to have fast and reliable interconnects between your DCs, you need to start accounting for latency between data centers, the remote data center going offline, and so on. In such situations the choice of a data synchronization mechanism becomes paramount.
I’m pleased to announce the availability of the Oct 2013 release of Studio and CloudHub. It greatly expands our support for DataSense in our Anypoint™ Connectors, improves connector usability through auto-paging of result sets, and includes many other improvements.
Mule ESB has long followed a release schedule that introduces a new version of our industry-leading ESB software every 9–12 months, supplemented with maintenance releases approximately every 6 months. Though this cadence fit well with the demands of our customers who deploy Mule on premises, we came to realize that our customers deploying Mule to CloudHub were much more flexible about updating software versions, and were more eager to take advantage of new features and functionality.
As I’ve talked about in a previous webinar, Welcome to the API Economy, the adoption of APIs is driving immense change in how organizations connect with their customers, suppliers and partners. Nowhere is this change more marked than in healthcare, where healthcare payers and providers are using technology to reinvent how they deliver healthcare services and how they engage with patients.
In our next webinar, we’re excited to host Kin Lane, currently a White House Presidential Innovation Fellow at the Department of Veterans Affairs, and Ed Martin, Deputy Director at the UCSF’s School of Medicine and Center for Digital Health Innovation. Kin and Ed will provide first-hand perspectives on how APIs are being used to extend the point of care, build and nurture patient communities, and provide an agility layer for mainframe systems.
Suppose that you have a Maven project and you want to download Node.js modules previously published to NPM. One way of doing that without running Node is the npm-maven-plugin: it is implemented entirely on the JVM, so it can fetch the required Node modules without a Node.js installation.
First of all you will need to add the Mule Maven repo to your pom.xml file:
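A minimal sketch of the repository entry; the repository `id` and URL shown here are assumptions based on MuleSoft’s public release repository, so check them against the current plugin documentation:

```xml
<!-- Plugin repository hosting the npm-maven-plugin (URL is an assumption) -->
<pluginRepositories>
  <pluginRepository>
    <id>mulesoft-releases</id>
    <name>MuleSoft Releases</name>
    <url>https://repository.mulesoft.org/releases/</url>
  </pluginRepository>
</pluginRepositories>
```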
After doing that, you will need to add the following to the build > plugins section of your pom.xml file:
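A sketch of the plugin declaration follows; the coordinates, goal name, version, and the example `package` entry are illustrative assumptions, so verify them against the plugin’s own documentation:

```xml
<!-- npm-maven-plugin sketch: coordinates, goal, and module names are illustrative -->
<plugin>
  <groupId>org.mule.tools.javascript</groupId>
  <artifactId>npm-maven-plugin</artifactId>
  <version>1.0</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>fetch-modules</goal>
      </goals>
      <configuration>
        <packages>
          <!-- each entry is module:version as published on NPM -->
          <package>express:3.4.4</package>
        </packages>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, the declared NPM modules are downloaded during the build, entirely on the JVM.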
We have a lot of cool things happening at MuleSoft, here is a quick round up of things you shouldn’t miss.
Discover how to take your integration strategy to the next level at MuleSoft Summit — coming to a city near you this Fall! Join the core MuleSoft team and integration experts to learn best practices and empower your development team to stay one step ahead of evolving business needs. The eight cities on this Fall Summit tour are:
In the past few months, you may have noticed that we have regularly announced the release of new Mule connectors for NoSQL data-stores. Two main forces are at play behind the need for these types of data-stores:
Big Data – The need to deal in real time, or near real time, with the vast amounts of data that “web-scale” applications can generate,
BASE vs. ACID – The need to scale reliably in the unreliable environment that is the cloud, leading to the relaxation of the RDBMS ACID properties (Atomicity, Consistency, Isolation and Durability) towards BASE ones (Basically Available, Soft state, Eventually consistent).
So where does Mule come into play in this equation, you might ask?
Mule can help integrate such NoSQL data-stores with the resources that produce and consume data. This integration goes well beyond simply establishing protocol connectivity: thanks to Mule’s queuing, routing and transformation infrastructure, important tasks like data capture and curation can be achieved. Mule can also be used to expose APIs that make either raw data or processed data available for use in custom applications.
In your daily work as an integration developer you’re working with different kinds of patterns, even if you’re not aware of it.
Since Mule is based on EIP (Enterprise Integration Patterns) you’re most definitely using patterns when using Mule.
One of those patterns that seems to raise a lot of questions is the “fork and join” pattern. The purpose of the fork and join pattern is to send a request to different targets in parallel, and wait for an aggregated response from all the targets.
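Outside of Mule’s XML configuration, the idea behind fork and join can be sketched in plain Java. This is a conceptual illustration only, not the Mule implementation; the target names and responses are made up:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ForkJoinSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Fork: dispatch the request to several targets in parallel.
        // Each Callable stands in for a call to one downstream system.
        List<Callable<String>> targets = List.of(
                () -> "response-from-A",
                () -> "response-from-B",
                () -> "response-from-C");
        List<Future<String>> futures = pool.invokeAll(targets);

        // Join: block until every target has answered, then aggregate.
        StringBuilder aggregated = new StringBuilder();
        for (Future<String> f : futures) {
            aggregated.append(f.get()).append(";");
        }
        System.out.println(aggregated); // responses arrive in submission order

        pool.shutdown();
    }
}
```

In Mule the forking, waiting and aggregating are handled declaratively by the routing infrastructure rather than hand-written thread code, but the semantics are the same: no response is returned until all targets have replied.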
The recently upgraded Redis connector for Mule allows you to interact with this NoSQL data-store in a convenient manner. This blog is a tutorial you can follow to get your feet wet with Redis (if you don’t know it already) or with Mule (if you have Redis experience and want to see how the two can work together).
In this tutorial, we will build a very simple back-end that captures page visit count for identified users via a web bug. This example illustrates the usage of Mule as a tool for capturing events and routing them to NoSQL storage for later analysis.
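A per-user visit counter maps naturally onto Redis’s atomic INCR command, which is what the back-end boils down to conceptually. The key naming scheme below is illustrative, not the one used in the tutorial:

```
INCR visits:user:42    # atomically bump the visit count for user 42
GET  visits:user:42    # read the current count back
```

Because INCR is atomic on the server, concurrent page hits from the same user never lose updates, which is exactly the property an event-capture back-end needs.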
It’s hard to believe that MuleSoft’s Fall 2013 Summit series is less than a month away. Summit is one of the most rewarding things I do all year. For me, it’s an opportunity to talk to many of our customers, partners and prospects about the integration challenges they face and the innovative ways they’re using our solutions to address them. Summit is a great opportunity to share best practices, lessons learned, and network with other like-minded members of the MuleSoft community.