Category: Mule ESB

Today I will introduce our performance test of the Batch Module, which shipped in Mule's December 2013 release. I will guide you through the test scenario and explain all the data we collected.

But first, if you don’t know what batch is, please read the great Batch blog post from our star developer Mariano Gonzalez, and for any other questions you also have the documentation.

Excited? Great! Now we can get into the details. This performance test was run on a CloudHub Double worker, using the default threading profile of 16 threads. We will compare on-premise vs. cloud performance; henceforth we will talk about HD vs. CQS performance. Why? By default, both on-premise and CloudHub users get the hard disk (HD) for temporary storage and resilience. On CloudHub, however, this is not very useful: if the worker is restarted for any reason, the current job will lose all its messages. When Persistent Queues are enabled, the Batch module will instead automatically store all the data in CQS (Cloud Queue Storage) to achieve the expected resilience.

We’ve all been there. Sooner or later, someone asks you to periodically synchronize information from one system into another. No shame in admitting it; it happens in the best of families. Such an integration should start with getting the objects that have been modified since the last sync (or all of them, in the case of the very first sync). Sure, this first part sounds like the easiest step of the whole sync process (and in some cases it actually is), but it’s not without its complexity. Step by step you need to:

  • Go into a persistent store and grab the timestamp for the last update
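The first step above can be sketched as a watermark-based sync. This is a minimal, hypothetical illustration (the names `run_sync` and `last_sync`, and the dict standing in for a real persistent store, are assumptions, not Mule APIs): read the last-update timestamp, fall back to the epoch on the very first run, fetch only newer objects, then advance the watermark.

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def get_watermark(store):
    # Last-sync timestamp from the persistent store,
    # or the epoch on the very first sync (fetch everything).
    return store.get("last_sync", EPOCH)

def fetch_modified_since(objects, watermark):
    # Keep only objects touched after the watermark.
    return [o for o in objects if o["modified"] > watermark]

def run_sync(store, objects, now):
    changed = fetch_modified_since(objects, get_watermark(store))
    # ... push `changed` into the target system here ...
    store["last_sync"] = now  # advance the watermark for the next run
    return changed
```

In a real integration the store would be a database or object store rather than an in-memory dict, but the watermark logic is the same.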

Companies are adopting a variety of SaaS applications to meet their business goals but are ending up with a highly fragmented ecosystem. At the same time, customers are interacting with businesses across an increasing number of channels, including websites, numerous social media platforms, call centers, in-store and more. With so many customer touchpoints, optimizing engagement and having a single view of your customer data across all applications, devices, and interactions is crucial. Many businesses are interested in becoming customer companies and improving service and support across all channels, but just don’t know where to start.

Fun Fact – Here at MuleSoft, we use nearly 80 SaaS applications!

The challenging part of adopting new SaaS applications is connecting them to existing applications and syncing data across a customer’s lifecycle to ensure a streamlined customer experience. If these applications are not connected, data remains siloed and companies fail to obtain a single view of the customer. An integration platform makes it possible to break down those silos, streamline processes, build a future-proof enterprise, and improve customers’ experience.

We are all very proud to announce that Mule’s December 2013 release shipped with a major leap forward feature that will massively change and simplify Mule’s user experience for both SaaS and On-Premise users. Yes, we are talking about the new Batch jobs. If you need to handle massive amounts of data, or you’re longing for record based reporting and error handling, or even if you are all about resilience and reliability with parallel processing, then this post is for you!

We’re excited to announce the Mule Studio (December 2013) release! It includes a new CloudHub Mule Runtime (December 2013), which introduces new batch capabilities for complex ETL-like data integration tasks such as synchronizing SaaS and on-premise applications, flat file processing, and database reporting and synchronization. Also new in this release are Studio support for Mule Expression Language auto-completion, cron expressions for poll scheduling, and support for managing job schedules in CloudHub.


Join David Chao, Senior Product Manager at MuleSoft, as he discusses the state of the media industry with guest speaker Richard Donovan from Sky in our upcoming MuleSoft webinar, “Integration Trends in Media: 2013 Review.”

During 2013, media companies have been working with a sense of purpose to expand their traditional distribution models and move towards delivering content across new platforms. Cross-platform integration will be key, as another significant development this year is consumers’ changing perspective on how they want to receive and interact with content, which, in turn, means media companies will need to get closer to their customers.

Picture cool kids in startups, cranking out code as if their lives depend on it, focusing on the proverbial MVP above all else. At this stage, who cares if technical debt accumulates as fast as code gets written? It would be a waste of time and focus to try to keep the field as green as it was initially. Then the worst happens: the cool kids have it right, people love their new app, and traffic starts to surge. Though strong, the duct tape that holds the application together starts to show signs of fatigue. Maintenance becomes painful; adding new features is excruciating. The blood of the architecture that was sacrificed on the altar of time-to-market is calling for revenge.

One of the most typical architectural mishaps that comes back to haunt startups is tight coupling: the whole system is a monolith where coupling manifests itself both temporally (everything is synchronous) and through a lack of abstraction in the interactions between subsystems (everything knows the intimate details of everything else).

The good news is that there is hope: the giants of times past, upon whose shoulders everything is built, have fought these problems and won. Take Hohpe and Woolf’s Enterprise Integration Patterns (EIP), for example. They discuss how messaging can be used to alleviate coupling issues. Sure enough, the “enterprise” in the name makes our startups’ cool kids cry “run away!” So in this post we’ll look at a few of these patterns and how they could be used beneficially in modern applications. And hopefully these patterns will feel more lovely than enterprisey!
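The temporal decoupling that messaging buys you can be sketched in a few lines, here as an in-process point-to-point channel using Python's standard `queue` and `threading` modules (a stand-in for a real broker such as JMS or AMQP; the `producer`/`consumer` names are illustrative, not from EIP code):

```python
import queue
import threading

def producer(q, orders):
    # Publish messages and return immediately: the producer never
    # blocks on, or knows about, the consumer (temporal decoupling).
    for order in orders:
        q.put(order)
    q.put(None)  # sentinel: no more messages

def consumer(q, results):
    # Drain the channel at its own pace, independently of the producer.
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg.upper())  # stand-in for real processing

q = queue.Queue()
results = []
worker = threading.Thread(target=consumer, args=(q, results))
worker.start()
producer(q, ["a", "b", "c"])
worker.join()
```

The point of the pattern is that the two sides interact only through the channel: swapping the in-memory queue for a durable broker changes neither the producer nor the consumer.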

Besides the upcoming World Cup and Olympic Games, Brazil will also be hosting the first MuleSoft Summit in Latin America. Yes, now the cariocas and paulistas have another reason to celebrate!

On December 5th, our Muleys, including our founder, Ross Mason, will be visiting São Paulo for our first integration event in South America. It’s no surprise to anyone that Brazil is a hot spot for new technology trends, and our developer community in this country is proving to be one of the biggest worldwide.


In this installment of our MuleSoft webinar series, we discuss how to identify legacy assets within your organization, synchronize data between modern and legacy systems, and service-enable legacy applications with APIs built with Mule ESB.

A little bit about SOA

Effectively implementing a Service Oriented Architecture approach within your enterprise can help deliver faster time to ROI through increased agility.

The reality of legacy systems

Today, a large amount of business data and many business processes are tied up in legacy systems, which are difficult to access and modify due to a lack of modern interfaces and a scarcity of available expertise. Yet these legacy systems house critical information and functionality that need to be accessed by other systems and people.

In case you missed last week’s Meet a Muley post (featuring Eva, our Senior Java Developer) we’ve started a new weekly series! Every Friday we’ll introduce you to a new member of the MuleSoft team to give you some insight into what we’re all about.

This week we’ll be chatting with James Donelan, our VP of Engineering. When we chose James for this post, we immediately agreed that there needed to be some sort of voice recording for you all to truly understand why everyone loves to listen to James. And we found one!