Tag: CloudHub

I am excited to announce release 39 of CloudHub! This release is based on a lot of user feedback and contains a beta of our redesigned user interface, as well as one of our most requested features.

Redesigned Experience

We’ve been hard at work the last few months building a revamped user interface that helps you be more productive and integrates seamlessly with the Anypoint Platform for APIs. We’re excited to preview some of that work today. You’ll notice a clean, modern interface that makes it easier to get things done. For example, the home page now provides easy access to your applications, settings, and logs at a glance. It also has a handy summary of resource utilization and the number of recent transactions processed.

This is the third post in the Gradle Plugin Series, and a lot has happened to the plugin since the first article of the series was published. Today, I’m announcing exciting new features useful for tackling enterprise users’ needs. For more information on how to get started building Mule apps with Gradle, please check the previous blog post and especially the project’s readme file. Now let’s get started.

Fine tuning Mule Dependencies

This plugin is designed to be future-proof while remaining concise, so we’ve introduced a DSL for customizing the Mule dependencies to be included as part of the build. This allows users to fine-tune, in a very concise way, the modules, transports, and connectors included when unit testing, compiling, and running your app.
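
To make this concrete, here is a hypothetical build.gradle sketch. The plugin id, the mule extension, and the names inside the components block are illustrative assumptions rather than the plugin’s verbatim DSL – the project’s readme has the authoritative syntax:

    // Hypothetical Gradle (Groovy) sketch of trimming Mule components.
    // Plugin id, 'mule' extension and 'components' names are assumptions;
    // see the project's readme for the real DSL.
    apply plugin: 'mule'

    mule {
        muleVersion = '3.5.0'

        components {
            // Include only what this app needs for unit testing,
            // compiling and running.
            transports += ['http', 'jms']
            modules    += ['json']
            connectors += ['sfdc']
        }
    }

Keeping this list tight keeps the classpath, and therefore your builds and test runs, lean.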

In this post we are going to discuss a few emerging trends in computing, including cloud-based database platforms, APIs, and Integration Platform as a Service (iPaaS) environments. More specifically, we are going to discuss how to:

  • Connect to an Azure SQL Database using Mule ESB (for the remainder of this post I will call it Azure SQL)
  • Expose a simple API around an Azure SQL Database
  • Demonstrate the symmetry between Mule ESB On-Premise and its iPaaS equivalent CloudHub

For those not familiar with Azure SQL, it is a fully managed relational database service in Microsoft’s Azure cloud. Since this is a managed service, we are not concerned with the underlying database infrastructure. Microsoft has abstracted all of that for us, and we are only concerned with managing our data within our SQL instance. For more information on Azure SQL, please refer to the following link.

Prerequisites

In order to complete all of the steps in this blog post we will need a Microsoft Azure account, a MuleSoft account, and MuleSoft’s Anypoint Studio – Early Access platform. A free Microsoft trial account can be obtained here and a free CloudHub account can be found here. To enable database connectivity between MuleSoft’s ESB platform and Azure SQL we will need to download the Microsoft JDBC Driver 4.0 for SQL Server, which is available here.
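
Before wiring the database into a Mule flow, it can help to sanity-check connectivity from plain JVM code with the same driver. Below is a minimal Groovy sketch; the server name, database, and credentials are placeholders, and the @Grab coordinates point at a current Microsoft driver build (the sqljdbc4.jar download above can instead be added to the classpath manually):

    // Minimal Groovy connectivity check against Azure SQL.
    // Server name, database, user and password are placeholders.
    @Grab('com.microsoft.sqlserver:mssql-jdbc:12.6.1.jre11')
    import groovy.sql.Sql

    def url = 'jdbc:sqlserver://myserver.database.windows.net:1433;' +
              'database=mydb;encrypt=true;loginTimeout=30;'

    def sql = Sql.newInstance(url, 'myuser@myserver', 'mypassword',
                              'com.microsoft.sqlserver.jdbc.SQLServerDriver')
    try {
        // Listing a few tables proves the firewall rule, driver and
        // credentials all line up.
        sql.eachRow('SELECT TOP 3 name FROM sys.tables') { row ->
            println row.name
        }
    } finally {
        sql.close()
    }

If this prints table names, the same URL and credentials can go straight into the Mule database configuration.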

Mariano Gonzalez on Tuesday, March 25, 2014

Batch Module Reloaded


With Mule’s December 2013 release we introduced the new batch module. We received great feedback about it, and we even have some users happily using it in production! However, we know that the journey of batch has just begun, and for the Early Access release of Mule 3.5 we added a bunch of improvements. Let’s have a look!

Support for Non-Serializable Objects

A limitation in the first release of batch was that all records needed to have a Serializable payload. This is because batch uses persistent queues to buffer the records, making it possible to process “larger than memory” sets of data. However, we found that non-Serializable payloads were way more common than we initially thought. So, we decided to have batch use the Kryo serializer instead of Java’s standard one. Kryo is a very cool serialization library that allows:

  • Serializing objects that do not implement the Serializable interface
  • Serializing objects that do not have (nor inherit) a default constructor
  • Serializing faster and with smaller output than the standard Java serializer
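
To make the first two points concrete, here is a minimal Groovy sketch of plain Kryo (not Mule’s internal wiring) round-tripping a made-up payload class that neither implements Serializable nor has a default constructor:

    // RecordPayload is a made-up example: not Serializable,
    // and with no no-arg constructor.
    @Grab('com.esotericsoftware:kryo:5.6.0')
    import com.esotericsoftware.kryo.Kryo
    import com.esotericsoftware.kryo.io.Input
    import com.esotericsoftware.kryo.io.Output
    import com.esotericsoftware.kryo.util.DefaultInstantiatorStrategy
    import org.objenesis.strategy.StdInstantiatorStrategy

    class RecordPayload {
        final String customerId
        RecordPayload(String customerId) { this.customerId = customerId }
    }

    def kryo = new Kryo()
    kryo.registrationRequired = false
    // Objenesis lets Kryo instantiate classes that lack a no-arg constructor.
    kryo.instantiatorStrategy = new DefaultInstantiatorStrategy(new StdInstantiatorStrategy())

    def buffer = new ByteArrayOutputStream()
    new Output(buffer).withCloseable { out ->
        kryo.writeObject(out, new RecordPayload('ACME-42'))
    }
    def copy = kryo.readObject(new Input(buffer.toByteArray()), RecordPayload)
    assert copy.customerId == 'ACME-42'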

Introducing Kryo into the project not only made batch more versatile by removing limitations, it also had a great impact on performance. During our testing, we saw performance improvements of up to 40% from doing nothing but switching to Kryo. (Of course, the speed boost is relative to the job’s characteristics; if you have a batch job that spends 90% of its time doing IO, the impact on performance won’t be as visible as in one that juggles IO and processing.)

Today I will introduce our performance test of the batch module introduced in Mule’s December 2013 release. I will guide you through the test scenario and explain all the data collected.

But first, if you don’t know what batch is, please read the great batch blog post from our star developer Mariano Gonzalez; for anything else, you can also consult the documentation.

Excited? Great! Now we can start with the details. This performance test was run on a CloudHub Double worker, using the default threading profile of 16 threads. We will compare on-premise vs. cloud performance; henceforth we will talk about HD vs. CQS performance. Why? By default, both on-premise and CloudHub users use the hard disk for temporary storage and resilience. On CloudHub, however, this is not very useful: if the worker is restarted for any reason, the current job will lose all its messages. If Persistent Queues are enabled, the module will instead automatically store all the data with CQS (Cloud Queue Storage) to achieve the expected resilience.

We are all very proud to announce that Mule’s December 2013 release shipped with a major leap forward, a feature that will massively change and simplify Mule’s user experience for both SaaS and on-premise users. Yes, we are talking about the new batch jobs. If you need to handle massive amounts of data, or you’re longing for record-based reporting and error handling, or even if you are all about resilience and reliability with parallel processing, then this post is for you!

Release 34 is now live! With this release we’ve made a number of improvements to CloudHub to make managing your integrations easier. These include the ability to promote applications from sandboxes, monitor workers for problems, create secure environment variables, and scale applications vertically as well as horizontally.

In the past, Mule ESB has followed a release schedule that introduces a new version of our industry-leading software every 9 to 12 months, supplemented with maintenance releases approximately every 6 months. Though this cadence fit the demands of our customers who deploy Mule on premises, we came to realize that our customers deploying Mule to CloudHub were much more flexible about updating software versions, and were more eager to take advantage of new features and functionality.


Just when you thought it couldn’t get any better, it got better. Dataloader.io, the most popular Salesforce data loading solution on the AppExchange, now supports importing and exporting files to and from Dropbox!

dataloader.io and dropbox

Data loading aficionados can now quickly and easily import or export data directly to and from their Dropbox accounts. By simply entering their Dropbox credentials, users can make Dropbox their source for CSV files. Similarly, exporting to Dropbox is as easy as choosing Dropbox as your connection and picking a destination folder. Then, by following the standard steps to import and export data with dataloader.io, you’ll be up and running in no time – it’s that simple!

Ross Mason on Monday, July 29, 2013

Raspberry Pi gets an API


In the Internet of Things, no device is an island. And while Raspberry Pis are pretty cool on their own, adding an API makes them a lot more interesting. We have been playing around with the Raspberry Pi for a while now and have a small distribution of Mule, called ‘Anypoint Edge’, that happily runs on small embeddable devices like the Raspberry Pi. These ARM-based devices are taking the world by storm since they are low-powered and low-cost, and can be embedded into small hubs to control other things like lightbulbs, or be used inside anything from PoS kiosks to gas pumps to cars to medical devices.