It’s a very good thing that Tomcat is open source software. Because it is open, it enjoys broad stand-alone adoption, and it has been incorporated into many other application server products, both commercial and open source. Why reinvent the wheel when Tomcat works great as a generic web container and the source code is free? Many smart application server teams have chosen to embed Tomcat as their web container: they pull a copy of the Tomcat source code that they know works well, drop it into their own source tree, hook Tomcat’s Ant build system into their own, and rebuild Tomcat as part of their project.
I often get questions about how to tune Tomcat for better performance. It is usually best to answer this only after spending some time understanding the Tomcat installation itself, the web site’s traffic level, and the web applications it runs. Still, some general performance tips apply regardless of these important details. In general, Tomcat performs better when you:
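As a taste of where tips like these usually end up, here is an illustrative `server.xml` HTTP connector fragment. The attribute names are standard Tomcat Connector attributes, but the values are examples only, not recommendations; the right numbers depend entirely on your traffic and hardware.

```xml
<!-- Illustrative tuning of an HTTP connector in conf/server.xml.
     maxThreads: cap on request-processing threads
     acceptCount: backlog queue used when all threads are busy
     enableLookups="false": skip reverse DNS lookups per request
     compression="on": gzip text responses above compressionMinSize bytes -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="150"
           acceptCount="100"
           connectionTimeout="20000"
           enableLookups="false"
           compression="on"
           compressionMinSize="2048" />
```

Measure before and after any change like this; a value that helps one site can hurt another.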
The promise of a monitoring solution that will pinpoint application problems and give you exact steps to fix the problem has remained a dream. In addition, monitoring systems have become notorious for being expensive and difficult to maintain. Diagnosing application performance problems requires application-specific diagnostic information that general-purpose monitoring tools often do not provide.
While system monitoring products are useful for triaging a problem and assigning responsibility to a particular team (for example, the application server team), they often do not provide the details you need to determine the root cause and fix it. Users describe monitoring products as “a mile wide and an inch deep”: great for providing high-level visibility into broader systems such as browsers, web servers, app servers, network devices, databases, and storage, but not so great for the specific diagnostic information you need to fix problems.
Instead, it often takes diagnostic tools tied to the application container to drill down into the data effectively.
In this article, we will use Apache Tomcat as an example, and explore a few scenarios where Tomcat administrators need more information to help determine the problem.
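To give a flavor of the container-level drill-down this article is after: Tomcat exposes its internals (thread pools, request processors, sessions) through JMX, and the same API can be queried programmatically. The sketch below uses only the JVM’s built-in `java.lang.management` MBeans, so it runs anywhere without a Tomcat instance; against a live Tomcat you would query its `Catalina:type=ThreadPool` MBeans in the same way. This is an illustrative probe, not part of the article’s scenarios.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// A minimal diagnostic probe using the JVM's standard MBeans. This is the
// kind of specific, container-level data that broad monitoring tools miss.
public class ThreadProbe {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());

        // Deadlock detection: a precise answer, not a high-level graph.
        long[] deadlocked = threads.findDeadlockedThreads();
        if (deadlocked == null) {
            System.out.println("No deadlocked threads detected.");
        } else {
            for (ThreadInfo info : threads.getThreadInfo(deadlocked)) {
                System.out.println("Deadlocked: " + info.getThreadName());
            }
        }
    }
}
```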
We’ve been busy working on Mule releases recently, so this blog hasn’t had as much developer voice as it deserves. Working with things like WebSphere MQ can be demanding, which is one more reason to appreciate the all-new shiny WebSphere MQ connector in Mule Enterprise 2.2.1. It makes one’s life much, much easier.
That is not to say we didn’t cure our (and your) itch for new features. Many great ideas are currently being born, killed and re-born again, and I’m happy to announce an official user-facing kick-off of Mule 3.x (yes, it’s our third one already!) with the availability of the bleeding-edge 3.0 Milestone 1 Build.
Many features in this build aren’t obvious on the surface, like our massive private Bamboo infrastructure behind the firewall – an octopus would be a more precise name for this highly distributed build monster, and it’s spawning more and more offspring, OMG! But although the build may look the same at first glance, there’s a subtle twist. A hot one, or more precisely, a hot-deployment one!
We have been running Galaxy successfully on our in-house servers and laptops for demo purposes for some time now and decided that having a running image of Galaxy on Amazon’s EC2 was the next logical step. Galaxy in the cloud gives us the opportunity to expose a running instance to a much wider audience than might otherwise interact directly with the product.
If you missed last week’s webinar on Scalable SOA with GigaSpaces and Mule, you can catch it again in the archives. Uri Cohen from GigaSpaces did an excellent job demonstrating how easy it is to take services developed using Mule and make them highly-available and linearly scalable.
The demo application shown is also available. Download it and try it out. It shows how to integrate GigaSpaces and Mule using an AJAX-based web front end. You should have the following installed on your machine for the demo to work properly:
This webinar is intended for developers and architects looking for an end-to-end SOA solution featuring application resiliency, failover, and linear scalability.
During this one-hour event, Uri Cohen, Product Manager at GigaSpaces, and I will introduce the joint solution and discuss how the Mule/GigaSpaces integration works under the hood. Attendees will see several ways to scale an SOA implementation, the benefits of this integration, example use cases, a live demo, and more.
I will be presenting a webinar tomorrow, Dec. 9th, at 9 AM PT / noon ET covering integration between Mule and webapps. It will be a technical walk-through of an example application consisting of two webapps consuming Mule services, one of them with Mule running inside it. The audience is assumed to have some prior experience developing with webapps and/or Mule.
There are several ways to tune performance in Mule. I’ve just finished a page on performance tuning in the Mule 2.x User Guide that walks through the available performance tuning options and provides formulas for calculating threads. Following is an excerpt of the high-level information from that page.
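To show the general shape of such a calculation (this is not the exact formula from the guide, just the standard back-of-the-envelope reasoning via Little’s Law): the number of concurrently busy threads is roughly the request arrival rate multiplied by the average time each request takes.

```java
// Illustrative thread sizing via Little's Law: concurrent requests =
// arrival rate * average time in system. See the Mule 2.x User Guide's
// performance tuning page for the actual formulas it provides.
public class ThreadEstimate {
    static int threadsNeeded(double requestsPerSecond, double avgProcessingSeconds) {
        // Round up: a fractional thread still needs a whole thread.
        return (int) Math.ceil(requestsPerSecond * avgProcessingSeconds);
    }

    public static void main(String[] args) {
        // e.g. 200 req/s, each taking 250 ms end to end
        System.out.println(threadsNeeded(200.0, 0.25)); // prints 50
    }
}
```

Numbers like these are a starting point for a threading profile, not a substitute for load testing.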