12 October 2016

How to Cluster with ColdFusion

ColdFusion isn't one of the most commonly used application servers, but one that c2b2 Head of Support, Claudio Salinitro stumbled upon during a troubleshooting engagement he performed for one of our middleware support customers. With a remit to embrace new Java technologies, Claudio spent some time investigating ColdFusion, and here describes how to set up a simple cluster.

ColdFusion is definitely not one of the most popular enterprise application servers on the market, but despite a few weaknesses and lack of good documentation, in the right scenario, its small footprint and fast development time can make it a very good choice as part of a Java middleware infrastructure.

In this article, I'm going to cluster two instances of ColdFusion Server 2016 behind Apache (2.2.31) configured with mod_jk (1.2.41), following the logical architecture below.

To accomplish this, I'm going to need...

  • 1x Load Balancer
    Whilst I'm using Apache with mod_jk, any other similar solution would do just fine.
  • 2x ColdFusion server instances
    Depending on your resources and needs, these could be on two different machines, or the same.

ColdFusion Configuration

ColdFusion doesn’t have a central admin server like other Java application servers, but instead offers us a “cfusion” instance that is used as a repository for the default configuration, and to create instances and clusters.

So, from the “cfusion” admin web interface (CFIDE) of node2, create a ColdFusion instance named "instance2".

From the “cfusion” admin web interface (CFIDE) of node1 create a ColdFusion instance named "instance1" and register "instance2" of the node2 as a remote instance.

Note - take care to identify the correct HTTP and AJP ports, and the JVM route (double-check the server.xml on node2).
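For reference, all three values live in node2's server.xml; a trimmed sketch of the relevant elements (the port numbers here are illustrative, yours may differ):

```xml
<!-- [coldfusion_installation_dir]/instance2/runtime/conf/server.xml -->
<Connector port="8500" protocol="HTTP/1.1"/>  <!-- HTTP port -->
<Connector port="8012" protocol="AJP/1.3"/>   <!-- AJP port -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="instance2">
    <!-- host and cluster configuration -->
</Engine>
```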

From the “cfusion” admin web interface (CFIDE) of node1, create a cluster and add both "instance1" and "instance2" to it.

Remember to enable the “sticky sessions” and “session replication” options - and take note of the multicast port...

Since version 10, ColdFusion no longer uses JRun but runs on top of Apache Tomcat, so the cluster configuration follows essentially the same process as on a clean Tomcat installation.

In the [coldfusion_installation_dir]/instance2/runtime/conf/server.xml file add the following configuration between “</Host>” and “</Engine>”:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="" port="45564" frequency="500" dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="node2" port="4001" autoBind="100"
              selectorTimeout="5000" maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

Check that the following have been set up correctly:
  • The membership address and port must be the same for all members of the cluster - so double-check the server.xml of "instance1" to make sure they match.

  • The receiver element address must be the IP address or the hostname of the related node. This address must be reachable by the other members of the cluster (so not 127.0.0.1 or localhost).

Then edit the [coldfusion_installation_dir]/instance2/runtime/conf/context.xml and comment out the <Manager></Manager> section. In my case this was:

<!--<Manager pathname="" />-->

Do the same on node 1.

Load Balancer Configuration

At the moment, the preferred way to configure Apache to work with ColdFusion is using mod_jk - however, Adobe can automatically configure your Apache installation via the wsconfig tool. This can be done during the installation or after, but only works if wsconfig has access to the Apache configuration files - otherwise we can proceed manually.

The mod_jk binary is shipped in the ColdFusion installation directory. Look inside [coldfusion_installation_dir]/cfusion/runtime/lib/wsconfig.jar to find the binary for your Apache version and operating system.

Extract the binary in [apache_installation_dir]/modules.
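As a sketch, you can locate and extract the binary from the command line (the path inside the jar varies by platform, so list the contents first; the internal path below is a placeholder):

```shell
# List the jar contents to find the mod_jk build for your
# Apache version and operating system:
unzip -l [coldfusion_installation_dir]/cfusion/runtime/lib/wsconfig.jar

# Extract just that file into the Apache modules directory,
# substituting the real internal path found above:
unzip -j [coldfusion_installation_dir]/cfusion/runtime/lib/wsconfig.jar \
    "path/to/mod_jk.so" -d [apache_installation_dir]/modules
```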

Then edit the Apache configuration, adding the following:

LoadModule    jk_module  "[apache_installation_dir]/modules/mod_jk.so"

JkWorkerProperty worker.list=lb
JkWorkerProperty worker.instance1.type=ajp13
JkWorkerProperty worker.instance1.host=
JkWorkerProperty worker.instance1.port=8012
JkWorkerProperty worker.instance1.connection_pool_timeout=60
JkWorkerProperty worker.instance2.type=ajp13
JkWorkerProperty worker.instance2.host=
JkWorkerProperty worker.instance2.port=8012
JkWorkerProperty worker.instance2.connection_pool_timeout=60
JkWorkerProperty worker.lb.type=lb
JkWorkerProperty worker.lb.balance_workers=instance1,instance2
JkWorkerProperty worker.lb.sticky_session=false

JkMount /session_test/* lb

This is the standard mod_jk configuration: it defines a worker of type “lb” that balances requests between the other two workers - one for node1 and one for node2.
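With “sticky_session” set to false and equal worker weights, the “lb” worker simply alternates requests between the two instances. A rough illustration of that round-robin behaviour (a Python sketch, not mod_jk itself):

```python
from itertools import cycle

# Stand-in for mod_jk's "lb" worker: equal weights, sticky sessions off,
# so requests simply alternate between the two AJP workers.
workers = ["instance1", "instance2"]
balancer = cycle(workers)

# Route six incoming requests; they are spread evenly across the instances.
routed = [next(balancer) for _ in range(6)]
print(routed)
```

With sticky sessions enabled, mod_jk would instead pin each session to the worker whose name matches the jvmRoute suffix on the session cookie.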

Finally, we map the context /session_test to the load balancer worker so that any request starting with /session_test will be balanced across the two ColdFusion nodes.

Configuration Test

A note on the “JkWorkerProperty worker.lb.sticky_session” setting:

The default value is “true”, because it is usually preferable to keep requests from the same client on the same server - and, depending on your application, it may even be a requirement. In our case we want to test both session replication and load balancing between the nodes, so setting it to “false” makes everything easier.

I’m going to use the session_test web application provided by Steven Erat in his No Frills Guide to CFMX61 Clustering post, but since this was created for ColdFusion 6, I had to update the code to run on Tomcat.

On both nodes, create the folder [coldfusion_installation_dir]/[node]/wwwroot/session_test, and inside it create the following five files:


<cfapplication name="j2ee_session_replication_test" sessionmanagement="Yes" clientmanagement="No">
<cfscript>
    System = createObject("java","java.lang.System");
</cfscript>


<cfscript>
    session.currentTimestamp = timeformat(now(),"HH:mm:ss");
    message = "[#session.currentTimestamp#] [#createobject('component','CFIDE.adminapi.runtime').getinstancename()#] [New Session: #session.isnew()#] [id: #session.sessionid#]";
    System.out.println(message);
</cfscript>
<cfoutput>#message#</cfoutput>
<br><br>System.out has written the above data to the console of the active server instance
<br><br><a href="index.cfm">Refresh</a> | <a href="cgi.cfm">CGI</a> | <a href="sessionData.cfm"><cfif not isdefined("session.myData")>Create<cfelse>View</cfif> nested structure</a>
<cfif not isDefined("session.session_lock_init_time")>
    <cflock scope="SESSION" type="EXCLUSIVE" timeout="30">
        <cfset session.session_lock_init_time = timeformat(now(),"HH:mm:ss")>
        <cfset session.session_lock_init_servername = createobject("component","CFIDE.adminapi.runtime").getinstancename()>
    </cflock>
</cfif>
<cfif session.session_lock_init_servername neq createobject("component","CFIDE.adminapi.runtime").getinstancename()>
    <cfset session.session_failedOver_to = createobject("component","CFIDE.adminapi.runtime").getinstancename()>
    <strong><font color="red">
    Session has failed over
    <br>from <cfoutput>#session.session_lock_init_servername#
    <br>to #createobject("component","CFIDE.adminapi.runtime").getinstancename()#</cfoutput>
    </font></strong>
<cfelseif isDefined("session.session_failedOver_to")>
    <br><br><strong><font color="green">
    Session has been recovered to original server
    after a failover to <cfoutput>#session.session_failedOver_to#</cfoutput>
    </font></strong>
</cfif>
<cfdump var="#session#" label="CONTENTS OF SESSION STRUCTURE">


<a href="index.cfm">Back</a><br><br>
<cfdump var="#cgi#" label="current file path: #getDirectoryFromPath(expandPath('*.*'))#">


<a href="index.cfm">Index</a><br><br>
<cfdump var="#session#" label="session scope">


<a href="index.cfm">Back</a><br><br>

<cfscript>
    if(not isdefined("session.myData")){
        writeOutput('<font size="4" color="red">Creating nested session data...</font><br><br>');
        // create deep structure for replication
        a.time1 = now();
        a.time2 = now();
        b.time1 = now();
        b.time2 = now();
        session.myData["ab"]["a"] = a;
        session.myData["ab"]["b"] = b;
        session.myData["a"] = a;
        session.myData["b"] = b;
        session.myData["mydata_session_init_time"] = timeformat(now(),"HH:mm:ss");
        session.myData["mydata_session_init_servername"] = createobject("component","CFIDE.adminapi.runtime").getinstancename();
    }
</cfscript>

<cfoutput>
<br><br><cfdump var="#session.myData#" label="CONTENTS OF SESSION.MYDATA">
<br><br>Current Time: #timeformat(now(),"HH:mm:ss")#
<br><br>Current Server: #createobject("component","CFIDE.adminapi.runtime").getinstancename()#
</cfoutput>

Then, make multiple attempts to access the following URL:


and test that:

  1. The requests are balanced alternately between "instance1" and "instance2".
  2. The session created by the first request is replicated between the two nodes (the session id doesn’t change).
  3. In case of failure of one of the two nodes, requests are sent only to the active node.
  4. When the failed node becomes active again, the load balancer once more balances requests across both nodes.

With this setup you will have a simple but solid ColdFusion infrastructure with high availability thanks to the session sharing. And if load increases, you can easily scale horizontally by adding nodes to the cluster and load balancer.

And on a final note, to separate the clients' network traffic from the cluster session-replication traffic, the best way forward is to have two dedicated network interfaces.

26 September 2016

JBUG - Experience the Potential of JBoss Private Cloud

As organisers and sponsors of the London JBUG, we were delighted to welcome back one of our most popular speakers, Eric D. Schabell, for two talks on Red Hat Private Cloud. In this post, c2b2 Senior Consultant, Brian Randell looks back at the evening and offers a summary of Eric's talks.

It was a pleasure to host the London JBoss User Group on the 20th September. It was my first time at a JBUG meet-up and my first time hosting - so I wasn't sure entirely what to expect!  

We assembled at Skills Matter's CodeNode venue which, situated directly in the city near Moorgate Tube, is easy to get to and seemed alive with technical professionals attending conferences, classes and speaker events like ours. I was delighted to see so many attendees - some with laptops out and ready for the live demos. Once the technician had completed a final check of the cameras and sound, we were away...

I felt especially privileged to welcome our speaker Eric D. Schabell from Red Hat who was to deliver both talks. Eric travels the world speaking about Red Hat's Integrated Solutions Technology and is a highly respected and engaging speaker, so I knew we were in for a very revealing and informative night.

The first talk 'A local private PaaS in minutes with Red Hat CDK' had Eric showing us how, by using the Container Development Kit, we could have a private cloud running BPM Suite on an OpenShift pod in minutes!

By using Vagrant, Kubernetes, VirtualBox, OpenShift Container Platform and BPM Suite, you can deploy a local virtual machine running a RHEL Docker container, have OpenShift Container Platform deployed into that container, and deploy Red Hat BPM Suite into a newly created JBoss pod.

How easy was that ?! 

This brings all the benefits of containerisation. You can create and trash them as often as you need, providing an easy way for creating demos, code testing environments, prototyping, and whatever else you may want.

Eric talked about how having a good container strategy allows you to take control and test your application in as like-for-like an environment as you can get. It's the Red Hat Container Development Kit that makes this possible - with the ability to run lots of services at the same time and test how they interface and talk to each other.

The CDK is easy to use and is distributed as a Vagrant box. It runs on the most commonly used platforms, works with various virtualisation providers, and contains examples to help you along. It really made me want to try it at home as I build up my own demo systems and playgrounds with which to try things out.

See the links below to try it out for yourself.

After a quick break and a (c2b2 sponsored!) cold beer at the CodeNode Space Bar, the second talk 'Painless containerization in your very own Private Cloud' expanded on what we had seen with the CDK and put it more into a business context.

Containers provide a way of allowing environments to be provisioned locally, and be tested on without having to wait for them to be made available by Operations - or be as dependent on other teams.  Using standard images you can concentrate on the things that matter to you.

The Red Hat Cloud Suite is useful here as it uses the OpenShift Container Platform to provide a simple way to build and deploy applications. You can then rationalise your containers so that each is more specific to its service, and get them to interact and talk to each other through standardised interfaces, letting you focus on just the container you need to.

And that was that, a couple of really great talks ending with some great pizza (also sponsored by c2b2) and more drinks, what could be better ?!

I left the meet-up having had a really good evening. Thanks to Eric for presenting and thanks to all that showed up to listen. Am looking forward to the next one and itching to get playing with the CDK :)

Eric's talk and slides are available on his blog:

The evening is available on the Skills Matter website:


12 August 2016

How to Configure JBoss EAP 7 as a Load Balancer

Following on from Brian's recent work on Planning for a JBoss EAP 7 migration, he returns to the new features now available to JBoss administrators, and looks specifically at configuring Undertow as an HTTP load balancer.


The environment I used was an Amazon EC2 t2.medium tier shared host running Red Hat Linux 7.2 (Maipo) with 4GB Ram and 2vCPUs.  This has Oracle Java JDK 8u92 and JBoss EAP 7.0.0 installed.

I wanted to have three separate standalone servers, so I copied the standalone folder three times from the vanilla EAP 7 install and renamed them standalone-lb, standalone-server1, standalone-server2.

I then ran three instances of JBoss using the -Djboss.server.base.dir command line argument for each one, to specify the three different configurations. I kept the lb server with the default ports but used the port offset argument -Djboss.socket.binding.port-offset to offset the ports by 100 for each server.

Hence, for HTTP, the lb was running on port 8080, and server1 and server2 were running on ports 8180 and 8280 respectively.
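Assuming the three copied folders sit alongside the default standalone folder in the EAP install directory, the three launch commands would have looked something like this (paths are illustrative):

```shell
./bin/standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone-lb
./bin/standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone-server1 \
    -Djboss.socket.binding.port-offset=100
./bin/standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone-server2 \
    -Djboss.socket.binding.port-offset=200
```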


Next I needed to find a web application to run on JBoss that would show me that the load balancing had pointed to that server.

I decided to use the JBoss EAP 7 quickstart helloworld-html5. This provides a very simple page where you can enter your name and press a button to display it. It also writes a stdout message to the logs with the name you entered, so it is easy to know which server you have connected to.

I imported the helloworld-html5 project into JBoss Developer Studio and exported the war file, which I then deployed onto server1 and server2.

Testing it on server2 (with port offset of 200) using the URL http://<hostname>:8280/helloworld-html5/ you can see the message displayed on the screen and the name entered in the log file:


So now we need to configure the load balancing on the lb server. For this we need to add some configuration to Undertow referencing the outbound destinations that we also need to configure for our servers.

We will be:

  • Adding in remote outbound destinations for the servers we want to load balance (providing the hostname and port)
  • Adding a reverse proxy handler into Undertow
  • Adding the outbound destinations to the reverse proxy handler (setting up the scheme we want to use, i.e. ajp or http, and the path for the application which in our case is helloworld-html5)
  • Adding the Reverse Proxy location (i.e. what url path will we follow on the Load Balancer for it to be redirected)

The following CLI configured the outbound destinations:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=lbhost1:add(host=, port=8180)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=lbhost2:add(host=, port=8280)

You can see these now configured in the console:

The following CLI added the reverse proxy handler (I have called it ‘lb-handler’) to Undertow:
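A sketch of the standard Undertow CLI syntax for this step, using the 'lb-handler' name referenced below (the exact parameters used may have differed):

```
/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler:add
```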


The following CLI adds the remote destinations to the reverse proxy handler (I have named the hosts ‘lb1’ and ‘lb2’ and have named the instance-id ‘lbroute’ for both so it will round robin around them):

/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler/host=lb1:add(outbound-socket-binding=lbhost1, scheme=http, instance-id=lbroute, path=/helloworld-html5)
/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler/host=lb2:add(outbound-socket-binding=lbhost2, scheme=http, instance-id=lbroute, path=/helloworld-html5)

We can now see the completed handler configuration:

{
    "outcome" => "success",
    "result" => {
        "cached-connections-per-thread" => 5,
        "connection-idle-timeout" => 60L,
        "connections-per-thread" => 10,
        "max-request-time" => -1,
        "problem-server-retry" => 30,
        "request-queue-size" => 10,
        "session-cookie-names" => "JSESSIONID",
        "host" => {
            "lb1" => {
                "instance-id" => "lbroute",
                "outbound-socket-binding" => "lbhost1",
                "path" => "/helloworld-html5",
                "scheme" => "http",
                "security-realm" => undefined
            },
            "lb2" => {
                "instance-id" => "lbroute",
                "outbound-socket-binding" => "lbhost2",
                "path" => "/helloworld-html5",
                "scheme" => "http",
                "security-realm" => undefined
            }
        }
    }
}
And to complete the configuration I add the location which the handler will handle with the following CLI (setting this so that anything going to the /app URL will be handled):
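A sketch of this step's CLI, assuming Undertow's default server and host names (the leading slash of the path is escaped in CLI syntax):

```
/subsystem=undertow/server=default-server/host=default-host/location=\/app:add(handler=lb-handler)
```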


This we can now see in the settings:

{
    "outcome" => "success",
    "result" => {
        "handler" => "lb-handler",
        "filter-ref" => undefined
    }
}

That is all the configuration we need.


To test, we use the load balancer URL with /app; this should now redirect to the remote servers using /helloworld-html5. If I then type in a value and press the button, I can see which server I have been redirected to.

Tailing the logs on both servers, we can see that each browser refresh is redirected to the other server, and it continues in a round-robin pattern.


There you go - a straightforward process to configure JBoss EAP 7 standalone mode as an HTTP load balancer in front of two standalone servers using round robin.

10 August 2016

Looking for Reliable WildFly Support for your Business or Organisation? Six Things You Need to Think About!

If you’re using WildFly as part of a commercial middleware infrastructure, then you’ll understand the importance of having access to high quality middleware support – and the need for expert WildFly troubleshooting advice when faced with business-critical tickets.

Expert WildFly Support from the UK's Leading Middleware experts

Whilst the WildFly community offers a huge source of knowledge, expertise and enthusiasm for upstream JBoss middleware (we should know, because we’re part of it), finding the specific information you need at the time you need it most - and being able to apply it to your operational environment with best-practice expertise within a priority-one time frame - can challenge even the best in-house middleware teams.

In this post, we’ve summarised our experiences of the WildFly world (and those of clients) into six key things to think about if you’re accountable for your organisation’s WildFly support. Hopefully it will shed some light on some of the pitfalls that might lie ahead – and help you make better decisions when planning your middleware support strategy.

JBoss WildFly Troubleshooting and support
The most common touch-point we have with new clients is when something breaks and they reach out to us for WildFly troubleshooting engagements. Often they’re searching for solutions to poor deployments implemented by alternative service providers, but frequently they find that whilst their in-house team delivered a decent implementation project, they have since run into problems.

Regardless of the middleware technology deployed, there’s a huge difference between the skill sets required to implement the product, and those needed to manage and maintain operational performance. WildFly is no different - and because you can’t fall back on a Red Hat support license, the need for a sound support strategy becomes even more critical.

Without hands-on experience of how the technology works for real-world business operations, we often find that the in-house team members who have championed the initial WildFly implementation project don’t always have the understanding or investigatory skills needed to fully support operational performance.

What to think about…
The situation described above isn’t a great one to find yourself in - so if you’re using WildFly, make sure your operational performance objectives are clear, and carry out an honest appraisal of your team’s ability to investigate problems across the infrastructure and support those issues.

If you’re embarking on a WildFly implementation, do the same – but look beyond the potentially low cost entry and focus on the potential costs of meeting those objectives - and the cost to the business of operational failure. Think about the investment you’ll need to make in recruitment, training and resource management to achieve the level of cover and the expertise you’ll need to resolve a whole spectrum of service failure scenarios.

Even the less complex middleware environments use a range of dependent technologies which can be implicitly related to WildFly performance – for example, load balancers (like Apache, Nginx or HAProxy), databases, ESBs, or message brokers like ActiveMQ or other JMS providers.

Whether there is a particular tech under the WildFly hood or whether WildFly is working in tandem with other Java technologies, an expertise across the whole landscape of your systems is essential in order to investigate and identify the root-causes of your WildFly tickets.

Even the most ardent in-house WildFly enthusiasts may not have experience of working across all the technologies you’re using – or understand the interdependencies that can affect operational performance. It can be pretty straightforward getting an app running, but much harder getting it to perform well.

It can take years of supporting complex middleware infrastructure to analyse issues with speed and accuracy; and an awareness of the whole landscape to deliver an appropriate solution.

What to think about…
What are your team’s real core skills and can you rely on them to deliver bullet-proof operational fixes when you need them most and against the clock?

If you’re thinking about support – think about proven WildFly problem-solving skills in a pressurised commercial environment.

Even if it were possible to stick a plug into the back of your head and download the WildFly community knowledge base, your team still need the skills to research and resolve complex investigations.

It’s true, the WildFly community is a hub of expertise and knowledge - but for all the thousands of blog posts, forums, technical documents, videos, webinars, opinion pieces, case studies and walk-throughs that you could find if you looked hard enough, how many will be relevant to the specific support issues your team will face on a daily basis?

Bear in mind that not one member of that global community has any knowledge of you, your business or your operational needs; they haven’t examined your architecture, they don’t know your configurations, they have no idea how WildFly fits within your infrastructure or how it meets your business needs. Nor do they have any concept of the risks you face or the resolution times you have to meet.

On top of this, the process of searching through the vast expanse of information out there is a daunting prospect. Knowing which sources to trust, identifying relevant content, and piecing together your solutions not only requires excellent search and discover skills, but can devour your response times. 

What to think about…
If you think in these terms, using the WildFly community as a business-critical resource is something that probably shouldn’t feature too strongly in your support strategy! 

As an up-stream open source solution, it can also be more difficult to transition from one version of WildFly to the next – and this difficulty isn’t limited to WildFly itself when there are other dependencies within the infrastructure.

So, consider whether your team have the capabilities to implement ongoing release cycles, updates and patching across your middleware – and whether they will understand the broader implications.

If your business services are operating around the clock, at some point you’ll face the challenge of rectifying systems or reinstating service availability out of office hours when your team isn’t available and you might not be able to attend to the problem remotely. 

Unless you invest in an in-house team structure that guarantees on-call expert WildFly support around the clock regardless of leave and illness, you’ll continually run the risk of extended business downtime – and since WildFly is an unsupported Red Hat product, the challenge of rectifying services in that scenario can become a solitary one!

What to think about…
When planning for support, think about how a team rota would look if you’re covering your business operations. Accommodating for leave, maternity/paternity, illness and recruitment issues can start to look expensive – and remember that not only will you need to find those skills in the market-place, but you’ll need to manage them as well! 

The conversations we have with companies and organisations when discussing support services often feature tales of frustration and dissatisfaction with experiences of help-desks employing off-shore support engineers.

Whilst working with a large sub-contracting organisation may offer a relatively lower-cost option on paper, you only realise the true value of support when you really need it most. When your business is down and the accountability for rectification is on your shoulders, the last thing you want to deal with is a support engineer with limited or no knowledge of your company, a lack of understanding about your infrastructure, and no experience of your operational priorities.

Instead of answering questions to fill the knowledge gaps about who you are and what middleware you’re using – you should be answering questions directly posed to investigate and resolve the issue in hand.

What to think about…
When you enter into an out-sourced WildFly support contract, make sure you have a clear understanding of the relationship you’ll have with the service provider. A larger provider may not give you the level of personal service you enjoy with a niche company, and are less likely to be as engaged with your objectives. 

You might feel more reassured by a provider who offers up-front health checks and infrastructure evaluation prior to support commencing, or one who demonstrates an ethos of partnership in the provision of support. Ultimately you want to know that you’re working with people who not only support your middleware, but your business objectives as well.

If you do find a company capable of supporting your WildFly environments to the standards you expect, it can be frustrating to then find they don’t have the skills or resources to offer additional middleware service solutions.

Working with a provider who can deliver a completely integrated portfolio of WildFly services is a more reassuring and easier proposition. What the clients we speak with find enormously valuable is having a dedicated account manager who not only handles support services, but who can also deliver project proposals from proof-of-concept and architectural design to implementation and DevOps.

What to think about…
Take another step forward and think of how good it would be to have the account manager, the support engineers and the professional services consultants fully integrated into the same team. 

This holistic ethos would create a genuine shared knowledge about your organisation, an understanding of your infrastructure and a team of experts available to call upon via support tickets or full-service projects. 

When we spoke with our clients and asked them about their decision-making processes for WildFly support – these six items were always the ones we had the most conversations about. Of course, it’s not an exhaustive list, but hopefully demonstrates a few of the issues you might want to think about when putting together a support strategy.

Finally, the issues raised here concern supporting business operations, but there may come a time when the organisation wants to consider new paradigms such as transitioning to microservices architecture – or even moving to JBoss enterprise infrastructure. Whilst this goes beyond the scope of a support service, think of the advantages of having an independent professional middleware consultancy with expertise in swarm, service discovery, containers, DevOps and enterprise application platforms.

If you are considering WildFly support or want to discuss broader WildFly projects, contact us using the form below and we’ll put you in touch with one of our Red Hat specialists.

5 August 2016

Planning a Successful Cloud Migration

For most organisations, migrating some or all of your applications to a cloud hosting provider can deliver significant benefits, but like any project it is not without risks and needs to be properly planned to succeed.  In this post, c2b2 Head of Profession, Matt Brasier offers an overview of a talk he recently gave to delegates attending the Oracle User Group (OUG) Scotland. He'll look at some of the things that you can do at a high level to help you get the most of your cloud migration - and break down some of the common factors into more concrete considerations.

Understand what you are looking to get out of the migration

As with anything your business does, there needs to be a good reason to do it, and in the case of a cloud migration, the reasons usually come down to costs. 

However there are other benefits that can be realised during a cloud migration programme that (while they come down to reducing costs in the end) produce more immediate and tangible benefits. Moving some of your organisation's infrastructure to the cloud is going to necessitate some changes in job roles and responsibilities, together with bringing in new ways of working and processes. A cloud migration programme is a great time to bring more modern development and operations (or DevOps) processes in, providing benefits in terms of delivering quicker application fixes and improvements to the end users.

Cloud infrastructure often provides less scope for customisation compared to running the same applications on-premise, and while that can cause some problems, it does force your organisation to limit itself to using common or standards-based approaches, rather than developing its own. 

Eliminating customised or bespoke infrastructure and applications where possible reduces your support costs, as it allows you to find commodity skills in the market to maintain them, rather than having to train people in-house.

In order to make sure that your migration is actually successful (i.e. delivers what you are looking for rather than just moving you to cloud because someone told you it was a good idea), you need to properly identify where you are expecting to make savings (hardware, infrastructure management staff, licenses and support costs) and what your new costs will be (how much capacity will you need, what retraining is needed, what processes and procedures need to change).

Cloud vendors will often over-hype the cost savings you can make by not including the costs of things like upskilling and retraining in their analysis. If the main objective of a migration to the cloud is cost saving then you need to ensure you fully understand all the costs of the migration.

Plan the migration and manage risks

Once you understand why you want to migrate, and what success looks like, you need to plan how you get there. The risks to a migration project can broadly be categorised into two types:

  • Technical risks
    Where applications or components don’t work in the cloud, or need more work than anticipated to get working
  • Non-technical risks
    Where business or process factors need to change

A migration from on-premise infrastructure (or infrastructure rented in a data centre) to a cloud provider is different to just upgrading your infrastructure versions and refreshing the hardware. 

It is key to consider from the start that you will be significantly changing the way some people need to perform their jobs, and may even be making some roles redundant. The project therefore needs to be handled with sensitivity and care from the start, ensuring that retraining and organisational change are tackled at the same time as the technical migration.

At a more technical level, it is important to understand how many systems you are planning on moving to the cloud and when they will move. There will be dependencies between systems that will need to be managed, and interfaces between your cloud infrastructure and on-premise infrastructure that need to be migrated.

One of the biggest factors to consider when planning your migration is whether you plan to “cut over” to the cloud systems as a big bang, where all systems are migrated at once, or in a staged process. There are advantages and disadvantages to both approaches, so it's worth considering in detail for your particular organisation. It is also possible (in fact likely) that there are some applications in your organisation which will be very costly (or possibly technically impossible) to migrate, so you need to plan for what you do with these – for example you may need to rewrite them in different technologies, or just age them off.

A cloud migration is not just a technical task, and there will be a number of business processes and strategies that will need to be reconsidered or rewritten from scratch. DR and business continuity plans, support processes, new account processes, etc, may all need to be rewritten with new ways of doing things.

Avoid common pitfalls

There are a number of common pitfalls that people come across that can result in a cloud migration project not delivering its benefits, or not producing the anticipated savings. 

The main cause of this is underestimating in some way the complexity of the task, or trying to rush it and making early mistakes. It is not uncommon to find undocumented interfaces in a system that is to be migrated (for example, reliance on shared file systems or authentication providers) that are not covered on infrastructure diagrams, and so get forgotten in migration planning.

Another key cause of failure is not planning for the business and process change needed with adopting a cloud provider, leaving staff forced to accept changes that they don’t understand, and without the necessary skills to perform their jobs.

All of the above pitfalls can be avoided with good planning and an understanding of the complexities of a cloud migration project, allowing you to deliver the very real benefits and cost savings that a cloud infrastructure can provide. 

If you're considering the cloud as part of your infrastructure strategy and would like to discuss your project with Matt, contact us on 0845 539457 to arrange a conference call.

14 July 2016

Monitoring Tomcat with JavaMelody

In this post, troubleshooting specialist, Andy Overton describes an on-premise monitoring solution he deployed for a customer using a small number of Tomcat instances for a business transformation project. Using a step-by-step approach, he walks through his JavaMelody configuration and how he implements alerts in tandem with Jenkins.


Whilst working with a customer recently I was looking for a simple, lightweight monitoring solution for monitoring a couple of Tomcat instances when I came across JavaMelody.


After initial setup - which is as simple as adding a couple of jar files to your application - you immediately get a whole host of information readily available with no configuration whatsoever.

After playing about with it for a while I was impressed, and decided to write this blog in the hope that my experiences might throw some light on a few of the more complex configurations (e-mail notifications, alerts etc.).

Technology Landscape

I’m going to start from scratch so you can follow along. To begin with, all of this was done on a VM with the following software versions:
  • OS – Ubuntu 16.04
  • Tomcat – 8.0.35
  • JDK - 1.8.0_77
  • JavaMelody – 1.59.0
  • Jenkins - 1.651.2

Tomcat Setup

Download from http://tomcat.apache.org/download-80.cgi

Add an admin user:
Add the following lines to <TOMCAT_DIR>/conf/tomcat-users.xml:

<role rolename="manager-gui"/>
<user username="admin" password="admin" roles="manager-gui"/>
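For orientation, those two entries sit inside the existing <tomcat-users> root element. A minimal file would look something like the sketch below (the admin/admin credentials are just the placeholder from above - use a real password anywhere non-local):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- <TOMCAT_DIR>/conf/tomcat-users.xml : minimal example granting manager access -->
<tomcat-users>
  <role rolename="manager-gui"/>
  <user username="admin" password="admin" roles="manager-gui"/>
</tomcat-users>
```

Tomcat re-reads this file on startup, so restart if you edit it while the server is running.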

Start Tomcat by running <TOMCAT_DIR>/bin/startup.sh

The management interface should now be available at: http://localhost:8080/manager

JavaMelody Setup

Download from https://github.com/javamelody/javamelody/releases and unzip.

Add the files javamelody.jar and jrobin-x.jar to the WEB-INF/lib directory of the war file you want to monitor.

I used a simple test app intended for testing clustering. Obviously we’re not testing clustering here, but for our purposes it doesn’t matter what the application does.

Download the clusterjsp.war from here (or use your own application):

Drop the war file in the <TOMCAT_DIR>/webapps directory and it should auto-deploy.

Point a browser to http://localhost:8080/clusterjsp/monitoring and you should see a screen similar to this screen grab from github:

First Look

For new users, I'll just offer a quick run-down of my out-of-the-box experience. The first thing you see is the set of graphs immediately available:

  • Used memory
  • CPU
  • HTTP Sessions
  • Active Threads
  • Active JDBC connections
  • Used JDBC connections
  • HTTP hits per minute
  • HTTP mean times (ms)
  • % of HTTP errors
  • SQL hits per minute
  • SQL mean times (ms)
  • % of SQL errors
You can access additional graphs for such things as garbage collection, threads, memory transfers and disk space via the 'Other Charts' link, and helpfully these can be easily expanded with a mouse click. Less helpfully, there's no auto-refresh so you do need to update the charts manually.

If you scroll down, you'll find that 'System Data' will make additional data available and here you can perform the following tasks:
  • Execute the garbage collector
  • Generate a heap dump
  • View a memory histogram
  • Invalidate http sessions
  • View http sessions
  • View the application deployment descriptor
  • View MBean data
  • View OS processes
  • View the JNDI tree

You can also view the debugging logs from this page - offering useful information on how JavaMelody is operating.

Reporting Configuration Guide

JavaMelody features a reporting mechanism that will produce a PDF report of the monitored application which can be generated on an ad-hoc basis or be scheduled for daily, weekly or monthly delivery.

To add this capability simply copy the file itext-2.1.7.jar, located in the directory src/test/test-webapp/WEB-INF/lib/ of the supplied javamelody.zip file to <TOMCAT_DIR>/lib and restart Tomcat.

This will add 'PDF' as a new option at the top of the monitoring screen.

Setting up an SMTP Server
In order to set up a schedule for those reports to be generated and sent via email, you first need to set up a Send Only SMTP server.

Install the software: sudo apt-get install mailutils

This will bring up a basic installation GUI and here you can select 'Internet Site' as the mail server configuration type. Then simply set the system mail name to the hostname of the server.

You'll then need to edit the configuration file /etc/postfix/main.cf and alter the following line from inet_interfaces = all to inet_interfaces = localhost
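If you prefer to script that edit, a sed one-liner like the following does the job. It's shown here against a scratch copy so you can see the effect before pointing it (with sudo) at the real /etc/postfix/main.cf:

```shell
# Demonstrate the config edit on a scratch copy of the relevant line.
# For real use, run the sed command against /etc/postfix/main.cf with sudo.
printf 'inet_interfaces = all\n' > /tmp/main.cf.demo

# Replace the line in place; -i.bak would keep a backup of the original.
sed -i 's/^inet_interfaces = all$/inet_interfaces = localhost/' /tmp/main.cf.demo

cat /tmp/main.cf.demo   # prints: inet_interfaces = localhost
```
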

Restart postfix with sudo service postfix restart

You can test it with the following command (replacing the e-mail address):
echo "This is a test email" | mail -s "TEST" your_email_address

Scheduling the Report
With the email done, the next step is to schedule JavaMelody to send out daily e-mails of the PDF report. Firstly we need to download a couple of additional libraries.

When you have these, copy both files to <TOMCAT_DIR>/lib and add the following code to <TOMCAT_DIR>/conf/context.xml (replacing the e-mail address):

<Resource name="mail/MySession" auth="Container" type="javax.mail.Session" mail.smtp.host="localhost"/>
<Parameter name="javamelody.admin-emails" value="your_email_address" override="false"/>
<Parameter name="javamelody.mail-session" value="mail/MySession" override="false"/>
<Parameter name="javamelody.mail-periods" value="day" override="false"/>

Once the server is restarted, you can send a test mail by calling the test mail action.


Alerts (Using Jenkins)

Alerting takes a little more setting up and isn’t provided by JavaMelody itself. Instead, it's provided by Jenkins with a Monitoring add-on, so first of all, you'll need to download Jenkins from:

Use the following command to run Jenkins (we need to run on a different port as we have Tomcat running on the default 8080):  java -jar jenkins.war --httpPort=9090

Jenkins is now available at: http://localhost:9090

The next step is to install the following plug-ins for Jenkins:
  • Monitoring – Needed for linking in with JavaMelody
  • Groovy – Needed to run Groovy code. This is required for setting up the alerts.
  • Email Extension – Needed to customise the e-mails Jenkins sends out

To install the monitoring plugin:
  1. Click 'Manage Jenkins'
  2. Select 'Manage Plugins'
  3. Select 'Available'
  4. Find and select the 'Monitoring Plugin'
  5. Click 'Install without restart'

Then follow the same procedure for Groovy and Email Extension. 

Groovy Configuration

Now, let's make sure the Groovy runtime is installed and configured. Running sudo apt-get install groovy will install it to /usr/share/groovy

In order to run our Groovy scripts and call JavaMelody methods, we'll need log4j and JavaMelody on the Groovy classpath. JavaMelody uses an old version of log4j (1.2.9), which can be downloaded from:

To configure Groovy:
  1. Go to Manage Jenkins, select 'Configure System'
  2. Under the Groovy section, select 'Groovy Installations'
  3. Add a name for your installation.
  4. Set GROOVY_HOME to /usr/share/groovy

Email Extension Plugin Configuration
  1. Go to Manage Jenkins, select 'Configure System'
  2. Under Jenkins location, set the URL to: http://hostname:9090 (replacing hostname with your hostname)
  3. Set the System Admin e-mail address to: donotreply@jenkins.com (or something similar – this is the address that alert e-mails will be sent from)
  4. Under the Extended E-mail Notification section, set SMTP server to localhost

Creating Alerts
Next up we'll set up a test alert, which triggers when there are more than 0 HTTP sessions - obviously not realistic, but good for demo and testing purposes.

From the main Jenkins menu:
  1. Select 'New Item'
  2. Select 'Freestyle' project
  3. Add the following details:
    • Name - High Session Count Alert
    • Description - Test alert triggered when there are more than 0 HTTP sessions
  4. Under 'Build Triggers', select 'Build periodically'

    Now you can schedule how often to run your alert check. The syntax is standard cron (minute, hour, day of month, month, day of week). Here we will set it to run our check every 10 minutes using the following: */10 * * * *
  5. Under 'Build', click 'Add build step'
  6. Select 'Execute Groovy' script
  7. Set the 'Groovy Version' to whatever you called it previously
  8. Add the following Groovy code:

import net.bull.javamelody.*;

// URL of the JavaMelody monitoring page of the application we deployed earlier
url = "http://localhost:8080/clusterjsp/monitoring";

// Ask the remote JavaMelody instance for its active HTTP session information
sessions = new RemoteCall(url).collectSessionInformations(null);

// Fail the build (and so trigger the alert e-mail) if any session exists
if (sessions.size() > 0) throw new Exception("Oh No - More than zero sessions!!!");

This simple piece of code calls the JavaMelody URL, retrieves the session information and, if the session count is greater than zero, throws an Exception. Add javamelody.jar and the log4j jar to the classpath (under 'Advanced') e.g.:


Under 'Post-Build Actions', select 'Add post build action', then select 'Email Notification', add the email address to send the alert to and finally, Save.


In order to test that the alert triggers as required, simply call your application (e.g. open http://localhost:8080/clusterjsp/ in a browser to create a session).

You should receive an e-mail with the subject 'Build failed in Jenkins', which looks something like this:

Started by user anonymous
Building in workspace <>
[workspace] $ /usr/share/groovy/bin/groovy -cp /home/andy/javamelody/javamelody.jar:/home/andy/logging-log4j-1.2.9/dist/lib/log4j-1.2.9.jar "<">
Caught: java.lang.Exception: Alert-Start
Oh No - More than zero sessions!!! Number of sessions: [SessionInformations[id=9BBFCF23C5126EDDBD44B371F1B11FD0, remoteAddr=, serializedSize=229]]
java.lang.Exception: Alert-Start
Oh No - More than zero sessions!!! Number of sessions: [SessionInformations[id=9BBFCF23C5126EDDBD44B371F1B11FD0, remoteAddr=, serializedSize=229]]
        at hudson4959397560302939243.run(hudson4959397560302939243.groovy:7)
Build step 'Execute Groovy script' marked build as failure

As Jenkins is generally used as a build tool, the outgoing e-mail isn’t the most user-friendly when we’re using it for alerting purposes. So, the final thing we will look at is turning the outgoing e-mail into something more legible.

Editing the Outgoing Email

First of all we will alter the Groovy script so that we can strip out the stack trace and additional information that we don’t need as we’re alerting on a specific condition of our app, not the underlying JavaMelody code.

In order to do so we will use Alert-Start and Alert-End to indicate the start and end of the alert message we want to put in the e-mail we will send out. Later we will use a regular expression to extract this from the whole Exception.

Go to the High Session Count Alert project and alter the last line of the Groovy script, changing it from:

if (sessions.size() > 0) throw new Exception("Oh No - More than zero sessions!!!");

to:
if (sessions.size() > 0) throw new Exception("Alert-Start\nOh No - More than zero sessions!!! Number of sessions: " + sessions.size() + "\nAlert-End");

  1. Click Configure
  2. Delete the e-mail notification post-build action
  3. Add a new one - Editable Email Notification
  4. Set Project Recipient List, add your e-mail address
  5. Set the Default Subject to - JavaMelody - High Session Count ALERT
  6. Set the Default Content to the following:

Build URL : ${BUILD_URL}

Alert : ${PROJECT_NAME}

Description: ${JOB_DESCRIPTION}

${BUILD_LOG_EXCERPT, start="^.*Alert-Start.*$", end="^.*Alert-End.*$"}

This will result in an e-mail containing the following:

Build URL :

Alert : High Session Count Alert

Description: Test alert triggered when there are more than 0 HTTP sessions

Oh No - More than zero sessions!!! Number of sessions: 1

The key thing here is BUILD_LOG_EXCERPT. This takes two regular expressions indicating the start and end lines within the build log. This is where we strip out all of the extraneous stack-trace info and keep just the message between the Alert-Start and Alert-End markers.
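To get a feel for what that token is doing, here is a rough shell analogue of the same extraction (the plugin's exact handling of the matching lines may differ slightly - this is just the idea):

```shell
# A rough shell analogue of BUILD_LOG_EXCERPT: keep only the lines
# between the Alert-Start and Alert-End markers in a build log.
log='Started by user anonymous
java.lang.Exception: Alert-Start
Oh No - More than zero sessions!!! Number of sessions: 1
Alert-End
Build step failed'

# Select the marker-to-marker range, then drop the marker lines themselves.
printf '%s\n' "$log" | sed -n '/Alert-Start/,/Alert-End/{/Alert-Start/d;/Alert-End/d;p;}'
# prints: Oh No - More than zero sessions!!! Number of sessions: 1
```

The same range-then-delete pattern works for any log where you can plant unambiguous start and end markers, which is exactly why the script wraps the alert message in them.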

To see a list of all available email tokens and what they display, you can click the "?" (question mark) next to the Default Content section.


Hopefully, this blog has given you a good starting point for using JavaMelody and Jenkins to monitor your Tomcat instances. There is a lot more that I haven’t covered, but I’ll leave it to the reader to dig a little deeper.

I’ve been impressed by JavaMelody as a free monitoring tool that is simple to set up. Configuring the alerts takes a bit more effort, but it’s nothing too difficult, and it’s a tool I’d certainly recommend.