26 September 2016

JBUG - Experience the Potential of JBoss Private Cloud

As organisers and sponsors of the London JBUG, we were delighted to welcome back one of our most popular speakers, Eric D. Schabell, for two talks on Red Hat Private Cloud. In this post, c2b2 Senior Consultant Brian Randell looks back at the evening and offers a summary of Eric's talks.

It was a pleasure to host the London JBoss User Group on the 20th September. It was my first time at a JBUG meet-up and my first time hosting - so I wasn't entirely sure what to expect!  

We assembled at Skills Matter's CodeNode venue which, situated directly in the city near Moorgate Tube, is easy to get to and seemed alive with technical professionals attending conferences, classes and speaker events like ours. I was delighted to see so many attendees - some with laptops out and ready for the live demos.  Once the technician had completed a final check of the cameras and sound, we were away...

I felt especially privileged to welcome our speaker Eric D. Schabell from Red Hat who was to deliver both talks. Eric travels the world speaking about Red Hat's Integrated Solutions Technology and is a highly respected and engaging speaker, so I knew we were in for a very revealing and informative night.

The first talk 'A local private PaaS in minutes with Red Hat CDK' had Eric showing us how, by using the Container Development Kit, we could have a private cloud running BPM Suite on an OpenShift pod in minutes!

By using Vagrant, Kubernetes, VirtualBox, OpenShift Container Platform and BPM Suite, you can deploy a local virtual machine running a RHEL Docker container, have OpenShift Container Platform deployed into that container, and deploy Red Hat BPM Suite into a newly created JBoss pod.

How easy was that?! 

This brings all the benefits of containerisation. You can create and trash them as often as you need, providing an easy way for creating demos, code testing environments, prototyping, and whatever else you may want.

Eric talked about how having a good container strategy allows you to take control and test your application in as like-for-like an environment as you can get.  It's the Red Hat Container Development Kit that allows us this possibility - with the ability to run lots of services at the same time and test how they interface and talk to each other.

The CDK is easy to use and is distributed as a Vagrant box.  It runs on the most commonly used platforms, supports various virtualisation providers, and contains examples to help you along.  It really made me want to go try it at home as I build up my own demo systems and playgrounds with which to try things out.  
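For the curious, the workflow Eric demonstrated boils down to a handful of Vagrant commands. This is only a sketch - the Vagrantfile location, box name and exact steps vary by CDK release, so treat the paths here as assumptions rather than the official instructions:

```shell
# Run from the directory containing the CDK's OpenShift Vagrantfile
# (location varies by CDK release - this is an illustrative sketch)
vagrant up        # boot the RHEL VM and start OpenShift Container Platform
vagrant ssh       # log in to the VM to poke around the running containers
vagrant destroy   # trash the whole environment when you are done
```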

See the links below to try it out for yourself.

After a quick break and a (c2b2 sponsored!) cold beer at the CodeNode Space Bar, the second talk 'Painless containerization in your very own Private Cloud' expanded on what we had seen with the CDK and put it more into a business context.

Containers provide a way of allowing environments to be provisioned locally, and be tested on without having to wait for them to be made available by Operations - or be as dependent on other teams.  Using standard images you can concentrate on the things that matter to you.

The Red Hat Cloud Suite is useful here as it uses the OpenShift Container Platform to provide a simple way to deploy and build applications. You can then rationalise your containers to be more specific to their services, and get them to interact and talk to each other so that you can standardise the interfaces and focus on the containers you need to.

And that was that: a couple of really great talks ending with some great pizza (also sponsored by c2b2) and more drinks. What could be better?!

I left the meet-up having had a really good evening. Thanks to Eric for presenting and thanks to all that showed up to listen. I'm looking forward to the next one and itching to get playing with the CDK :)

Eric's talk and slides are available on his blog:

The evening is available on the Skills Matter website:


12 August 2016

How to Configure JBoss EAP 7 as a Load Balancer

Following on from Brian's recent work on Planning for a JBoss EAP 7 migration, he returns to the new features now available to JBoss administrators, and looks specifically at configuring Undertow as an HTTP load balancer.


The environment I used was an Amazon EC2 t2.medium tier shared host running Red Hat Linux 7.2 (Maipo) with 4GB RAM and 2 vCPUs.  This has Oracle Java JDK 8u92 and JBoss EAP 7.0.0 installed.

I wanted to have three separate standalone servers, so I copied the standalone folder three times from the vanilla EAP 7 install and renamed them standalone-lb, standalone-server1, standalone-server2.

I then ran three instances of JBoss using the -Djboss.server.base.dir command line argument for each one, to specify the three different configurations. I kept the lb server with the default ports but used the port offset argument -Djboss.socket.binding.port-offset to offset the ports by 100 for each server.

Hence, for http the lb was running on port 8080 and server1 and server2 were running on ports 8180 and 8280 respectively.
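As an illustration, the three instances could be started like this (a sketch only; the base-dir paths depend on where you copied the standalone folders relative to the EAP install):

```shell
# Load balancer on the default ports (HTTP on 8080)
./bin/standalone.sh -Djboss.server.base.dir=standalone-lb &
# Backend servers offset by 100 and 200 (HTTP on 8180 and 8280)
./bin/standalone.sh -Djboss.server.base.dir=standalone-server1 -Djboss.socket.binding.port-offset=100 &
./bin/standalone.sh -Djboss.server.base.dir=standalone-server2 -Djboss.socket.binding.port-offset=200 &
```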


Next I needed to find a web application to run on JBoss that would show me that the load balancing had pointed to that server.

I decided to use the JBoss EAP 7 Quickstart for helloworld-html5. This provides a very simple page to display where you can add your name, press a button and it will display it.  What it also does is provide a stdout message in the logs with the name you have entered.  Hence it is easy to know which server you have connected to.

I imported the helloworld-html5 project into JBoss Developer Studio and exported the war file which I then deployed onto server1 and server2.
Testing it on server2 (with port offset of 200) using the URL http://<hostname>:8280/helloworld-html5/, you can see the message displayed on the screen and the name entered in the log file:


So now we need to configure the load balancing on the lb server.
For this we need to add some configuration to Undertow referencing the outbound destinations that we also need to configure for our servers.
We will be:

  • Adding in remote outbound destinations for the servers we want to load balance (providing the hostname and port)
  • Adding a reverse proxy handler into Undertow
  • Adding the outbound destinations to the reverse proxy handler (setting up the scheme we want to use, i.e. ajp or http, and the path for the application which in our case is helloworld-html5)
  • Adding the Reverse Proxy location (i.e. what url path will we follow on the Load Balancer for it to be redirected)

The following CLI configured the outbound destinations:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=lbhost1:add(host=<hostname>, port=8180)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=lbhost2:add(host=<hostname>, port=8280)

You can see these now configured in the console:

The following CLI added the reverse proxy handler (I have called it ‘lb-handler’) to Undertow:

/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler:add()

The following CLI adds the remote destinations to the reverse proxy handler (I have named the hosts ‘lb1’ and ‘lb2’ and have named the instance-id ‘lbroute’ for both so it will round robin around them):

/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler/host=lb1:add(outbound-socket-binding=lbhost1, scheme=http, instance-id=lbroute, path=/helloworld-html5)
/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler/host=lb2:add(outbound-socket-binding=lbhost2, scheme=http, instance-id=lbroute, path=/helloworld-html5)

We can now see the completed handler configuration:

    "outcome" => "success",
    "result" => {
        "cached-connections-per-thread" => 5,
        "connection-idle-timeout" => 60L,
        "connections-per-thread" => 10,
        "max-request-time" => -1,
        "problem-server-retry" => 30,
        "request-queue-size" => 10,
        "session-cookie-names" => "JSESSIONID",
        "host" => {
            "lb1" => {
                "instance-id" => "lbroute",
                "outbound-socket-binding" => "lbhost1",
                "path" => "/helloworld-html5",
                "scheme" => "http",
                "security-realm" => undefined
            },
            "lb2" => {
                "instance-id" => "lbroute",
                "outbound-socket-binding" => "lbhost2",
                "path" => "/helloworld-html5",
                "scheme" => "http",
                "security-realm" => undefined
            }
        }
    }

And to complete the configuration I add the location which the handler will serve with the following CLI (setting this so that anything going to the /app URL will be handled):

/subsystem=undertow/server=default-server/host=default-host/location=\/app:add(handler=lb-handler)

This we can now see in the settings:

    "outcome" => "success",
    "result" => {
        "handler" => "lb-handler",
        "filter-ref" => undefined
    }

That is all the configuration we need.


To test, we use the load balancer URL with /app; this should now redirect to the remote servers using /helloworld-html5.  If I then type in a value and press the button I can see which server I have been redirected to.
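From the command line, a few repeated requests show the alternation (a sketch - substitute your own host name):

```shell
# Each request to /app is proxied to /helloworld-html5 on one of the
# two backends; tail the server logs to see which instance served it.
for i in 1 2 3 4; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://<hostname>:8080/app/"
done
```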

Tailing the logs on both servers, we can see that each browser refresh redirects to the other server, continuing in a round-robin pattern.


There you go - a straightforward process to configure JBoss EAP 7 in standalone mode as an HTTP load balancer in front of two standalone servers using round robin.

10 August 2016

Looking for Reliable WildFly Support for your Business or Organisation? Six Things You Need to Think About!

If you’re using WildFly as part of a commercial middleware infrastructure, then you’ll understand the importance of having access to high quality middleware support – and the need for expert WildFly troubleshooting advice when faced with business-critical tickets.

Expert WildFly Support from the UK's Leading Middleware experts

Whilst the WildFly community offers a huge source of knowledge, expertise and enthusiasm for upstream JBoss middleware (we should know, because we’re part of it), finding the specific information you need at the time you need it most - and being able to apply it to your operational environment with best-practice expertise within a priority-one time frame - can challenge even the best in-house middleware teams.

In this post, we’ve summarised our experiences of the WildFly world (and those of clients) into six key things to think about if you’re accountable for your organisation’s WildFly support. Hopefully it will shed some light on some of the pitfalls that might lie ahead – and help you make better decisions when planning your middleware support strategy.

JBoss WildFly Troubleshooting and support
The most common touch-point we have with new clients is when something breaks and they reach out to us for WildFly troubleshooting engagements.  Often they’re searching for solutions to poor deployments implemented by alternative service providers, but frequently they find that whilst their in-house team delivered a decent implementation project, they have since run into problems.

Regardless of the middleware technology deployed, there’s a huge difference between the skill sets required to implement the product, and those needed to manage and maintain operational performance. WildFly is no different - and because you can’t fall back on a Red Hat support subscription, the need for a sound support strategy becomes even more critical.

Without hands-on experience of how the technology works for real-world business operations, we often find that the in-house team members who have championed the initial WildFly implementation project don’t always have the understanding or investigatory skills needed to fully support operational performance.

What to think about…
The situation described above isn’t a great one to find yourself - so if you’re using WildFly, make sure you get your operational performance objectives clear and carry out an honest appraisal of your team’s ability to investigate problems across the infrastructure and support those issues.

If you’re embarking on a WildFly implementation, do the same – but look beyond the potentially low cost entry and focus on the potential costs of meeting those objectives - and the cost to the business of operational failure. Think about the investment you’ll need to make in recruitment, training and resource management to achieve the level of cover and the expertise you’ll need to resolve a whole spectrum of service failure scenarios.

Even the less complex middleware environments use a range of dependent technologies which can be implicitly related to WildFly performance – for example, load balancers (like Apache, Nginx or HAProxy), databases, ESBs, or message brokers like ActiveMQ or other JMS providers.

Whether there is a particular tech under the WildFly hood or whether WildFly is working in tandem with other Java technologies, expertise across the whole landscape of your systems is essential in order to investigate and identify the root causes of your WildFly tickets.

Even the most ardent in-house WildFly enthusiasts may not have the experience of working across all the technologies you’re using – or understand the interdependencies that can affect operational performance. It can be pretty straightforward getting an app running, but much harder getting it to perform well.

It can take years of supporting complex middleware infrastructure to analyse issues with speed and accuracy; and an awareness of the whole landscape to deliver an appropriate solution.

What to think about…
What are your team’s real core skills and can you rely on them to deliver bullet-proof operational fixes when you need them most and against the clock?

If you’re thinking about support – think about proven WildFly problem-solving skills in a pressurised commercial environment.

Even if it were possible to stick a plug into the back of your head and download the WildFly community knowledge base, your team still need the skills to research and resolve complex investigations.

It’s true, the WildFly community is a hub of expertise and knowledge - but for all the thousands of blog posts, forums, technical documents, videos, webinars, opinion pieces, case studies and walk-throughs that you could find if you looked hard enough, how many will be relevant to the specific support issues your team will face on a daily basis?

Bear in mind that not one member of that global community has any knowledge of you, your business or your operational needs; they haven’t examined your architecture, they don’t know your configurations, they have no idea how WildFly fits within your infrastructure or how it meets your business needs. Nor do they have any concept of the risks you face or the resolution times you have to meet.

On top of this, the process of searching through the vast expanse of information out there is a daunting prospect. Knowing which sources to trust, identifying relevant content, and piecing together your solutions not only requires excellent search and discover skills, but can devour your response times. 

What to think about…
If you think in these terms, using the WildFly community as a business-critical resource is something that probably shouldn’t feature too strongly in your support strategy! 

As an up-stream open source solution, it can also be more difficult to transition from one version of WildFly to the next – and this difficulty isn’t limited to WildFly itself when there are other dependencies within the infrastructure.

So, consider whether your team have the capabilities to implement ongoing release cycles, updates and patching across your middleware – and will they understand the broader implications?

If your business services are operating around the clock, at some point you’ll face the challenge of rectifying systems or reinstating service availability out of office hours when your team isn’t available and you might not be able to attend to the problem remotely. 

Unless you invest in an in-house team structure that guarantees on-call expert WildFly support around the clock regardless of leave and illness, you’ll continually run the risk of extended business downtime – and since WildFly comes without Red Hat product support, the challenge of rectifying services in that scenario can become a solitary one!

What to think about…
When planning for support, think about how a team rota would look if you’re covering your business operations. Accommodating leave, maternity/paternity, illness and recruitment issues can start to look expensive – and remember that not only will you need to find those skills in the market-place, but you’ll need to manage them as well! 

The conversations we have with companies and organisations when discussing support services often feature tales of frustration and dissatisfaction with experiences of help-desks employing off-shore support engineers.

Whilst working with a large sub-contracting organisation may offer a relatively lower-cost option on paper, you only realise the true value of support when you really need it most. When your business is down and the accountability for rectification is on your shoulders, the last thing you want to deal with is a support engineer with limited or no knowledge of your company, a lack of understanding about your infrastructure, and no experience of your operational priorities.

Instead of answering questions to fill the knowledge gaps about who you are and what middleware you’re using – you should be answering questions directly posed to investigate and resolve the issue in hand.

What to think about…
When you enter into an out-sourced WildFly support contract, make sure you have a clear understanding of the relationship you’ll have with the service provider. A larger provider may not give you the level of personal service you enjoy with a niche company, and is less likely to be as engaged with your objectives. 

You might feel more reassured by a provider who offers up-front health checks and infrastructure evaluation prior to support commencing, or one who demonstrates an ethos of partnership in the provision of support. Ultimately you want to know that you’re working with people who not only support your middleware, but your business objectives as well.

If you do find a company capable of supporting your WildFly environments to the standards you expect, it can be frustrating to then find they don’t have the skills or resources to offer additional middleware service solutions.

Working with a provider who can deliver a completely integrated portfolio of WildFly services is a more reassuring and easier proposition. What the clients we speak with find of enormous value is having a dedicated account manager who not only handles support services, but who can also deliver project proposals from proof-of-concept and architectural design to implementation and DevOps.

What to think about…
Take another step forward and think of how good it would be to have the account manager, the support engineers and the professional services consultants fully integrated into the same team. 

This holistic ethos would create a genuine shared knowledge about your organisation, an understanding of your infrastructure and a team of experts available to call upon via support tickets or full-service projects. 

When we spoke with our clients and asked them about their decision-making processes for WildFly support – these six items were always the ones we had the most conversations about. Of course, it’s not an exhaustive list, but hopefully demonstrates a few of the issues you might want to think about when putting together a support strategy.

Finally, the issues raised here concern supporting business operations, but there may come a time when the organisation wants to consider new paradigms such as transitioning to microservices architecture – or even moving to JBoss enterprise infrastructure. Whilst this goes beyond the scope of a support service, think of the advantages of having an independent professional middleware consultancy with expertise in WildFly Swarm, service discovery, containers, DevOps and enterprise application platforms.

If you are considering WildFly support or want to discuss broader WildFly projects, contact us using the form below and we’ll put you in touch with one of our Red Hat specialists.

5 August 2016

Planning a Successful Cloud Migration

For most organisations, migrating some or all of their applications to a cloud hosting provider can deliver significant benefits, but like any project it is not without risks and needs to be properly planned to succeed.  In this post, c2b2 Head of Profession Matt Brasier offers an overview of a talk he recently gave to delegates attending the Oracle User Group (OUG) Scotland. He'll look at some of the things that you can do at a high level to help you get the most of your cloud migration - and break down some of the common factors into more concrete considerations.

Understand what you are looking to get out of the migration

As with anything your business does, there needs to be a good reason to do it, and in the case of a cloud migration, the reasons usually come down to costs. 

However, there are other benefits that can be realised during a cloud migration programme that (while they come down to reducing costs in the end) produce more immediate and tangible benefits. Moving some of your organisation's infrastructure to the cloud is going to necessitate some changes in job roles and responsibilities, together with bringing in new ways of working and processes. A cloud migration programme is a great time to bring more modern development and operations (or DevOps) processes in, providing benefits in terms of delivering quicker application fixes and improvements to the end users.

Cloud infrastructure often provides less scope for customisation compared to running the same applications on-premise, and while that can cause some problems, it does force your organisation to limit itself to using common or standards-based approaches, rather than developing its own. 

Eliminating customised or bespoke infrastructure and applications where possible reduces your support costs, as it allows you to find commodity skills in the market to maintain them, rather than having to train people in-house.

In order to make sure that your migration is actually successful (i.e. delivers what you are looking for rather than just moving you to cloud because someone told you it was a good idea), you need to properly identify where you are expecting to make savings (hardware, infrastructure management staff, licenses and support costs) and what your new costs will be (how much capacity will you need, what retraining is needed, what processes and procedures need to change).

Cloud vendors will often over-hype the cost savings you can make by not including the costs of things like upskilling and retraining in their analysis. If the main objective of a migration to the cloud is cost saving then you need to ensure you fully understand all the costs of the migration.

Plan the migration and manage risks

Once you understand why you want to migrate, and what success looks like, you need to plan how you get there. The risks to a migration project can broadly be categorised into two types:

  • Technical risks
Where applications or components don’t work in the cloud, or need more work than anticipated to get working
  • Non-technical risks
    Where business or process factors need to change

A migration from on-premise infrastructure (or infrastructure rented in a data centre) to a cloud provider is different to just upgrading your infrastructure versions and refreshing the hardware. 

It is key to consider from the start that you will be significantly changing the way some people need to perform their jobs, and possibly even making people redundant. The project therefore needs to be handled with sensitivity and care from the start, ensuring that retraining and organisational change are tackled at the same time as technical migration.

At a more technical level, it is important to understand how many systems you are planning on moving to the cloud and when they will move. There will be dependencies between systems that will need to be managed, and interfaces between your cloud infrastructure and on-premise infrastructure that need to be migrated.

One of the biggest factors to consider when planning your migration is whether you plan to “cut over” to the cloud systems as a big bang, where all systems are migrated at once, or in a staged process. There are advantages and disadvantages to both approaches, so it's worth considering in detail for your particular organisation. It is also possible (in fact likely) that there are some applications in your organisation which will be very costly (or possibly technically impossible) to migrate, so you need to plan for what you do with these – for example you may need to rewrite them in different technologies, or just age them off.

A cloud migration is not just a technical task, and there will be a number of business processes and strategies that will need to be reconsidered or rewritten from scratch. DR and business continuity plans, support processes, new account processes, etc, may all need to be rewritten with new ways of doing things.

Avoid common pitfalls

There are a number of common pitfalls that people come across that can result in a cloud migration project not delivering its benefits, or not giving the anticipated savings. 

The main cause of this is underestimating in some way the complexity of the task, or trying to rush it and making early mistakes. It is not uncommon to find undocumented interfaces in a system that is to be migrated (for example, reliance on shared file systems or authentication providers) that are not covered on infrastructure diagrams, and so get forgotten in migration planning.

Another key cause of failure is not planning for the business and process change needed with adopting a cloud provider, leaving staff forced to accept changes that they don’t understand, and without the necessary skills to perform their jobs.

All of the above pitfalls can be avoided with good planning and an understanding of the complexities of a cloud migration project, allowing you to deliver the very real benefits and cost savings that a cloud infrastructure can provide. 

If you're considering the cloud as part of your infrastructure strategy and would like to discuss your project with Matt, contact us on 0845 539457 to arrange a conference call.

14 July 2016

Monitoring Tomcat with JavaMelody

In this post, troubleshooting specialist Andy Overton describes an on-premise monitoring solution he deployed for a customer using a small number of Tomcat instances for a business transformation project. Using a step-by-step approach, he walks through his JavaMelody configuration and how he implements alerts in tandem with Jenkins.


Whilst working with a customer recently I was looking for a simple, lightweight monitoring solution for monitoring a couple of Tomcat instances when I came across JavaMelody.


After initial setup - which is as simple as adding a couple of jar files to your application - you immediately get a whole host of information readily available with no configuration whatsoever.

After playing about with it for a while and being impressed, I decided to write this blog because I thought I might be able to use my experiences to throw some light on a few of the more complex configurations (e-mail notifications, alerts etc.).

Technology Landscape

I’m going to start from scratch so you can follow along. To begin with, all of this was done on a VM with the following software versions:
  • OS – Ubuntu 16.04
  • Tomcat – 8.0.35
  • JDK - 1.8.0_77
  • JavaMelody – 1.59.0
  • Jenkins - 1.651.2

Tomcat Setup

Download from http://tomcat.apache.org/download-80.cgi

Add an admin user:
Add the following line to tomcat-users.xml:

<role rolename="manager-gui"/>
<user username="admin" password="admin" roles="manager-gui"/>

Start Tomcat by running <TOMCAT_DIR>/bin/startup.sh

The management interface should now be available at: http://localhost:8080/manager

JavaMelody Setup

Download from https://github.com/javamelody/javamelody/releases and unzip.

Add the files javamelody.jar and jrobin-x.jar to the WEB-INF/lib directory of the war file you want to monitor.
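If you would rather not rebuild the war in an IDE, the jars can be added from the command line. The file names below are illustrative - match them to the jars shipped in your JavaMelody download and to your own war:

```shell
# jar -uf updates entries in the existing war in place
mkdir -p WEB-INF/lib
cp javamelody.jar jrobin-*.jar WEB-INF/lib/
jar -uf mywebapp.war WEB-INF/lib/
```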

I used a simple test app built for testing clustering. Obviously we’re not testing clustering here, but it doesn’t actually matter what the application does for our purposes.

Download the clusterjsp.war from here (or use your own application):

Drop the war file in the <TOMCAT_DIR>/webapps directory and it should auto-deploy.

Point a browser to http://localhost:8080/clusterjsp/monitoring and you should see a screen similar to this screen grab from github:

First Look

For new users, I'll just offer a quick run-down of my out-of-the-box experience. The first thing you see is the set of graphs immediately available:

  • Used memory
  • CPU
  • HTTP Sessions
  • Active Threads
  • Active JDBC connections
  • Used JDBC connections
  • HTTP hits per minute
  • HTTP mean times (ms)
  • % of HTTP errors
  • SQL hits per minute
  • SQL mean times (ms)
  • % of SQL errors
You can access additional graphs for such things as garbage collection, threads, memory transfers and disk space via the 'Other Charts' link, and helpfully these can be easily expanded with a mouse click. Less helpfully, there's no auto-refresh so you do need to update the charts manually.

If you scroll down, you'll find that 'System Data' will make additional data available and here you can perform the following tasks:
  • Execute the garbage collector
  • Generate a heap dump
  • View a memory histogram
  • Invalidate http sessions
  • View http sessions
  • View the application deployment descriptor
  • View MBean data
  • View OS processes
  • View the JNDI tree

You can also view the debugging logs from this page - offering useful information on how JavaMelody is operating.

Reporting Configuration Guide

JavaMelody features a reporting mechanism that will produce a PDF report of the monitored application which can be generated on an ad-hoc basis or be scheduled for daily, weekly or monthly delivery.

To add this capability, simply copy the file itext-2.1.7.jar (located in the directory src/test/test-webapp/WEB-INF/lib/ of the supplied javamelody.zip) to <TOMCAT_DIR>/lib and restart Tomcat.

This will add 'PDF' as a new option at the top of the monitoring screen.

Setting up an SMTP Server
In order to set up a schedule for those reports to be generated and sent via email, you first need to set up a Send Only SMTP server.

Install the software: sudo apt-get install mailutils

This will bring up a basic installation GUI and here you can select 'Internet Site' as the mail server configuration type. Then simply set the system mail name to the hostname of the server.

You'll then need to edit the configuration file /etc/postfix/main.cf and alter the following line from inet_interfaces = all to inet_interfaces = localhost

Restart postfix with: sudo service postfix restart

You can test it with the following command (replacing the e-mail address):
echo "This is a test email" | mail -s "TEST" your_email_address

Scheduling the Report
With the email done, the next step is to schedule JavaMelody to send out daily e-mails of the PDF report. Firstly we need to download a couple of additional libraries - the JavaMail mail.jar and the JavaBeans Activation Framework activation.jar.

When you have these, copy both files to <TOMCAT_DIR>/lib and add the following code to <TOMCAT_DIR>/conf/context.xml (replacing the e-mail address):

<!-- mail.smtp.host assumed here to be the local Postfix instance configured above -->
<Resource name="mail/MySession" auth="Container" type="javax.mail.Session" mail.smtp.host="localhost"/>
<Parameter name="javamelody.admin-emails" value="your_email_address" override="false"/>
<Parameter name="javamelody.mail-session" value="mail/MySession" override="false"/>
<Parameter name="javamelody.mail-periods" value="day" override="false"/>

Once the server is started, you can send a test mail by calling this action:


Alerts (Using Jenkins)

Alerting takes a little more setting up and isn’t provided by JavaMelody itself. Instead, it's provided by Jenkins with a Monitoring add-on, so first of all, you'll need to download Jenkins from:

Use the following command to run Jenkins (we need to run on a different port as we have Tomcat running on the default 8080):  java -jar jenkins.war --httpPort=9090

Jenkins is now available at: http://localhost:9090

The next step is to install the following plug-ins for Jenkins:
  • Monitoring – Needed for linking in with JavaMelody
  • Groovy – Needed to run Groovy code. This is required for setting up the alerts.
  • Email Extension – Needed to customise the e-mails Jenkins sends out

To install the monitoring plugin:
  1. Click 'Manage Jenkins'
  2. Select 'Manage Plugins'
  3. Select 'Available'
  4. Find and select the 'Monitoring Plugin'
  5. Click 'Install without restart'

Then follow the same procedure for Groovy and Email Extension. 

Groovy Configuration

Now, let's make sure the Groovy runtime is installed and configured by using sudo apt-get install groovy to install it to /usr/share/groovy

In order to run our Groovy scripts and call JavaMelody methods we'll need log4j and JavaMelody on the Groovy classpath. JavaMelody uses an old version of log4j (1.2.9), which can be downloaded from:

To configure Groovy:
  1. Go to Manage Jenkins, select 'Configure System'
  2. Under the Groovy section, select 'Groovy Installations'
  3. Add a name for your installation.
  4. Set GROOVY_HOME to /usr/share/groovy

Email Extension Plugin Configuration
  1. Go to Manage Jenkins, select 'Configure System'
  2. Under Jenkins location, set the URL to: http://hostname:9090 (replacing hostname with your hostname)
  3. Set the System Admin e-mail address to: donotreply@jenkins.com (or something similar – this is the address that alert e-mails will be sent from).
  4. Under the Extended E-mail Notification section, set SMTP server to localhost

Creating Alerts
Next up we'll set up a test alert, which triggers when there are more than 0 HTTP sessions - obviously not realistic, but good for demo and testing purposes.

From the main Jenkins menu:
  1. Select 'New Item'
  2. Select 'Freestyle project'
  3. Add the following details:
    • Name - High Session Count Alert
    • Description - Test alert triggered when there are more than 0 HTTP sessions
  4. Under 'Build Triggers', select 'Build periodically'

    Now you can schedule how often to run your alert check. The syntax is exactly like a cronjob. Here we will set it to run our check every 10 minutes using the following: */10 * * * *
  5. Under 'Build', click 'Add build step'
  6. Select 'Execute Groovy' script
  7. Set the 'Groovy Version' to whatever you called it previously
  8. Add the following Groovy code:

import net.bull.javamelody.*;

// URL of the JavaMelody monitoring page for the application we are watching
url = "http://localhost:8080/clusterTest/monitoring";

// Remotely collect the current HTTP session information from JavaMelody
sessions = new RemoteCall(url).collectSessionInformations(null);

// Fail the build (and so trigger the alert e-mail) if any sessions exist
if (sessions.size() > 0) throw new Exception("Oh No - More than zero sessions!!!");

This simple piece of code calls the JavaMelody URL, retrieves the session information and then, if the session count is greater than zero, throws an Exception. Add javamelody.jar and the log4j jar to the classpath (under 'Advanced') e.g.:


Under 'Post-Build Actions', select 'Add post build action', then select 'Email Notification', add the email address to send the alert to and finally, Save.


In order to test the alert triggers as required simply call your application e.g.

You should receive an e-mail with the subject 'Build failed in Jenkins', which looks something like this:

Started by user anonymous
Building in workspace <>
[workspace] $ /usr/share/groovy/bin/groovy -cp /home/andy/javamelody/javamelody.jar:/home/andy/logging-log4j-1.2.9/dist/lib/log4j-1.2.9.jar "<">
Caught: java.lang.Exception: Alert-Start
Oh No - More than zero sessions!!! Number of sessions: [SessionInformations[id=9BBFCF23C5126EDDBD44B371F1B11FD0, remoteAddr=, serializedSize=229]]
java.lang.Exception: Alert-Start
Oh No - More than zero sessions!!! Number of sessions: [SessionInformations[id=9BBFCF23C5126EDDBD44B371F1B11FD0, remoteAddr=, serializedSize=229]]
        at hudson4959397560302939243.run(hudson4959397560302939243.groovy:7)
Build step 'Execute Groovy script' marked build as failure

As Jenkins is generally used as a build tool, the outgoing e-mail isn’t the most user-friendly when we’re using it for alerting purposes. So, the final thing we will look at is turning the outgoing e-mail into something more legible.

Editing the Outgoing Email

First of all we will alter the Groovy script so that we can strip out the stack trace and additional information that we don’t need as we’re alerting on a specific condition of our app, not the underlying JavaMelody code.

In order to do so we will use Alert-Start and Alert-End to indicate the start and end of the alert message we want to put in the e-mail we will send out. Later we will use a regular expression to extract this from the whole Exception.

Go to the High Session Count Alert project and alter the last line of the Groovy script, changing it from:

if (sessions.size() > 0) throw new Exception("Oh No - More than zero sessions!!!");


to:

if (sessions.size() > 0) throw new Exception("Alert-Start\nOh No - More than zero sessions!!! Number of sessions: " + sessions.size() + "\nAlert-End");

  1. Click Configure
  2. Delete the e-mail notification post-build action
  3. Add a new one - Editable Email Notification
  4. Set Project Recipient List, add your e-mail address
  5. Set the Default Subject to - JavaMelody - High Session Count ALERT
  6. Set the Default Content to the following:

Build URL : ${BUILD_URL}

Alert : ${JOB_NAME}

Description: ${JOB_DESCRIPTION}

${BUILD_LOG_EXCERPT, start="^.*Alert-Start.*$", end="^.*Alert-End.*$"}

This will result in an e-mail containing the following:

Build URL :

Alert : High Session Count Alert

Description: Test alert triggered when there are more than 0 HTTP sessions

Oh No - More than zero sessions!!! Number of sessions: 1

The key thing here is BUILD_LOG_EXCERPT. This takes in 2 regular expressions to indicate the start and end lines within the build log. This is where we strip out all of the extraneous stack trace info and just get the message between the Alert-Start and Alert-End tags.
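You can sketch the same extraction outside Jenkins with sed against a saved copy of the build log (build.log below is a hypothetical file recreating the interesting fragment of the failed build's output):

```shell
# Recreate the interesting fragment of a failed build's log
cat > build.log <<'EOF'
Caught: java.lang.Exception: Alert-Start
Oh No - More than zero sessions!!! Number of sessions: 1
Alert-End
EOF

# Print the lines between (and including) the two marker lines -
# the same selection BUILD_LOG_EXCERPT makes with its start/end regexes
sed -n '/Alert-Start/,/Alert-End/p' build.log
```

This prints all three lines, marker lines included, which is why the start/end regexes in the token are written to match whole lines containing the tags.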

To see a list of all available email tokens and what they display, you can click the "?" (question mark) next to the Default Content section.


Hopefully, this blog has given you a good starting point for using JavaMelody and Jenkins to monitor your Tomcat instances. There is a lot more that I haven’t covered but I’ll leave that as an exercise for the reader to dig a little deeper.

I’ve been impressed by it as a simple to set up, free monitoring tool. Configuring the alerts is a bit more of an effort but it’s nothing too difficult and it’s a tool I’d certainly recommend.

6 July 2016

Planning for a JBoss EAP 7 Migration

by Brian Randell

In my previous post, I had a ‘First Look’ at Red Hat JBoss EAP 7 and highlighted a few fundamental changes from EAP 6. This post has been written to dive deeper under the covers, and aims to examine the key differences between the two versions, looking primarily at the impact of migrating to this version from EAP 6.

I want to consider whether there are any operational considerations regarding migration, and further expand on some of the points raised when I first opened the EAP 7 box.

Support Timeline

Full Support for JBoss EAP 6 finishes at the end of June 2016 - which means there will be no minor releases or software enhancements to the EAP 6 code base from then on. Maintenance Support ends in June 2019 and Extended Life Support ends in June 2022.

So, if you're happy with the features you have and the system's stability, then bug fixes will still be provided for a while to come. However, if you're looking to use newer features and take advantage of those provided by Java EE 7, for example, then it's worth starting the evaluation cycle for JBoss EAP 7 now, so that when the first point release arrives (which is historically the more stable release) you are ready to implement into production.


Supported Configurations

As is the nature of new releases, some older technologies are not supported or are untested - and hence it is unverified whether they work. JBoss EAP 7 is only supported on Java 1.8+ and has not been tested on RHEL 5 or Microsoft Windows Server 2008 (note: it has been tested on Windows Server 2008 R2).

Some of the notable untested database integrations include Oracle 11g R1, Microsoft SQL Server 2008 R2 SP2 and PostgreSQL 9.2 – though I would expect these to be added to over time if there is demand. One addition to the database integration testing has been for MariaDB. Fundamentally, though, the support and testing is in line with previous versions of EAP and what you would expect.

Looking at the Java EE standards supported in EAP 7, JAX-RPC is no longer available; JAX-WS is preferred instead. The standards that have been updated are:

  • Java EE
  • Java Servlet
  • JSF (JavaServer Faces)
  • JSP (JavaServer Pages)
  • JTA (Java Transaction API)
  • EJB (Enterprise Java Beans)
  • Java EE Connector Architecture
  • JavaMail
  • JMS (Java Message Service)
  • JPA (Java Persistence)
  • Common annotations for the Java Platform
  • JAX-RS (Java API for RESTful Web Services)
  • CDI (Contexts and Dependency Injection)
  • Bean validation

The major updates here are primarily for Java EE, JMS and JAX-RS that all have major version changes.

Corresponding to the standards updates, notable component changes from EAP 6 are:

  • JBoss AS has been replaced with WildFly Core
  • JBoss Web has been replaced with Undertow
  • HornetQ has been replaced with ActiveMQ Artemis (though the HornetQ module is retained for backwards compatibility)
  • Apache Web Server has been removed
  • jBeret has been added
  • JBoss Modules has been removed
  • JBoss WS-Native has been removed
  • JSF has been removed
  • The Jacorb subsystem has been removed and switched to OpenJDK ORB
  • The JBoss OSGi framework has been removed

With the standards, components and module changes, you can see that there are a lot of areas that will need to be checked, reconfigured and tested before using EAP 7 with existing code.


Migration Considerations

Careful consideration should always be given to migrating between major versions of an application. In all cases, full evaluation and testing should be undertaken to reduce the risk when deploying to the new environment.

There are a significant number of changes between EAP 6 and EAP 7 with the updated standards used - deprecated APIs, modules and components and modified configuration structure. However there are also a number of compatibilities and interoperabilities provided in EAP 7 that should make the migration easier with proper planning and testing.

Migration tasks should be thought of from various points of view. The main ones I think about when migrating are:

1. Environment
  • Do I need to modify CPU, Memory, Storage, Network, Architecture for the new solution?
  • Can I upgrade in place or side by side, all at once or a few servers at a time?

2. Code
  • Are there deprecated APIs that are used that need to be updated?
  • Do current API calls behave in the same way?

3. Server Configuration
  • Are there server configuration settings that need to be changed?
  • Are the CLI commands the same or are there new ones?

4. Monitoring
  • Is the monitoring you have in place compatible with the new solution?
  • Are there new configurations to add or amend for the updated components and modules?
  • Does the logging behave in the same way?

5. Process / Procedure
  • Are your procedures for operational tasks the same or do they need amending?
  • Are your operational scripts still fit for purpose?

6. Testing
  • Functional, Integration and Performance testing is required to ensure the application behaves within agreed thresholds.


Code

From a code perspective, as mentioned, there are a number of deprecated features and updated standards, so the code will need to be checked and verified to understand whether any changes are needed to ensure compatibility with the new and updated modules.

For this there is a tool called Windup, part of the Red Hat JBoss Migration Toolkit, which analyses your code and reports what will need to be changed.

Some areas that the developers need to be aware of that haven’t already been mentioned are:

  • RESTEasy has a number of deprecated classes
  • Hibernate Search changes
  • JBoss logging annotations change

There are a lot of areas to check in the code, so as a first pass it is sensible to use the Windup tool.

Server Configuration

For the server configuration there are several approaches:

The recommended approach is to use the JBoss Server Migration Tool - but this is currently in alpha (and hence unsupported) and only works against EAP 6.4 standalone servers. It is under active development though, and I expect it to be expanded to cover more versions and reach a full release.

An alternative is to use the EAP 6 configuration as the starting configuration for EAP 7 and use the built-in CLI commands for migrating the messaging, web and jacorb subsystems over to their replacements. This does not update all the required server configuration though, so you may still have to make further changes to arrive at a finalised configuration.
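For reference, those built-in migrate operations look like this from jboss-cli when run against a server started in admin-only mode (the operation names come from the EAP 7 migration documentation, so verify them against your exact version):

```
# Run from $EAP_HOME/bin/jboss-cli.sh --connect,
# with the server started via standalone.sh --admin-only
/subsystem=messaging:migrate(add-legacy-entries=true)
/subsystem=web:migrate
/subsystem=jacorb:migrate
```

Each operation rewrites the legacy subsystem's configuration into its replacement and reports anything it could not migrate automatically.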

I personally always keep CLI scripts to configure the server, so that if a new server is required I can simply run those scripts and the server is configured. These can be run against a newly installed EAP 7 and amended as required to use the new subsystems and configuration structure.
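To make that concrete, such a script is just a text file of CLI commands; the file name, system property and logger below are invented purely for illustration:

```
# configure-server.cli - apply with:
#   $EAP_HOME/bin/jboss-cli.sh --connect --file=configure-server.cli
/system-property=app.environment:add(value=production)
/subsystem=logging/logger=com.example.app:add(level=DEBUG)
```

Because the script is plain text, it can be version-controlled and diffed between the EAP 6 and EAP 7 variants, which makes the configuration changes explicit.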

None of the approaches described is a clean and simple solution, so close attention will need to be paid to ensuring the configuration is correct.

Some of the areas that you need to be aware of are:

  • The ‘web’ subsystem is now the ‘undertow’ subsystem
  • Valves are not supported and need to be migrated to Undertow handlers
  • Rewrite sub-filters need to be converted to expression-filters
  • The ‘messaging’ subsystem is now the ‘messaging-activemq’ subsystem and, rather than having a ‘hornetq-server’, it now simply has a ‘server’
  • To allow EAP 6 JMS connections, ‘add-legacy-entries’ needs to be set to true when migrating via the CLI
  • The threads subsystem has been removed and each subsystem now manages its own threads

It should be noted that the list of other changes affecting the server configuration is too long to reproduce here, which highlights how much care will need to be taken to get the server configuration right. When the JBoss Server Migration Tool is fully available it will be a good option.


Architecture

There are also architecture concerns you need to be aware of when planning your migration. Some notable ones are:

  • Clusters must be the same version of EAP (So you will need to upgrade an entire cluster at a time)
  • JGroups now uses a private interface (as opposed to public), as best practice is for cluster traffic to use a separate network interface
  • Infinispan now uses distributed ASYNC caches for its default clustered caches rather than replicated
  • The messaging directory and file structure has changed - directories are now named with reference to ActiveMQ rather than HornetQ
  • Log messages are now prefixed with the WFLY project codes

There are significant enough differences here to revisit the architecture design of your environment and verify it still fits for EAP 7.


Summary

There are a significant number of changes between JBoss EAP 6 and JBoss EAP 7, with a number of modules and components updated to cater for the updated standards - resulting in API deprecations and configuration changes.

This means the migration path may not be simple, the architecture may need to be reconsidered and the operational procedures may need to be modified. This can be eased by the use of Windup for code analysis, and the JBoss Migration Toolkit.

However there is still a lot to verify, reconfigure and test both from a code perspective and a server configuration/architecture perspective.


Useful Links

EAP 7 Supported Configurations -> https://access.redhat.com/articles/2026253
EAP 7 Included modules (requires a RedHat subscription account) -> https://access.redhat.com/articles/2158031
EAP 7 Component Details -> https://access.redhat.com/articles/112673