29 August 2014

Alternative Logging Frameworks for Application Servers: WebLogic

Welcome to the second in our blog series on using alternative logging frameworks with application servers. This entry will focus on WebLogic, specifically 12c, and configuring it to use Log4j, and SLF4J with Logback.

If you missed the first part of this series, find it here: Part 1 - GlassFish

11 August 2014

New Features and Changes in BPM 12c

Last week at the Oracle BPM 12c summer camp in Lisbon, I had a chance to deep-dive into the world of Oracle BPM Suite 12c, which went GA at the end of June. In this blog, I will discuss what I believe are the most notable changes in the BPM 12c product. Some of these also affect SOA Suite 12c, since the BPM Suite shares a number of components with the SOA Suite, including the human workflow and business rules engines, as we can see from the diagram below. Furthermore, the BPEL and BPMN service engines share fundamentally the same codebase.

This major release introduces a wide variety of changes which will affect a number of different BPM project stakeholders, including BPM architects, process analysts and BPM developers. The key new features and changes in 12c which we will discuss are:

  • The new BAM server runtime architecture
  • New developer features

New BAM Server Architecture

There have been a number of notable changes to the architecture of the BAM server and its associated components in 12c compared to 11g. In 11g, an Active Data Cache (ADC) component acted as a cache for the BAM data objects used by BAM dashboards.
In 12c, the ADC component has been replaced with Oracle Coherence, and the 11g event engine has been further developed into the Continuous Query Service (CQS). Once data objects have been updated in the persistence engine, the change events are passed to the CQS, a query engine with the ability to listen to a data stream. Every time a change occurs, the CQS determines which queries are affected by the change, and therefore which dashboards need to be updated, and pushes the information to the report caching engine, which in turn pushes the result to the relevant views displayed in the associated dashboards. In 12c, the BAM composer and viewer also now support multiple browser types, since the BAM front-end components use ADF rather than Microsoft VML, which tied these BAM web components to Internet Explorer in 11g. Further improvements to the BAM server include the following:

  • Ability to display business data in over 30 different business view types, including treemap, bubble, scatter and geo-map (preview only).
  • Due to the underlying architectural changes noted above, the BAM server now supports active-active cluster mode.
  • Finer-grained security is enabled: query-, view-, dashboard- and row-level security.
  • There are numerous preassembled BPM process analytics dashboards which come out of the box when the BPM Suite is deployed. Note that process metrics are not collected by default; you need to enable collection by setting the MBean property DisableProcessMetrics to false in the Fusion Middleware Control console for the BAM server.

New Features for Developers

There have been a number of new features introduced in BPM 12c which will aid those involved in the technical development of BPM projects and those attempting to diagnose BAM runtime issues including:

  • BPM Development Installer
  • JDeveloper Debugger Utilities
  • Detailed Diagnostics Tools for BAM

The 12c release provides users with a QuickStart installer which allows one to install BPM 12c via a simplified process. The installer contains an embedded Java DB to minimise the memory used by the BPM runtime, and also includes JDeveloper. JDeveloper now ships with an integrated debugger utility which allows one to debug BPM projects and their associated graphical process components at runtime. The standard debugger features, such as step into, step over, step out and resume, are all part of the debugger utility.
To allow BPM project stakeholders to diagnose project issues on the BAM server, 12c provides a comprehensive BAM diagnostics framework covering different parts of the BAM server, including the report cache, data control, composer and continuous query engine among others. The diagnostics level and the specific components to trace are enabled by setting the MBean properties DiagnosticEnabled, DiagnosticLevel and DiagnosticComponents to appropriate values. One can also monitor viewsets and the performance of the Continuous Query Service using the BAM composer.

In this blog, we have discussed some of the new features and changes introduced as part of BPM 12c. There are, however, many other changes in this release, including the introduction of user-friendly business rules (verbal rules), integration of Excel with the business rules editor, and the integration of some business architecture modelling features within BPM composer, among others. For further details on BPM 12c, please visit http://www.oracle.com/technetwork/middleware/bpm/documentation/documentation-154306.html and https://blogs.oracle.com/bpm/entry/oracle_bpm_12c_now_ga

1 August 2014

Securing JBoss EAP 6 - Implementing SSL

Security is one of the most important concerns when running a JBoss server in a production environment. Implementing SSL and securing communications is a must to avoid malicious use.

This blog details the steps you can take to secure JBoss EAP 6 running in Domain mode. These are probably documented by Red Hat, but the documentation seems a bit scattered; the idea behind this blog is to put everything together in one place.

In order to enhance security in JBoss EAP 6, SSL/encryption can be implemented for the following:
  • Admin console access – enable HTTPS access for the admin console
  • Domain Controller – Host Controller communication – communication between the main domain controller and all the other host controllers should be secured
  • JBoss CLI – enable SSL for the command line interface

The example below uses a single keystore acting as both keystore and truststore, and uses CA-signed certificates.

You could use self-signed certificates and/or separate keystores and truststores if required.
  1. Create the keystore (a key pair for each of the servers)
      • keytool -genkeypair -alias testServer.prd -keyalg RSA -keysize 2048 -validity 730 -keystore testServer.prd.jks
  2. Generate a certificate signing request (CSR) for the Java keystore
      • keytool -certreq -alias testServer.prd -keystore testServer.prd.jks -file testServer.prd.csr
  3. Get the CSR signed by the Certificate Authority
  4. Import the root or intermediate CA certificate into the existing Java keystore
      • keytool -import -trustcacerts -alias root -file rootCA.crt -keystore testServer.prd.jks
  5. Import the signed primary certificate into the existing Java keystore
      • keytool -importcert -keystore testServer.prd.jks -trustcacerts -alias testServer.prd -file testServer.prd.crt
  6. Repeat steps 1–5 for each of the servers.
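After the imports, it is worth verifying what actually landed in the keystore. keytool -list does this, or you can do it from Java. The sketch below uses the hypothetical file name and password from the steps above, and falls back to an empty in-memory keystore so it runs even without the file:

```java
import java.io.File;
import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class KeystoreCheck {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        File f = new File("testServer.prd.jks"); // keystore built in the steps above
        if (f.exists()) {
            try (FileInputStream in = new FileInputStream(f)) {
                ks.load(in, "xxxx".toCharArray()); // keystore password
            }
        } else {
            ks.load(null, null); // empty in-memory keystore so the sketch still runs
        }
        // After steps 4 and 5 you would expect to see the "root" CA entry
        // and the "testServer.prd" private key entry listed here.
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + (ks.isKeyEntry(alias) ? " [key]" : " [trusted cert]"));
        }
    }
}
```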

In order to establish trust between the master and slave hosts:
  1. Import the signed certificates of all the (slave) servers that the Domain Controller must trust into the Domain Controller's keystore
      • keytool -importcert -keystore testServer.prd.jks -trustcacerts -alias slaveServer.prd -file slaveServer.prd.crt
      • repeat for each slave host
  2. Import the signed certificate of the Domain Controller into each slave host's keystore
      • keytool -importcert -keystore slaveServer.prd.jks -trustcacerts -alias testServer.prd -file testServer.prd.crt
      • repeat for each slave host

This has to be done because (as per Red Hat’s documentation):

There is a problem with this methodology when trying to configure one-way SSL between the servers, because the HCs and the DC (depending on what action is being performed) switch roles (client, server). Because of this, one-way SSL configuration will not work, and it is recommended that if you need SSL between these two endpoints you configure two-way SSL.

Once this is done, we have signed certificates loaded into the Java keystore.

In JBoss EAP 6, the http-interface, which provides access to the admin console, uses the ManagementRealm by default to provide file-based authentication (mgmt-users.properties). The next step is to modify the configuration in host.xml to make the ManagementRealm use the certificates we created above.

The host.xml should be modified to look like this:

            <security-realm name="ManagementRealm">
                <server-identities>
                    <ssl protocol="TLSv1">
                        <keystore path="testServer.prd.jks" relative-to="jboss.domain.config.dir" keystore-password="xxxx" alias="testServer.prd"/>
                    </ssl>
                </server-identities>
                <authentication>
                    <truststore path="testServer.prd.jks" relative-to="jboss.domain.config.dir" keystore-password="xxxx"/>
                    <local default-user="$local"/>
                    <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/>
                </authentication>
            </security-realm>

            <management-interfaces>
                <native-interface security-realm="ManagementRealm">
                    <socket interface="management" port="${jboss.management.native.port:9999}"/>
                </native-interface>
                <http-interface security-realm="ManagementRealm">
                    <socket interface="management" secure-port="9443"/>
                </http-interface>
            </management-interfaces>

On the slave hosts, in addition to the above configuration, the following needs to be changed:

   <remote host="testServer" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>

Once you make the above changes and restart the servers, you should be able to access the admin console via https.


Finally, in order to secure CLI authentication, modify /opt/jboss/jboss-eap-6.1/bin/jboss-cli.xml on each server and add the keystore and truststore passwords to its <ssl> element:

       <key-store-password>xxxx</key-store-password>
       <trust-store-password>xxxx</trust-store-password>

30 July 2014

Alternative Logging Frameworks for Application Servers: GlassFish


Sometimes the default logger just isn't enough...
Welcome to the first instalment in what will be a four-part series on configuring application servers to use alternative logging frameworks. This entry covers GlassFish, and how to configure it to make use of Log4j, and SLF4J with Logback.

28 July 2014

Getting the most out of WLDF Part 4: The Monitoring Dashboard

Read Part 1: "What is the WLDF?" here
Read Part 2: "Watches" here
Read Part 3: "Notifications" here

This is going to be a fairly short post, because there isn’t a huge amount to go into that we haven’t already covered!

The WLDF monitoring dashboard gives a visual representation of available metrics from WebLogic MBeans. If you know how to drag-and-drop, then you have all the technical ability you need.

In this blog post, I will refer to an annotated image with colour-coded headings so you can see which part I’m talking about.

25 July 2014

MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 18

Featured News
What's New in Oracle SOA Suite 12c? - read more 
What's Happening with Java EE? - read more 

What's Happening with Java EE? Short interview with David Delabassee, see here 
Java Magazine: The Java Virtual Machine, see more on the Oracle Blog 
It's time to begin JMS 2.1! Read more here 
Java EE Concurrency API Tutorial, read the article by Francesco Marchioni 
HornetQ and ActiveMQ: Messaging - the next generation, find out more on Jaxenter.com 
Spring Boot 1.1.4 supports the first stable Tomcat 8 release, read more on Jaxenter.com 
RxJava + Java8 + Java EE 7 + Arquillian = Bliss , read the article by Alex Soto 
The 5 Best New Features of the Raspberry Pi Model B+, read more on the Life Hacker website 
Spotlight on GlassFish 4.0.1: #2 Simplifying GF distributions , read more on the Aquarium blog 
Jersey SSE Capability in GlassFish 4.0.1 read the article by Abhishek Gupta 

SOA Suite 12c is available for download , find out more on the SOA Community Blog 
‘What's New in Oracle SOA Suite 12c?’ read the blog post by Andrew Pielage here  
‘What's New in Oracle SOA Suite 12c?’ Register for the C2B2 launch event in London on the 12th of September 
Oracle urges users to apply 113 patches pronto, read more on Jaxenter.com 
Docker, Java EE 7, and Maven with WebLogic 12.1.3, read the article by Bruno Borges
'Testing Java EE Applications on WebLogic Using Arquillian' with Reza Rahman, join the Oracle Webcast on the 29th of July

Red Hat JBoss Data Grid 6.3 is now available! , read more on Arun Gupta’s Blog 
JBoss-Docker shipping continues with launch of microsite, read more on Jaxenter.com 
Your tests assume that JBoss is up and running, read the article by Antonio Goncalves  
Rule the World - Practical Rules & BPM Development, join the London JBUG event on the 7th of August, find out more and register here 
Red Hat JBoss BRMS & JBoss BPM Suite 6.0.2.GA released into the wild, read more on Eric Schabell’s blog  
Hibernate Hidden Gem: The Pooled-Lo Optimizer, read the article by Vlad Mihalcea 
Camel on JBoss EAP with Custom Modules, read the article by Christian Posta 
Red Hat JBoss Fuse - Getting Started, Home Loan Demo Part , read the article by Christina Lin 

Processing on the Grid, read the article by Steve Millidge
James Governor In-Memory Data Grid: Less Disruptive NoSQL, see more on the Hazelcast Blog 
Designing a Data Architecture to Support both Fast and Big Data, read more on  Dzone 
Scaling Big Data fabrics, read the article by Mike Bushong 
Industry Analyst Insight on How Big Data is Bigger Than Data, read more on the Pivotal blog 

18 July 2014

Processing on the Grid

If you ever have the luxury of designing a brand new Java application, there are many new, exciting and unfamiliar technologies to choose from: all the flavours of NoSQL stores; Data Grids; PaaS and IaaS; Java EE 7; REST; WebSockets. An alphabet soup of opportunity, combined with the many programming frameworks on both the server side and the client side, adds up to a tyranny of choice.

However, if like me, you have to architect large scale, server-side, Java applications that support many thousands of users then there are a number of requirements that remain constant. The application you design must be high-performance, highly available, scalable and reliable. 

It doesn’t matter how fancy your lovingly crafted JavaScript Web 2.0 user interface is: if it is slow, or simply not available, nobody is going to use it. In this article I will try to demystify one of your choices, the Java Data Grid, and show how this technology can meet those constant non-functional requirements while taking advantage of the latest trends in hardware.

Latency: The performance killer

When building large-scale Java applications, the most likely cause of performance problems is latency. Latency is the time delay between requesting an operation, like retrieving some data to process, and the operation occurring. Typical causes of latency in a distributed Java application are:

• IO latency pulling data from disk
• IO latency pulling data across the network
• Resource contention for example a distributed lock
• Garbage Collection pauses

For example, typical ping times across a network range from 57 μs on a local machine, through 300 μs on a local LAN segment, up to 100 ms from London to New York. When these ping times are combined with typical network data transfer rates (25–30 MB/s for 1 Gb Ethernet; 250–350 MB/s for 10 Gb Ethernet), a careful trade-off between operation frequency and data granularity must be made to achieve acceptable performance. I.e. if you have 100 MB of data to process, the decision between making 100 calls across the network, each retrieving 1 MB, or 1 call retrieving the full 100 MB, will depend on the network topology. Network latency is normally the cause of the developer cry, “It was fast on my machine!” Latency due to disk IO is also a problem: a typical SSD combined with a SATA 3.0 interface can only deliver a sustained data rate of 500–600 MB/s, so if you have gigabytes of data to process, disk latency will impact your application performance.
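This trade-off is easy to sketch in code. The Java below is a back-of-the-envelope model only: the latency and bandwidth figures are the illustrative numbers quoted above, and the cost model ignores TCP slow start, contention and serialization:

```java
public class TransferCost {

    // Estimated wall-clock seconds to move totalBytes in 'calls' round trips:
    // each call pays the round-trip latency once, and the payload then moves
    // at the link's sustained bandwidth.
    public static double seconds(int calls, double totalBytes,
                                 double latencySec, double bytesPerSec) {
        return calls * latencySec + totalBytes / bytesPerSec;
    }

    public static void main(String[] args) {
        double mb = 1024 * 1024;
        double lan = 300e-6;      // 300 µs ping on a local LAN segment
        double wan = 100e-3;      // 100 ms ping, London to New York
        double gigE = 30 * mb;    // ~30 MB/s sustained on 1 Gb Ethernet

        // 100 MB fetched as 100 x 1 MB calls versus one 100 MB call
        System.out.printf("LAN: %.2fs vs %.2fs%n",
                seconds(100, 100 * mb, lan, gigE), seconds(1, 100 * mb, lan, gigE));
        System.out.printf("WAN: %.2fs vs %.2fs%n",
                seconds(100, 100 * mb, wan, gigE), seconds(1, 100 * mb, wan, gigE));
    }
}
```

On the LAN the two strategies come out within a few hundredths of a second of each other, but over the WAN the 100 small calls pay roughly 10 extra seconds of pure latency, which is why call granularity matters so much more across high-latency links.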

The hardware component with the lowest latency is memory: typical main memory bandwidth, ignoring cache hits, is around 3–5 GB/s per processor and scales with the number of CPUs, so with 2 processors you can get 10 GB/s, with 4 CPUs 20 GB/s, and so on. John McCalpin at the University of Virginia maintains a memory benchmark called STREAM (http://www.cs.virginia.edu/stream/) which measures the memory throughput of many computers, with some achieving TB/s with large numbers of CPUs. In conclusion:

Memory is FAST: therefore, for high performance, you should process data in memory.
Network is SLOW: therefore, for high performance, minimise network data transfer.

The question then becomes: is it feasible to process many gigabytes of data in memory? With the cost of memory dropping, it is now possible to buy single servers with 1 TB of memory for only £30K–£40K, and the latest SPARC servers ship with support for up to 32 TB of RAM, so Big Memory is here. The other fundamental shift in hardware at the moment is that the processing power of single hardware threads is starting to reach a plateau, with manufacturers instead providing CPUs with many cores and many hardware threads. This trend forces us to design our Java applications in a fashion that can utilise the large number of hardware threads appearing in modern chips.
Parallel is the Future: For maximum performance and scalability you must support many hardware threads.

Data Grids

You may wonder what all this has to do with Java Data Grids. Well, Java Data Grids are designed to take advantage of these facts of modern computing, enabling you to store many hundreds of GB of Java objects in memory and process this data in parallel for high performance.

A Java Data Grid is essentially a distributed key-value store where the key space is split across a cluster of JVMs. Each Java object stored within the grid has a primary copy on one of the JVMs and a secondary copy on a different JVM. These duplicates ensure high availability: if a single JVM in the grid fails, no Java objects are lost.
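The primary/backup placement can be pictured with a toy partitioning scheme. Real grid products use consistent hashing and partition tables rather than a bare modulo, so the Java below is purely an illustrative sketch:

```java
public class ToyPartitioner {

    // The JVM that owns the primary copy of a key
    public static int primaryNode(Object key, int nodeCount) {
        return Math.floorMod(key.hashCode(), nodeCount);
    }

    // The backup lives on the next node round-robin, so losing
    // any single JVM can never destroy both copies of an object.
    public static int backupNode(Object key, int nodeCount) {
        return (primaryNode(key, nodeCount) + 1) % nodeCount;
    }

    public static void main(String[] args) {
        int nodes = 20;
        for (String key : new String[] {"hotelA", "hotelB", "hotelC"}) {
            System.out.println(key + " -> primary " + primaryNode(key, nodes)
                    + ", backup " + backupNode(key, nodes));
        }
    }
}
```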

The key benefits of the partitioned key space in a Data Grid, compared to a fully replicated clustered cache, are that the more JVMs you add, the more data you can store, and that access times for individual keys are independent of the number of JVMs in the grid.

For example, if we have 20 JVM nodes in our grid, each with 4 GB of free heap available for the storage of objects, then, taking duplicates into account, we can store 40 GB of Java objects. If we add a further 20 JVM nodes, we can store 80 GB. Access times for reading/writing objects are constant, as the grid goes directly to the JVM which owns the primary key space for the object we require.
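The capacity arithmetic generalises neatly. A small sketch, assuming one backup copy per object and an even spread of data across the JVMs (the backup count is usually configurable in real grid products):

```java
public class GridCapacity {

    // Usable storage: every object occupies (1 + backups) copies
    // spread across the grid, so raw heap is divided accordingly.
    public static double usableGb(int jvms, double heapPerJvmGb, int backups) {
        return jvms * heapPerJvmGb / (1 + backups);
    }

    public static void main(String[] args) {
        System.out.println(usableGb(20, 4, 1)); // the 40 GB example from the text
        System.out.println(usableGb(40, 4, 1)); // doubling the nodes doubles capacity
    }
}
```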
JSR 107 defines a standards-based API for data grids which is very similar to the java.util.Map API, as shown in Listing 1. Many Data Grids also make use of Java NIO to store Java objects “off heap” in Java NIO buffers. This has the advantage that we can increase the memory available for storage without increasing the latency from garbage collection pause times.
Listing 1
public static void main(String[] args) {
    CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
    MutableConfiguration<String, String> config = new MutableConfiguration<String, String>();
    Cache<String, String> cache = cacheManager.createCache("C2B2", config);
    cache.put("Key", "Value");
}

Parallel processing on the Grid

The problem arises when we store many tens of GB of Java objects across the Grid in many JVMs and then want to run some processing across the data set. For example, we may store objects representing hotels and their availability on given dates. What happens when we want to run a query like “find all the hotels in Paris with availability on Valentine’s Day 2015”? If we follow the simple Map API approach, we would need to run code like that shown in Listing 2.

However, the problem with this approach when accessing a Data Grid is that the objects are distributed according to their keys across a large number of JVMs, and every “get” call needs to serialize the object over the network to the requesting JVM. Using the listing above, this could pull tens of GB of data over the network, which, as we saw earlier, is slow.

Thankfully, most Java Data Grid products allow you to turn the processing on its head: instead of pulling the data over to the code, they send the code to each of the grid JVMs hosting the data and execute it in parallel in the local JVMs. As the code is typically very small, only a few KB of data needs to be sent across the network.

Processing is run in parallel across all the JVMs, making use of all the CPU cores. Example code which runs the Paris query across the grid, for Oracle Coherence, a popular Data Grid product, is shown in Listings 3 and 4.

Listing 3 shows the code for a Coherence EntryProcessor which is the code that will be serialized across all the nodes in the data grid.

This EntryProcessor will check each hotel, as before, to see if there is availability for Valentine’s Day, but unlike Listing 2 it will do so in each JVM on local in-memory data. JSR 107 also has the concept of an EntryProcessor, so the approach is common to all Data Grid products.

Listing 4 shows the Oracle Coherence code needed to send this processor across the Data Grid to execute in parallel in all the grid JVMs. Processing data using EntryProcessors, as shown in Listings 3 and 4, will result in much greater performance on a Data Grid than access via the simple cache API, as only a small amount of data is sent across the network and all CPU cores across all the JVMs are used to process the search.

Fast Data: Parallel processing on the Grid

As we’ve seen, using a Data Grid in your next application will enable you to store large volumes of Java objects in memory for high-performance access in a highly available fashion. It will also give you large-scale parallel processing capabilities that utilise all the CPU cores in the Grid to crunch through Java objects in parallel. Take a look at Data Grids next time you have a latency problem or you have the luxury of designing a brand new Java application.

Listing 2
public static void main(String[] args) {
    CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
    Cache<String, Hotel> hotelCache = cacheManager.getCache("ParisHotels");
    Date valentinesDay = new Date(2015, 2, 14); // I know it is deprecated
    for (String hotelName : hotelNames) {
        Hotel hotel = hotelCache.get(hotelName);
        if (hotel.isAvailable(valentinesDay)) {
            System.out.println("Hotel is available: " + hotel);
        }
    }
}
Listing 3
public class HotelSearch extends AbstractProcessor {

    private final Date availability;

    public HotelSearch(Date availability) {
        this.availability = availability;
    }

    // Invoked on each entry, locally in the grid JVM that owns it
    public Object process(InvocableMap.Entry entry) {
        Hotel hotel = (Hotel) entry.getValue();
        return hotel.isAvailable(this.availability) ? hotel : null;
    }
}

Listing 4
public static void main(String[] args) {
    NamedCache hotelCache = CacheFactory.getCache("ParisHotels");
    Date valentinesDay = new Date(2015, 2, 14); // I know it is deprecated
    Map results = hotelCache.invokeAll((Filter) null, new HotelSearch(valentinesDay));
}

This article was originally published in JAX Magazine, Issue #35, January 2014.