5 May 2016

Oracle WebLogic Work Managers - A Practical Overview

by Andy Overton

In this post, Andy Overton presents an insight into Oracle WebLogic Work Managers, going through the basics of what they are, how they are used, and providing deeper configuration and deployment advice. Using a test project, he examines the practical application of work managers, and looks at the control you can get over request handling and prioritisation.

How to configure and test WebLogic Work Managers


So, first of all, what are Work Managers?

Prior to WebLogic 9, Execute Queues were used to handle thread management. You created thread pools to determine how workload was handled, and different types of work were executed in different queues based on priority and ordering requirements. The problem was that it is very difficult to determine the correct number of threads required to achieve the throughput your application needs while avoiding deadlocks.

Work Managers are much simpler. All Work Managers share a common thread pool, and priority is determined by a priority-based queue. The thread pool size is adjusted dynamically in order to maximise throughput and avoid deadlocks. To differentiate and prioritise between applications, you state objectives via constraints and request classes (e.g. fair share or response time).

More on this later!

Why use Work Managers?

If you don’t set up your own Work Managers, the default will be used. This gives all of your applications the same priority and they are prevented from monopolising threads. Whilst this is often sufficient, it may be that you want to ensure that:

  • Certain applications have higher priority over others.
  • Certain applications return a response within a certain time.
  • Certain customers or users get a better quality of service.
  • A minimum thread constraint is set in order to avoid deadlock.

Types of Work Manager

  • Default – Used if no other Work Manager is configured. All applications are given an equal priority.
  • Global – Domain-scoped; defined in config.xml. Applications use the global Work Manager as a blueprint and create their own instance, so the work each application does can be distinguished from that of other applications.
  • Application – Application-scoped; applied only to a specific application. Specified in either weblogic-application.xml, weblogic-ejb-jar.xml, or weblogic.xml.
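To give a flavour of the application-scoped variety, here is a minimal weblogic.xml sketch (the Work Manager name and constraint values are made up for illustration, not taken from the test project below):

```xml
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- An application-scoped Work Manager, visible only to this web app -->
    <work-manager>
        <name>AppScopedWorkManager</name>
        <max-threads-constraint>
            <name>AppMaxThreads</name>
            <count>10</count>
        </max-threads-constraint>
    </work-manager>
</weblogic-web-app>
```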

Constraints and Request Classes

A constraint defines the minimum and maximum number of threads allocated to execute requests and the total number of requests that can be queued or executing before the server begins rejecting requests. Constraints can be shared by several Work Managers.

Request classes define how requests are prioritised and how threads are allocated to requests. They can be used to ensure that high priority applications are scheduled before low priority ones, requests complete within a given response time or certain users are given priority over others. Each Work Manager may specify one request class.

Types of Constraint

  • Max threads – Default, unlimited.
    The maximum number of threads that can concurrently execute requests. Can be set based on the availability of a resource the request depends on, e.g. a connection pool.
  • Min threads – Default, zero.
    The minimum number of threads to allocate to requests. Useful for preventing deadlocks.
  • Capacity – Default, -1 (never reject requests).
    The capacity (including queued and executing requests) at which the server starts rejecting requests.

Types of Request Class

  • Fair Share – Defines the average thread-use time. Specified as a relative value, not a percentage.
  • Response Time – Defines the requested response time (in milliseconds).
  • Context – Allows you to specify request classes based on contextual information such as the user or user group.

Initial Setup

For this blog the following versions of software were used:

  • Ubuntu 14
  • JDK 1.8.0_73
  • WebLogic Server 12.2.1
  • JMeter 2.13
  • NetBeans 8.1

So, first of all, install WebLogic and set up a very basic domain (test_domain) with just an Admin server.

Register the server with IDE:
  1. Open the Services window
  2. Right-click the Servers node and choose 'Add Server'
  3. Select Oracle WebLogic Server and click 'Next'
  4. Click 'Browse' and locate the directory that contains the installation of the server, then click 'Next'. The IDE will automatically identify the domain for the server instance.
  5. Type the username and password for the domain.

Creating the test project

Select New Project: Java EE - Enterprise Application
Name: WorkManagerTest
Server: Oracle WebLogic Server

Under WorkManagerTest-war, right click 'Web Pages' and select 'New JSP'.
File Name: test.jsp

Change the body to:

    <h1>Work manager test OK</h1>
    <% Thread.sleep(1000); /* sleep for 1 second */ %>

Right click on WorkManagerTest-war, select 'Deploy' and then go to: http://localhost:7001/WorkManagerTest-war/test.jsp where you should see your page displayed.

Now create another application, this time called WorkManagerTest-2. 
This will be identical to the first, but name the JSP test-2.jsp and change the body to:

    <h1>Work manager test 2 OK</h1>
    <% Thread.sleep(1000); /* sleep for 1 second */ %>

Go to the WebLogic console: http://localhost:7001/console

Go to Deployments, click on WorkManagerTest-war, then select Monitoring > Workload. Here you can see the Work Managers, constraints, and request classes associated with your application. As we haven't yet set anything up, the app is currently using the default Work Manager.

Creating the JMeter test

Right click on Test Plan and select Add Threads (Users) > Thread Group

Name: Work Manager Test
Number of Threads (users): 10
Ramp-Up Period (in seconds): 10

This will start one user per second over 10 seconds, each calling your application once.

Right click on your new Thread Group and select: Add > Sampler > HTTP Request

Name: test.jsp
Server Name: localhost
Port: 7001
Path: WorkManagerTest-war/test.jsp

Right click on your Thread Group and add two listeners:
Add > Listener > View Results in Table
Add > Listener > View Results Tree

Create another HTTP request as follows:
Name: test-2.jsp
Server Name: localhost
Port: 7001
Path: WorkManagerTest-2-war/test-2.jsp


Save your test plan and then run it.

Click on results tree and table. With tree, you can view request and response data; obviously not very interesting in our case, but handy if you want to see what's being returned from an app. More useful is View Results in Table. This is very handy for quickly seeing response times. You should see that each of your JSPs/applications was called 10 times and each time it took just over a second to return a response.

Creating the work managers

In the WebLogic admin console:

  • Environment: Work Managers
  • New: Work Manager
  • Name: WorkManager1
  • Target: AdminServer

Create another in the same way, but name it 'WorkManager2'.

Using Fair Share request classes

In the WebLogic admin console

  • Environment: Work Managers
  • New: Fair Share Request Class
  • Name: FairShareReqClass-80
  • Fair Share: 80
  • Target: AdminServer

Create another with the name FairShareReqClass-20 and a Fair Share of 20.

Now we need to associate the request classes with the Work Managers.

  • Select WorkManager1, under Request Class select FairShareReqClass-80 and save.
  • Select WorkManager2, under Request Class select FairShareReqClass-20 and save.

For the changes to take effect you will need to restart the server.
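Because global Work Managers are domain-scoped, the console steps above end up in config.xml. The resulting entries should look roughly like this (a sketch of the expected structure, not a verbatim copy of the generated file):

```xml
<self-tuning>
    <fair-share-request-class>
        <name>FairShareReqClass-80</name>
        <target>AdminServer</target>
        <fair-share>80</fair-share>
    </fair-share-request-class>
    <work-manager>
        <name>WorkManager1</name>
        <target>AdminServer</target>
        <fair-share-request-class>FairShareReqClass-80</fair-share-request-class>
    </work-manager>
</self-tuning>
```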

Alter web.xml in both of the applications. This can be found under WEB-INF.
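One way to reference a Work Manager from web.xml is via the wl-dispatch-policy init parameter. A sketch for the first application (the servlet name here is hypothetical; for WorkManagerTest-2 the param-value would be WorkManager2 and the paths test-2.jsp):

```xml
<servlet>
    <servlet-name>TestJsp</servlet-name>
    <jsp-file>/test.jsp</jsp-file>
    <!-- Associate this servlet with the global Work Manager -->
    <init-param>
        <param-name>wl-dispatch-policy</param-name>
        <param-value>WorkManager1</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>TestJsp</servlet-name>
    <url-pattern>/test.jsp</url-pattern>
</servlet-mapping>
```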






Now, when you run the JMeter test again, you should see results similar to the following:

What we are seeing is that test.jsp is using the Work Manager with its Fair Share request class set to 80, whereas test-2.jsp is using the one set to 20.

There is an 80% (80/100) chance that the next free thread will perform work for test.jsp, and a 20% (20/100) chance that it will next service test-2.jsp.

As mentioned previously, the values used aren’t a percentage, although in our case they happen to add up to 100.

If you were to add a third JSP, also using a Fair Share request class set to 20, the figures would change: JSP 1 would have a 66.6% chance (80/120), and JSPs 2 and 3 would each have a 16.6% chance (20/120).
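The arithmetic above is easy to check with a few lines of Python (purely illustrative; WebLogic does this internally):

```python
# Fair share values are relative weights, not percentages.
# The chance that the next free thread services a given app is
# its share divided by the sum of all shares.
def fair_share_odds(shares):
    total = sum(shares.values())
    return {app: share / total for app, share in shares.items()}

# Two apps, as in the example above:
print(fair_share_odds({"jsp1": 80, "jsp2": 20}))
# jsp1 gets 80/100 = 0.8, jsp2 gets 20/100 = 0.2

# Adding a third app with a share of 20 changes the odds:
print(fair_share_odds({"jsp1": 80, "jsp2": 20, "jsp3": 20}))
# jsp1: 80/120 (about 0.667); jsp2 and jsp3: 20/120 (about 0.167) each
```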

Using Response Time request classes

Next we will take a look at using response time request classes. There is no need to alter either JSP - we will just create new request classes and set our work managers to use those.

In the WebLogic console, go to Environment – Work Managers

  • Select New: Response Time Request Class
  • Name: ResponseTime-1second
  • Goal: 1000
  • Target: AdminServer

Create another but with the following values:

  • Name: ResponseTime-5seconds
  • Goal: 5000
  • Target: AdminServer

Finally, alter the two work managers:

Alter WorkManager1 to use the ResponseTime-1second response class and WorkManager2 to use the ResponseTime-5seconds response class.
Then restart the server.

Now, alter your JMeter test so that it loops forever.

Run it again and you should see that, to begin with, it takes a little while for the Work Managers to take effect. After a while, however, you should see the responses to both apps start to even out, taking around a second each.

This is described in the Oracle documentation: “Response time goals are not applied to individual requests. Instead, WebLogic Server computes a tolerable waiting time for requests with that class by subtracting the observed average thread use time from the response time goal, and schedules requests so that the average wait for requests with the class is proportional to its tolerable waiting time.”
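Plugging our test's numbers into that description gives a feel for what the scheduler is doing. A rough Python sketch (the 1000 ms thread-use figure comes from the Thread.sleep in our JSPs; the real computation is internal to WebLogic):

```python
def tolerable_wait_ms(response_time_goal_ms, observed_thread_use_ms):
    # WebLogic derives a tolerable waiting time by subtracting the
    # observed average thread use from the response time goal;
    # average waits are then scheduled in proportion to it.
    return max(response_time_goal_ms - observed_thread_use_ms, 0)

# Each JSP holds a thread for roughly 1000 ms:
print(tolerable_wait_ms(1000, 1000))  # ResponseTime-1second: 0 ms to spare
print(tolerable_wait_ms(5000, 1000))  # ResponseTime-5seconds: 4000 ms to spare
```

So requests under the 1-second goal are scheduled with almost no tolerated wait, while the 5-second goal tolerates a much longer queue time.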

Context request classes

Context request classes are compound request classes that provide a mapping between the request context and a request class. This is based upon the current user or the current user’s group.

So it’s possible to specify different request classes for the same servlet invocation depending on the user or group associated with the invocation.

I won’t create one as a part of this blog as they simply utilise the other request class types.

Using constraints

Constraints define the minimum and maximum number of threads allocated to execute requests and the total number of requests that can be queued or executing before WebLogic Server begins rejecting requests.

As they can cause requests to be queued or, even worse, rejected, they should be used with caution. A typical use case of the maximum threads constraint is to match it to the size of a data source connection pool; that way you don't attempt to handle a request that requires a database connection when none can be obtained.

There are 3 types of constraint:

  • Minimum threads
  • Maximum threads
  • Capacity

The minimum threads constraint ensures that the server will always allocate this number of threads, the maximum threads constraint defines the maximum number of concurrent requests allowed, and the capacity constraint causes the server to reject requests once that capacity (queued plus executing) is reached.

To see how this works in action, let’s create some constraints. 

Under Work Managers in the WebLogic console create the following, all targeted to the AdminServer:

New Max Threads Constraint:

  • Name - MaxThreadsConstraint-3
  • Count – 3

New Capacity Constraint:

  • Name - CapacityConstraint-10
  • Count – 10

Next, create a new Work Manager called ConstraintWorkManager, add the two constraints to it and then restart WebLogic.

Now, alter the Test1 application and change the Work Manager in web.xml from WorkManager1 to ConstraintWorkManager. Also, alter the sleep time from 1 second to 5 and then re-deploy your application.

Next, create a new JMeter test with the following parameters:

  • Number of Threads (users) – 10
  • Ramp-Up Period – 0

Run this test and you should see results similar to the following:

So, what’s happening here?

(Remember, we set the maximum threads to 3.) We send in 10 concurrent requests; 3 of those begin to be processed immediately, whilst the others are put in a queue. So, we get the following:

At the start:

After 5 seconds:

After 10 seconds:

After 15 seconds:
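Those snapshots can be reproduced with a short, idealised sketch in Python (it ignores WebLogic's real scheduling details, but shows the shape of the numbers - note the 10th request only finishes at 20 seconds):

```python
def completion_times(n_requests, max_threads, service_secs):
    # With a max-threads constraint, requests are served in batches
    # of at most max_threads; each batch takes service_secs to finish,
    # and queued requests wait for an earlier batch to complete.
    return [((i // max_threads) + 1) * service_secs for i in range(n_requests)]

times = completion_times(10, 3, 5)
print(times)  # [5, 5, 5, 10, 10, 10, 15, 15, 15, 20]
```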

Next, change the JMeter test. Raise the number of users to 13 and run the test again. This time you will see that 3 of the requests fail. This is due to the Capacity Constraint being set to 10. This means that only 10 requests can be either processing or queued and the others are rejected.

If you call the application from your browser whilst the test is running you will see that you receive a 503--Service Unavailable error (this can be replaced with your own error page).

Take care when setting up thread constraints - you don’t want to be limiting what your server can process without good reason and you certainly don’t want to be rejecting requests without very good reason.


Hopefully, this overview of WebLogic Work Managers has given you an insight into what they are used for and how you can go about setting them up.

WebLogic does a good job of request handling itself out of the box but sometimes you will find that you need more control over which applications should take priority or what should happen in times of heavy load. 

In that case, Work Managers can prove very useful, although as with all such things – make sure you are certain of what you are trying to achieve, then test, test some more and then test again!

Knowing how you want your server to run and being sure how it is running are two very different things. Ensure you test for all potential loads and understand what will happen in all cases.

More popular WebLogic posts from our technical blog...

Installing WebLogic with Chef
Alan Fryer shows you how to create a simple WebLogic Cluster on a virtual machine with two managed servers using Chef.

Basic clustering with WebLogic 12c and Apache Web Server
Mike Croft demonstrates WebLogic’s clustering capabilities and shows you how to utilise the WebLogic Apache plugin to use the Apache Web Server as a proxy to forward requests to the cluster.

Alternative Logging Frameworks for Application Servers: WebLogic
Andrew Pielage focuses on WebLogic, specifically 12c, and configuring it to use Log4j and SLF4J with Logback.

WebLogic 12c Does WebSockets - Getting Started
In this post, Steve demonstrates how to write a simple WebSockets echo example using 12.1.2.

Weblogic - Dynamic Clustering in practice
In this blog post Andy looks at setting up a dynamic cluster on 2 machines with 4 managed servers (2 on each). He then deploys an application to the cluster and shows how to expand the cluster.

Getting the most out of WLDF Part 1: What is the WLDF?
The WebLogic Diagnostic Framework (WLDF) is an often overlooked feature of WebLogic which can be very powerful when configured properly. In this blog series, Mike Croft points out some of the low-hanging fruit so you can get to know enough of the basics to make use of some of the features, while having enough knowledge of the framework to take things further yourself.

23 March 2016

Using X-Forwarded Proto to troubleshoot GlassFish and Apache Protocols

by Claudio Salinitro

C2B2 consultant, Claudio Salinitro looks at a GlassFish configuration solution implemented for a client suffering from HTTP timeout errors. Using X-Forwarded Proto to troubleshoot the protocols being used by the Apache web server and the GlassFish application server.

GlassFish Troubleshooting - C2B2

The Case

One of our clients recently got in touch with me after changing the Apache proxy connector to use HTTP instead of the AJP protocol, and found that they were experiencing repeated HTTP request timeout errors. Tracking the requests made by the browser, it was evident that these requests were trying to connect to the application over HTTP instead of HTTPS - and the HTTP port was not open on the firewall.


To understand the issue resolution, it is important to understand the underlying architecture…

Removing all the components not relevant to understanding the issue, the architecture was basically composed of a Firewall as entry point for the clients with SSL termination, an Apache Web server acting as a reverse proxy, and a GlassFish application server.

GlassFish Architecture - C2B2

The problem

When an application on the server side has to build a URI, unless instructed otherwise it will use the same protocol as the application server (GlassFish). In this case, the protocol used by the client (HTTPS) is different from the one used by GlassFish (HTTP), which explains why the redirects sent by GlassFish were built with the wrong scheme.

Replicating the environment

The safest way to understand the problem and find a solution was to replicate it on a local environment where I could 'play around' and try different solutions.

For this purpose, I used a virtual machine with HAProxy as a substitute for the firewall, and a virtual machine with Apache Web server and GlassFish.

Step 1: Create the HTTPS certificate
I configured HAProxy with a self-generated certificate to use HTTPS:

openssl req -new -x509 -days 1460 -keyout server.key -out server.crt -nodes
cat server.crt server.key > serverHA.pem

Note: HAProxy requires the public and private key to be in a single PEM file. For this reason, I merged the two keys in a single serverHA.pem file.

Step 2: HAProxy configuration
I added the following to the haproxy.cfg configuration file:

frontend localhost
    bind *:443 ssl crt /apps/httpd/conf/serverHA.pem
    mode http
    default_backend nodes

backend nodes
    mode http
    server web01

I bind to port 443, presenting the self-generated certificate, and proxy all requests to Apache, which is listening on port 80.

Step 3: Apache configuration

I added the following to the httpd.conf configuration file:

ProxyPass /clusterjsp
ProxyPassReverse /clusterjsp

Here I reverse proxy all requests starting with /clusterjsp (our test web application) to GlassFish (on the same node, listening on port 8080).
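Assuming GlassFish on localhost port 8080 as described, the full form of those directives would look something like this (the target URL is inferred from the description above, not copied from the original configuration):

```apache
ProxyPass /clusterjsp http://localhost:8080/clusterjsp
ProxyPassReverse /clusterjsp http://localhost:8080/clusterjsp
```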

Step 4: Test application
I used a test web application (clusterjsp) and added two JSPs. The first, index.jsp, is the landing page and contains only a link to a second page, redirectBack.jsp, which redirects back to the index using HttpServletResponse.sendRedirect to build the URI.


index.jsp:

    <html>
    <head><title>Test page</title></head>
    <body>
        <a href='redirectBack.jsp'>redirect</a>
    </body>
    </html>

redirectBack.jsp:

    <% response.sendRedirect("index.jsp"); %>

I deployed the war file on a GlassFish instance listening in http on port 8080.

Going to the https://mydomain/clusterjsp/index.jsp page and clicking the 'redirect' link, I experienced exactly the same behaviour as our client:


To resolve the issue, we have to tell GlassFish which protocol is used externally on the firewall - the de facto standard being the HTTP header X-Forwarded-Proto.

For this purpose, we need to:

1. Set the header in Apache (using mod_headers)
2. Tell GlassFish which header is bringing the scheme information.

C2B2 - Delivering GlassFish Solutions

Step 1: Apache configuration
I modified the httpd.conf as below:

RequestHeader set X-Forwarded-Proto "https"
ProxyPreserveHost On

ProxyPass /clusterjsp
ProxyPassReverse /clusterjsp

The first directive adds the HTTP header X-Forwarded-Proto with the value “https”.

The second directive will pass the Host: line from the incoming request to the proxied host, instead of the hostname specified in the ProxyPass line.

Step 2: GlassFish configuration
Using asadmin we set the scheme mapping for the http connector of the GlassFish instance serving the web application:

asadmin set server.network-config.protocols.protocol.http-listener-1.http.scheme-mapping=X-Forwarded-Proto

The same setting can be applied using the GlassFish admin web interface in the settings for the HTTP protocol of the connector.

Testing once again using our test web application, I can see that I now have the correct behaviour!

24 February 2016

Using the Vagrant-Env Plugin for AWS Collaboration

by Mike Croft

I've been gradually integrating Vagrant into my workflow for a while now. I love how it gives me the chance to try something totally new out in a completely separated environment that I can then just bin if I get it all wrong - and I know that nothing in my host system has been contaminated. Docker can achieve basically the same thing, but Vagrant fits my workflow very well.
Vagrant is quite extensible and has plugins for VMWare, Microsoft Azure and Amazon Web Services as well as the default VirtualBox, so the same provisioning script can be used among your development team as well as in production in the cloud or on your self-hosted VMWare platforms. The only thing you'll need to keep the same is the OS that you want to provision - configured by Vagrant boxes.

Switching from VirtualBox to AWS

I've recently needed to use the AWS plugin for a talk for the West Midlands JUG in a demo, and this presented me with a problem. The example in the README of the Vagrant plugin looks like this:
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "YOUR KEY"
    aws.secret_access_key = "YOUR SECRET KEY"
    aws.session_token = "SESSION TOKEN"
    aws.keypair_name = "KEYPAIR NAME"

    aws.ami = "ami-7747d01e"

    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "PATH TO YOUR PRIVATE KEY"
  end
end
For anyone reading this who is not familiar with AWS, the two important bits to note are aws.access_key_id and aws.secret_access_key. If the word "secret" wasn't enough to tell you it shouldn't be shared freely on GitHub, the reason you shouldn't be spreading it around is that that ID/key pair gives anyone who has it access to your Amazon account. The keys can be revoked very easily, but it's absolutely not the sort of security breach you want.
So now, the Vagrantfile which previously enabled us to share specific configurations among our teams and the community can no longer be shared - which is certainly a problem, when that is precisely the reason you want to use it!

Enter the Vagrant-Env Plugin

Ideally, what I wanted to do was to be able to use placeholder variables that I could store in a separate file added to my .gitignore file. Then, I could just reference these variables and be confident that they wouldn't be uploaded to a public Github repository.
What I found was the vagrant-env plugin, which does exactly what I wanted:
Vagrant.configure("2") do |config|
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
  end
end
After making sure the plugin was installed:
$ vagrant plugin install vagrant-env
I added the actual values in a file called .env and then added that to my .gitignore. The README for the plugin does say that you need to specifically enable the plugin with config.env.enable in the Vagrantfile, but I left that out and found that it still worked fine.
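For illustration, the .env file is just plain KEY=value pairs read by the plugin; the values below are obvious placeholders, not real credentials:

```shell
# .env - listed in .gitignore so it never reaches GitHub
AWS_ACCESS_KEY=YOUR-ACCESS-KEY-ID
AWS_SECRET_KEY=YOUR-SECRET-ACCESS-KEY
```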
I haven't used Microsoft Azure yet, but I would expect a similar use case to require the vagrant-env plugin there too. In any case, the plugin is incredibly versatile, despite how simple it is.

12 February 2016

JBoss Logging and Best Practice for EAP 6.4 (Part 1 of 3)

By Brian Randell

If you’ve ever treated logging as an afterthought, or only given it serious consideration when the actual troubleshooting begins, this three-part guide will give you everything you need to implement a JBoss EAP 6.4 production logging configuration from the word go. Written from an administrator point of view, I’ll take you step-by-step through the best practices for your production environment and make troubleshooting a far easier proposition!

Logging is crucial to your environment; it can assist you greatly in understanding your system, helps detail items that are either a cause for concern now or might be in the future, and is a perfect tool for root cause analysis if fatal errors occur. Because of this you need log files that are readable, clean, and show what is useful to see. I have seen lots of JBoss log files spewing out so many errors with full stack traces, and thousands upon thousands of INFO messages, that finding the actual nub of the problem takes a lot of time - if you can decipher the log at all!

So, here are some questions to ask about logging:
  • What do we get out of the box in EAP 6.4?
  • How can we configure it?
  • What do we need in our Production environments?
  • What do we need to ask of our developers?

A lot of the decisions you make here will be specific to your environment. For example, how critical the applications are, your monitoring configuration, and the ease of troubleshooting are key to how you want your logging to be configured. These are decisions only you can make about the environment you administer and support.

For this article, I will be looking at JBoss EAP 6.4.0 running on CentOS 7.1.1503 and as such, this post will take a Linux slant. The reason I am using 6.4.x is that there are a few enhancements to the logging introduced in this version that I wanted to include.

Note: When I use $JBOSS_HOME I mean the directory in which JBoss is installed.

Note: When I use $JBOSS_LOG_DIR I mean the directory in which the logs are being stored. For Standalone this is usually in $JBOSS_HOME/standalone/logs. For Domain mode this will be in $JBOSS_HOME/domain/logs for the process controller and host logs, and $JBOSS_HOME/domain/server/<server>/logs for the server log.

Out of the box

As of JBoss EAP 6.4.0 the following logging is set by default:

GC Log

For a standalone server GC logging is enabled and is defined in:


The following options are given to the JVM:

-verbose:gc -Xloggc:"$JBOSS_LOG_DIR/gc.log" -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading

For a domain server GC logging is not enabled for the Process Controller, Host Controller or the Servers. You will need to enable these yourself through JVM properties on the Domain Controller for the level you want (i.e. Server level, Server Group level, Host level, Domain level)

Boot Log
For a standalone server the boot log file is defined as:


This is specified in the standalone.sh file but is also defined in the logging.properties file.

For a domain server the boot log file is defined as going to a different log depending on the process running.

For the Process Controller it is:


For the Host Controller it is:


This is specified in the domain.sh file but also in the logging.properties file.

The logging.properties file provides the configuration definitions.

For a standalone server the logging.properties file is defined as:


This is specified in the standalone.sh file

The logging.properties file for the standalone server contains the default information as to what is being logged, telling us the log categories configured and their log levels, the handler configurations and any formatters.

It is the logging.properties file that defines the FILE handler as going to:


...and the CONSOLE handler as going to: 


The logging properties file for a domain server is in:


This is specified in the domain.sh

This file contains boot logging configuration only for the Process Controller and the Host Controller.

There is a default server configuration file (which is the same as the standalone logging.properties file) in:


NOTE: The logging.properties file is only active until the logging subsystem is loaded. You will notice, by looking at the logging subsystem in standalone.xml or domain.xml, that it is the same configuration as you see in the logging.properties file.

Console Log
The Console Log is used when running the scripts in $JBOSS_HOME/bin/init.d which are used when installing JBoss as a service, otherwise it logs to the screen if you are running JBoss from the standalone.sh or domain.sh scripts.

Both the jboss-as-domain.sh and jboss-as-standalone.sh files define the console log to be stored in:


Log Levels
Whilst JBoss supports all log levels, there are 6 main ones that get used (this information is taken from the Admin and Configuration Guide):

  • TRACE – Used for messages providing detailed information about the running state of an application.
  • DEBUG – Used for messages that indicate the progress of individual requests or activities of an application.
  • INFO – Used for messages that indicate the overall progress of an application.
  • WARN – Used to indicate a situation that is not in error but is not considered ideal. May indicate circumstances that may lead to errors in the future.
  • ERROR – Used to indicate an error that has occurred that could prevent the current activity or request from completing, but will not prevent the application running.
  • FATAL – Used to indicate events that could cause critical service failure.

NOTE: VERBOSE is not a log level that JBoss supports.

JBoss CLI Logging

By default the JBoss CLI logging is turned off. The configuration for this is in:


So, to sum up the first of my blogs about JBoss logging: I have looked at the default configuration we have when first installing JBoss. The second part will look at how we can configure the logging from these default settings, and then I'll examine some best practice in the final article.