24 May 2016

JBoss EAP 7 - A First Look

by Brian Randell


In this post, Brian Randell takes a peek at the new JBoss EAP 7.0.0 release and gives his impressions as a JBoss consultant and administrator. He'll dig deeper under the covers and throw a little light on some of the new features and enhancements in future posts, but for this first look he'll reveal what he sees when getting it up and running for the first time.





The first version of JBoss EAP 7 was released on 10th May 2016 (Red Hat JBoss EAP 7.0.0). It's based on WildFly 10 (http://wildfly.org/) and uses Java SE 1.8, implementing the Java EE 7 Full Platform and Web Profile standards. The full list of supported configurations is available here (please note that access requires a Red Hat subscription): https://access.redhat.com/articles/2026253


Looking through what's new, I could see a number of areas that were of immediate interest to me:

  • The replacement of HornetQ with ActiveMQ Artemis (https://activemq.apache.org/artemis/index.html)
  • The replacement of JBoss Web with Undertow (http://undertow.io/)
  • Ability to use JBoss as a Load Balancer
  • Server Suspend Mode
  • Offline Management CLI
  • Profile Hierarchies
  • Datasource Capacity policies
  • Port Reduction
  • Backwards compatibility with EAP 6 and some interoperability with EAP 5

There are many more enhancements and features listed in the release notes and I am sure others will spring out at you as items you want to investigate further. Putting these aside for now, let’s get it installed.

When I'm looking at a new system, I like to dive in and get it running, then investigate it from a first look (where I concentrate on normal operation), through to a more detailed investigation on those areas that are of interest to me.

For my first look at JBoss EAP 7, I used an Amazon EC2 t2.medium shared host running Red Hat Linux 7.2 (Maipo) with 4GB RAM and 2 vCPUs. I downloaded the Oracle Java JDK 8u92 (http://www.oracle.com/technetwork/java/javase/downloads/index.html) and JBoss EAP 7.0.0 (http://developers.redhat.com/products/eap/download/) (requires a Red Hat subscription) zip files and extracted them into the /opt/java and /opt/jboss directories respectively. I then created the users java and jboss and chown’d the respective files. I set up the JAVA_HOME environment variable and I was good to go.

Fundamentally, running JBoss EAP 7 is the same as running EAP 6. You install it in the same way and run it in a similar way. The only difference for me was on RHEL 7.2, where, if you set up JBoss to run as a service, you run the command without the ‘.sh’ of the script name. Having placed the script jboss-eap7.sh in /etc/init.d/ and registered it through chkconfig, the service is run with the command:


service jboss-eap7 start

whereas on RHEL 6 you run it as:


service jboss-eap7.sh start
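
For reference, the registration itself looked roughly like this (a minimal sketch; jboss-eap7.sh is my own wrapper script, so adjust the name and paths to your installation):

cp jboss-eap7.sh /etc/init.d/
chmod 755 /etc/init.d/jboss-eap7.sh
chkconfig --add jboss-eap7.sh
chkconfig jboss-eap7.sh on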

The first difference I noticed when running JBoss EAP 7 for the first time is that, as the libraries are based on WildFly rather than jboss-as, the logs show WFLY references rather than JBAS references. For those of us who search the logs for particular references, and have monitoring set up against them, this is a big change. For example, the JBoss start message is now logged under reference WFLYSRV0025 (whereas it used to be JBAS015874):


INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.0.0.GA (WildFly Core 2.1.2.Final-redhat-1) started in 4142ms - Started 306 of 591 services (386 services are lazy, passive or on-demand)
INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss EAP 6.4.0.GA (AS 7.5.0.Final-redhat-21) started in 3441ms - Started 192 of 229 services (68 services are lazy, passive or on-demand)

You will also notice here that, even though I am running the same configuration file (standalone-full.xml), the new EAP 7 server starts a lot more services, which makes it start more slowly than EAP 6. On average (over 10 starts) EAP 7 took 4180ms, whereas EAP 6 took 3573ms.

We can also compare using standalone.xml, where you can see that EAP 7 again starts a lot more services and is correspondingly slower than EAP 6: an average of 3289ms for EAP 7 versus 2667ms for EAP 6.


INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.0.0.GA (WildFly Core 2.1.2.Final-redhat-1) started in 3227ms - Started 267 of 553 services (371 services are lazy, passive or on-demand)
INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss EAP 6.4.0.GA (AS 7.5.0.Final-redhat-21) started in 2652ms - Started 153 of 191 services (57 services are lazy, passive or on-demand)

The next difference I noticed was the admin console, where the layout has changed. The same high-level options are there as in the 6.4 console, but when navigating into the sections the layout changes become more noticeable.




Figure 1 - JBoss EAP 7 Admin Console



Figure 2 - JBoss EAP 7 Subsystem Navigation



Figure 3 - JBoss EAP 7 Subsystem Settings

This unfortunately means more clicks to navigate to the same point you would have reached in 6.4, and as the settings take up a whole screen you have to click back before you can navigate elsewhere. On first look this could become a frustration when using the console.

When using the CLI there are some other differences to be seen. The default port for connection to the CLI has changed from 9999 to 9990. Looking at the port configuration you can see a limited range of ports configured in EAP 7. This is because the http and management ports are used for a variety of protocols.
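
If you want to inspect the bindings on your own server, a quick way is to list them from the CLI (a minimal sketch; run it against the socket binding group your configuration actually uses, standard-sockets here):

[standalone@localhost:9990 /] ls /socket-binding-group=standard-sockets/socket-binding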



You can see there are no management-native or messaging ports. It is also worth noting that the default management-https port is now 9993 rather than 9443 as it was before.

There are also some new CLI commands that can be used, such as set and unset to assign variables, unalias so you can turn off a defined alias and connection-info to show details of the connection.
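
As a quick illustration of how these are used (a hedged sketch; the variable name dsname is my own):

[standalone@localhost:9990 /] set dsname=ExampleDS
[standalone@localhost:9990 /] /subsystem=datasources/data-source=$dsname:read-resource
[standalone@localhost:9990 /] unset dsname
[standalone@localhost:9990 /] connection-info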

There are also some new CLI operations that can be used, such as list-add, list-get, list-clear, list-remove, map-get, map-clear, map-put, map-remove and query, which manipulate list and map attributes on a resource and query resources. These aren’t very well documented and will need further investigation.
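
For example, list-add and list-get can work with the handlers list on the root logger (the handler name MYHANDLER below is illustrative; this is just a sketch of the syntax):

[standalone@localhost:9990 /] /subsystem=logging/root-logger=ROOT:list-add(name=handlers, value=MYHANDLER)
[standalone@localhost:9990 /] /subsystem=logging/root-logger=ROOT:list-get(name=handlers, index=0)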

There are also suspend and resume operations: suspend allows the server to complete its in-flight work gracefully without accepting new requests, and resume returns it to normal operation.
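
From the CLI this looks something like the following (a sketch against the default standalone server; the timeout is in seconds):

[standalone@localhost:9990 /] :suspend(timeout=60)
[standalone@localhost:9990 /] :read-attribute(name=suspend-state)
[standalone@localhost:9990 /] :resume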

Future blog posts will delve into the technology changes, features and enhancements, but even from an initial first look at JBoss EAP 7, and before any deep investigation, there are some immediate differences that will need to be thought about and evaluated when converting production systems from EAP 6.

References


JBoss EAP 7 download (requires Red Hat subscription)
http://developers.redhat.com/products/eap/download/


Red Hat EAP 7 – supported configurations (requires Red Hat subscription)
https://access.redhat.com/articles/2026253

Undertow - http://undertow.io/











19 May 2016

JBoss Logging and Best Practice for EAP 6.4 (Part 3 of 3)


by Brian Randell

So far in this series of posts about JBoss logging and best practice, we have seen what JBoss EAP 6.4.0 provides out of the box and how you might go about changing that configuration. As you may realise by now, there are a lot of areas you can configure and customise. This post takes a look at what you need to be thinking about when deciding what you want to implement in a production environment.

The areas I want to look at in this, the third and final part of the series are:
  • What do we need in our production environments?
  • What do we need to ask of our developers?





Production Implementation

For a JBoss deployment to be production ready from a logging perspective we need to think about several key areas:
  • What areas are the priority for us to monitor
  • What housekeeping should be in place
  • What can we do to troubleshoot issues when they arise


Log monitoring

For most organisations, monitoring solutions are in place that can be configured to connect to the server (usually through an agent), read the log and alert on keywords such as ERROR and FATAL. You could also set up the monitoring solution to be more specific and alert only on certain phrases.

It therefore makes sense for any JBoss server that the log being watched by the monitoring solution is a single log that contains all messages at these log levels, and that can be easily parsed. From an administrative point of view this is also what I would want to see: one log that contains all I need to know about the current running of the system.

By default, when installing JBoss as a service we get two logs: a console log and a server log. The console log shows everything that has happened since the last restart, whereas the server log shows everything that has happened. For me, only one of these logs is required and it’s the server log.

This is the mainstay of your information about the system and should be the only one you need to worry about. So for me – I ignore and limit the information sent to the console log when running JBoss as a service, and concentrate on the server log.

Another thought here is to copy daily server logs to a central server. This can be useful if any trend analysis is required or if you are troubleshooting across a domain.

This may sound obvious, but as the monitoring will alert – the log needs to be clear of errors when you first start monitoring it in production. It is never sensible to start with errors already occurring.


Log housekeeping

If you do not have any log rotation or housekeeping and endlessly keep logs then eventually disk space will be an issue.

There is generally little point in keeping logs in production for more than 14 days, and often no more than 7 days is needed. If you are monitoring the system effectively then alerts will be seen immediately and dealt with. If any logs need to be kept for Problem Management or Root Cause Analysis then these can be moved away manually.

One thing to realise here is that if JBoss is running and you remove the active log file (moving it to an archive directory, perhaps), it won’t automatically be regenerated. The best practice is to copy it and then empty it in situ.
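
A minimal sketch of that approach, assuming the default EAP 6.4 standalone log location (the archive path is illustrative):

cp /opt/jboss/jboss-eap-6.4/standalone/log/server.log /archive/server.log.$(date +%Y%m%d)
cat /dev/null > /opt/jboss/jboss-eap-6.4/standalone/log/server.log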

Luckily JBoss provides a number of different Log Handlers for us to use to make the housekeeping easy. There are several handlers that can be used to rotate the log on size or time. Now in 6.4 there is also a handler (Periodic Size) that can do either – and acts on whichever triggers the rotation first.
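
As a sketch of the syntax for that combined handler (the resource is periodic-size-rotating-file-handler in the logging subsystem; the name and values below are illustrative, so check the operation description on your own installation):

/subsystem=logging/periodic-size-rotating-file-handler=PERIODICSIZE:add(file={"path"=>"periodicsize.log","relative-to"=>"jboss.server.log.dir"}, suffix=".yyyy-MM-dd", rotate-size=10m, max-backup-index=5)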


Log troubleshooting

If there is an issue on the server that we need to look into more closely, then we have the ability to add specific log categories and raise the logging level as we need. This takes effect dynamically, so we can turn logging up while the system is exhibiting a problem and turn it back down when we have finished. This is particularly beneficial because it stops us swamping the logs with messages we don’t care about, which can cause performance headaches and can grow the logs substantially, potentially leading to disk space issues.

We also have the potential here to log specific log categories to a different handler and hence a different log file so we can see our troubleshooting messages in a different file outside of the standard logging mechanism which then won’t interfere with normal monitoring.

Personally I like to troubleshoot against a separate debug log and have a Log Handler previously set up that I can utilise if and when required. This way you can place that log elsewhere, perhaps on a different file system or disk so it interferes less with the normal running of the system.

For this you would create a new Handler and use that handler for specific log categories when required.

See the examples In the previous blog in this series for how to create a handler and associate a log category with that handler.

For boot errors, EAP 6.4.0 has introduced a CLI command, read-boot-errors. It is a management command and can be used to monitor boot errors.

/core-service=management:read-boot-errors

This allows a script to be used to see if any boot errors have occurred, which is particularly useful if you are starting up a number of servers at the same time.
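
A rough sketch of such a script (the host names are illustrative, and it assumes the servers are using the default native management port):

for host in server1 server2 server3; do
  echo "=== $host ==="
  $JBOSS_HOME/bin/jboss-cli.sh --connect --controller=$host:9999 \
      --command="/core-service=management:read-boot-errors"
done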


Other Logging

As we have seen in the previous posts in this series, the Management Interface Logging is turned off by default.
I like this to be turned on. If you are running a large environment it provides another avenue for troubleshooting and auditing. It could be that a problem occurred because the wrong CLI command was issued, whether ad hoc at the time or from a script run automatically. Being able to see all activity on the server around the time an issue occurred is invaluable.
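
As a reminder, enabling it is a single CLI command (covered in part two of this series):

/core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)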

Developer guidelines

When we talk about log levels (as defined previously) and the types of messages that should fall into each level, it’s the developers that often don’t adhere quite as strictly as they should.
I am not a developer, but as an administrator who is the first point of call when issues are flagged in the log, I want to look at the logs on JBoss when an application is running and ask these questions:


How noisy is the log?

Try the log at different log levels and see whether each level has what you would consider the right information for that level. For example, do INFO messages look like they should be INFO, or should they really be DEBUG?
I have seen many applications that are ‘noisy’ and that make the log virtually unreadable and very difficult to diagnose when issues are occurring.


Stack Traces

If Stack Traces are logged for an error – are they useful for the context of the error?

Stack Traces can be large so you don’t want too many of them cluttering your ability to read the log. You only want stack traces shown when there is an ERROR level message or at a TRACE level (and potentially DEBUG, though I would not like to see them at this level either). For INFO level messages there should be no need for Stack Traces.

We also need to see whether we are getting multiple Stack Traces for the same error at different levels of the stack. One error should only need one Stack Trace.

And finally on Stack Traces: are they necessary anyway? Can the ERROR description define the issue well enough that you don’t need to see the entire Stack Trace?

Don’t be afraid to push these issues back to the developers to change. If it affects your ability to properly monitor and troubleshoot a production application then it isn’t production ready in my eyes.

Summary

Hopefully some aspects of this series of posts have given you pause for thought and helped you along your way for implementing a production logging configuration that provides an environment that is well monitored and has easier troubleshooting. JBoss has a lot of flexibility where monitoring is concerned and you can get lost in the plethora of options available.

My advice: keep it simple, straightforward and uncluttered. Let it work for you, not against you.

References



Part One
Part Two





12 May 2016

JBoss Logging and Best Practice for EAP 6.4 (Part 2 of 3)

By Brian Randell

Following on from Brian's previous post in the series, which showed you the default logging configuration for JBoss EAP 6.4.0, this post takes a look at how you can configure some of the core components. I'll be taking a standard, common approach for the purposes of this post and will leave more advanced configuration for future posts.

For this post we will primarily look at the configuration for a standalone deployment.






Configuration

GC Log

The GC Log can be configured in the standalone.conf for standalone servers and in JVM properties for the domain servers.

For the standalone server these can be overridden as a whole by updating the JAVA_OPTS in the standalone.conf file.  (Note – you will need *all* the options you require)

The standalone.sh script checks for the presence of a ‘-verbose:gc’ entry in JAVA_OPTS.  So if this exists in the standalone.conf file then it will bypass the GC configuration in the standalone.sh.

An example additional line in standalone.conf is:


#
# Specify options to pass to the Java VM.
#
if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
   JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
   JAVA_OPTS="$JAVA_OPTS -Djboss.modules.policy-permissions=true"
   JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/opt/jboss/jboss-eap-6.4/standalone/log/gctest.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading"
else
   echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi

Note: I have changed the name of the log to gctest.log.


We can then see these options shown in the process:


$ ps -ef | grep ja
jboss     4438  4355 16 10:42 pts/0    00:00:07 java -D[Standalone] -server -XX:+UseCompressedOops -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djboss.modules.policy-permissions=true -verbose:gc -Xloggc:/opt/jboss/jboss-eap-6.4/standalone/log/gctest.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading -Dorg.jboss.boot.log.file=/opt/jboss/jboss-eap-6.4/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/jboss-eap-6.4/standalone/configuration/logging.properties -jar /opt/jboss/jboss-eap-6.4/jboss-modules.jar -mp /opt/jboss/jboss-eap-6.4/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/jboss-eap-6.4 -Djboss.server.base.dir=/opt/jboss/jboss-eap-6.4/standalone


Boot Log

For this post, rather than show how to modify the boot logging, it is worth mentioning the new CLI command introduced in 6.4 – ‘read-boot-errors’.

This is part of the management core service and looks at the log, and reports back errors relating to the start of the server. This is very useful as it can be scripted using CLI to look at numerous servers and check them, pulling the information centrally.


To test this using the standalone server, I renamed the h2 directory so the server could not find the h2 module:


$ pwd
/opt/jboss/jboss-eap-6.4/modules/system/layers/base/com/h2database
$ mv h2 h2old


I then started the JBoss server and ran the CLI command:


$ ./jboss-cli.sh --connect
[standalone@localhost:9999 /] /core-service=management:read-boot-errors
{
    "outcome" => "success",
    "result" => [
        {
            "failed-operation" => {
                "operation" => "add",
                "address" => [
                    ("subsystem" => "datasources"),
                    ("jdbc-driver" => "h2")
                ]
            },
            "failure-timestamp" => 1460370253333L,
            "failure-description" => "JBAS010441: Failed to load module for driver [com.h2database.h2]"
        },
        {
            "failed-operation" => {
                "operation" => "add",
                "address" => [
                    ("subsystem" => "datasources"),
                    ("data-source" => "ExampleDS")
                ]
            },
            "failure-timestamp" => 1460370254540L,
            "failure-description" => "{\"JBAS014771: Services with missing/unavailable dependencies\" => [\"jboss.data-source.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]\",\"jboss.driver-demander.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]\"]}",
            "services-missing-dependencies" => [
                "jboss.data-source.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]",
                "jboss.driver-demander.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]"
            ]
        },
        {
            "failed-operation" => {
                "operation" => "enable",
                "address" => [
                    ("subsystem" => "datasources"),
                    ("data-source" => "ExampleDS")
                ]
            },
            "failure-timestamp" => 1460370254542L,
            "failure-description" => "{\"JBAS014879: One or more services were unable to start due to one or more indirect dependencies not being available.\" => {\"Services that were unable to start:\" => [\"jboss.data-source.reference-factory.ExampleDS\",\"jboss.naming.context.java.jboss.datasources.ExampleDS\"],\"Services that may be the cause:\" => [\"jboss.jdbc-driver.h2\"]}}",
            "missing-transitive-dependency-problems" => {
                "Services that were unable to start:" => [
                    "jboss.data-source.reference-factory.ExampleDS",
                    "jboss.naming.context.java.jboss.datasources.ExampleDS"
                ],
                "Services that may be the cause:" => ["jboss.jdbc-driver.h2"]
            }
        }
    ]
}

You can see the boot errors are shown and pinpoint the area you need to investigate.


Console Log

As mentioned in the previous post, the console log gets used by default when using the jboss-as-standalone.sh or jboss-as-domain.sh scripts.  The file is placed in the /var/log/jboss-as/ directory.

When setting up JBoss to run as a service you will use the jboss-as.conf file. The easiest way to modify where the console log goes is to edit this file, which feeds the configuration into the jboss-as-standalone.sh and jboss-as-domain.sh scripts.

Edit the jboss-as.conf file and uncomment the JBOSS_CONSOLE_LOG configuration, and modify as appropriate.

In my example below I have uncommented the line and changed the filename to test.log.


# General configuration for the init.d scripts,
# not necessarily for JBoss AS itself.

# The username who should own the process.
#
JBOSS_USER=jboss

# The amount of time to wait for startup
#
# STARTUP_WAIT=30

# The amount of time to wait for shutdown
#
# SHUTDOWN_WAIT=30

# Location to keep the console log
#
# JBOSS_CONSOLE_LOG=/var/log/jboss-as/console.log
JBOSS_CONSOLE_LOG=/var/log/jboss-as/test.log

When I now stop and start the service, you can see the new log file in the directory alongside the old one.


# pwd
/var/log/jboss-as
# ll
total 16
-rw-r--r--. 1 root root 5679 Apr 11 12:28 console.log
-rw-r--r--. 1 root root 4776 Apr 11 12:35 test.log


Handlers

There are 7 types of Handlers you can create and you can create multiple handlers of each type. For this example we will create a new ‘Size’ Handler Type. We will do this through the CLI and see the results in the Console.

To start, our server is running and we have connected using the CLI. To add a new Handler we use the add command with the new handler name. For the most part we will keep the default values:


[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:add(file={"path"=>"newsize.log", "relative-to"=>"jboss.server.log.dir"},level="DEBUG",enabled=true, append=false, rotate-size=5m,max-backup-index=10,rotate-on-boot=true,suffix=".yyyy-MM-dd-HH")
{"outcome" => "success"}

We have created a handler called ‘NEWSIZE’ that will write to the file ‘newsize.log’ at DEBUG level, rotate when the file reaches 5MB, and keep up to 10 backup files.

We can check the values for the handler we have created:


[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:read-resource
{
    "outcome" => "success",
    "result" => {
        "append" => false,
        "autoflush" => true,
        "enabled" => true,
        "encoding" => undefined,
        "file" => {
            "path" => "newsize.log",
            "relative-to" => "jboss.server.log.dir"
        },
        "filter" => undefined,
        "filter-spec" => undefined,
        "formatter" => "%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n",
        "level" => "DEBUG",
        "max-backup-index" => 10,
        "name" => "NEWSIZE",
        "named-formatter" => undefined,
        "rotate-on-boot" => true,
        "rotate-size" => "5m",
        "suffix" => ".yyyy-MM-dd-HH"
    }
}


In the Console we can see the Handler added:



We can see our new log created on the file system:


[root@localhost init.d]# ll /opt/jboss/jboss-eap-6.4/standalone/log/
total 156
-rw-rw-r--. 1 jboss jboss   1669 Apr 11 09:50 backupgc.log.current
-rw-rw-r--. 1 jboss jboss   1500 Apr 11 10:05 gc.log.0.current
-rw-rw-r--. 1 jboss jboss   1494 Apr 11 12:35 gctest.log.0.current
-rw-r--r--. 1 jboss jboss      0 Apr 11 12:59 newsize.log
-rw-rw-r--. 1 jboss jboss 133362 Apr 11 12:35 server.log
-rw-rw-r--. 1 jboss jboss  10419 Feb  4 19:50 server.log.2016-02-04

If we want to modify an entry we can use the write-attribute command. So if we want to change the size of the files to 10Mb we can use the following:


[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:write-attribute(name=rotate-size,value=10m)

If we want to remove the handler entirely, we can use the remove command:

[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:remove

Log Categories

You can define a log category against a particular handler and level of message you want to see. This is useful when troubleshooting if you know the area you want to analyse, and want to see a higher level of logging just for that area.

For this example we will add a log category for org.apache.coyote and attach it to our NEWSIZE handler we have just created.

To add a new log category we need to use the add command with the new category:


[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:add(category=org.apache.coyote,level=DEBUG,handlers=[NEWSIZE])
{"outcome" => "success"}

We can check the new category:

[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:read-resource
{
    "outcome" => "success",
    "result" => {
        "category" => "org.apache.coyote",
        "filter" => undefined,
        "filter-spec" => undefined,
        "handlers" => ["NEWSIZE"],
        "level" => "DEBUG",
        "use-parent-handlers" => true
    }
}


We can see this new category in the console:




















If we want to modify an entry we can use the write-attribute command.  So if we want to change the log level we can use the following:


[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:write-attribute(name=level, value=TRACE)

If we want to remove the log category entirely we can use the remove command:


/subsystem=logging/logger=org.apache.coyote:remove


CLI Logging

To log activity through the CLI and through the Console, you can easily enable the Management Interface logging using a CLI command.


[standalone@localhost:9999 /] /core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)

This produces a management audit log file created at $JBOSS_HOME/standalone/data/audit-log.log

You can also modify the $JBOSS_HOME/bin/jboss-cli-logging.properties file for just the CLI logging.  Change the log level to INFO and uncomment the handler.


# Additional logger names to configure (root logger is always configured)
loggers=org,org.jboss.as.cli
logger.org.level=OFF
# assign a lower level to enable CLI logging
logger.org.jboss.as.cli.level=INFO

# Root logger level
logger.level=${jboss.cli.log.level:INFO}
# Root logger handlers
# uncomment to enable logging to the file
logger.handlers=FILE

Once this is done and the CLI is restarted, the file jboss-cli.log will be created with the CLI information stored in it.


Advanced Configuration

As mentioned earlier, there are a number of more advanced logging configurations that could be achieved. As these are less standard and commonplace, they have been left for future blog posts.
  • Logging Profiles and their Configuration
  • SysLog Handlers
  • Log Category Filtering
  • Asynchronous logging


Summary

To summarise this blog series so far: We have seen what the default logging configuration is in JBoss EAP 6.4.0 and now know how to reconfigure the most common aspects for different types of logging.

Part three will look at the recommendations for which configuration changes you should make.





5 May 2016

Oracle WebLogic Work Managers - A Practical Overview

by Andy Overton

In this post, Andy Overton presents an insight into Oracle WebLogic Work Managers, going through the basics of what they are, how they are used, and providing deeper configuration and deployment advice. Using a test project, he examines the practical application of work managers, and looks at the control you can get over request handling and prioritisation.



How to configure and test WebLogic Work Managers



Overview

So, first of all, what are Work Managers?

Prior to WebLogic 9, Execute Queues were used to handle thread management. You created thread pools to determine how workload was handled, and different types of work were executed in different queues based on priority and ordering requirements. The issue was that it was very difficult to determine the correct number of threads required to achieve the throughput your application needed and avoid deadlocks.

Work Managers are much simpler. All managers share a common thread pool and priority is determined by a priority-based queue. The thread pool size is dynamically adjusted in order to maximise throughput and avoid deadlocks. In order to differentiate and prioritise between different applications, you state objectives via constraints and request classes (e.g. fair share or response time). 

More on this later!


Why use Work Managers?

If you don’t set up your own Work Managers, the default will be used. This gives all of your applications the same priority and they are prevented from monopolising threads. Whilst this is often sufficient, it may be that you want to ensure that:

  • Certain applications have higher priority over others.
  • Certain applications return a response within a certain time.
  • Certain customers or users get a better quality of service.
  • A minimum thread constraint is set in order to avoid deadlock.



Types of Work Manager

  • Default – Used if no other Work Manager is configured. All applications are given an equal priority.
  • Global – Domain-scoped and are defined in config.xml. Applications use the global Work Manager as a blueprint and create their own instance. The work each application does can then be distinguished from other applications.
  • Application – Application-scoped and applied only to a specific application. Specified in either weblogic-application.xml, weblogic-ejb-jar.xml, or weblogic.xml (a minimal weblogic.xml example is sketched below).
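
As an illustration of the application-scoped variety, a work manager with a max threads constraint (constraints are covered in the next section) can be declared directly in weblogic.xml, roughly as follows. The names and count are illustrative; this is a sketch rather than something taken from the test project used later in this post:

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <work-manager>
        <name>AppScopedWorkManager</name>
        <max-threads-constraint>
            <name>MaxThreads-5</name>
            <count>5</count>
        </max-threads-constraint>
    </work-manager>
</weblogic-web-app>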


Constraints and Request Classes

A constraint defines the minimum and maximum number of threads allocated to execute requests and the total number of requests that can be queued or executing before the server begins rejecting requests. Constraints can be shared by several Work Managers.

Request classes define how requests are prioritised and how threads are allocated to requests. They can be used to ensure that high priority applications are scheduled before low priority ones, requests complete within a given response time or certain users are given priority over others. Each Work Manager may specify one request class.


Types of Constraint

  • Max threads – Default, unlimited.
    The maximum number of threads that can concurrently execute requests. Can be set based on the availability of a resource the request depends on e.g. a connection pool.
  • Min threads – Default, zero.
    The minimum number of threads to allocate to requests. Useful for preventing deadlocks.
  • Capacity – Default, -1 (never reject requests).
    The capacity (including queued and executing) at which the server starts rejecting requests.



Types of Request Class


  • Fair Share – Defines the average thread-use time. Specified as a relative value, not a percentage.
  • Response Time – Defines the requested response time (in milliseconds).
  • Context – Allows you to specify request classes based on contextual information such as the user or user group.



Initial Setup

For this blog the following versions of software were used:

  • Ubuntu 14
  • JDK 1.8.0_73
  • WebLogic Server 12.2.1
  • JMeter 2.13
  • NetBeans 8.1

So, first of all, install WebLogic and set up a very basic domain (test_domain) with just an Admin server.


Register the server with the IDE:
  1. Open the Services window
  2. Right-click the Servers node and choose 'Add Server'
  3. Select Oracle WebLogic Server and click 'Next'
  4. Click 'Browse' and locate the directory that contains the installation of the server, then click 'Next'. The IDE will automatically identify the domain for the server instance.
  5. Type the username and password for the domain.


Creating the test project


Select New Project: Java EE - Enterprise Application
Name: WorkManagerTest
Server: Oracle WebLogic Server

Under WorkManagerTest-war, right click 'Web Pages' and select 'New JSP'.
File Name: test.jsp


Change the body to:


<body>
    <h1>Work manager test OK</h1>
    <% 
        Thread.sleep(1000); // sleep for 1 second
    %>
</body>

Right click on WorkManagerTest-war, select 'Deploy' and then go to: http://localhost:7001/WorkManagerTest-war/test.jsp where you should see your page displayed.

Now create another application, this time called WorkManagerTest-2.
This will be identical to the first, but name the JSP test-2.jsp and change the body to:


<body>
    <h1>Work manager test 2 OK</h1>
    <% 
        Thread.sleep(1000); // sleep for 1 second
    %>
</body>


Go to the WebLogic console: http://localhost:7001/console


Go to Deployments, click on WorkManagerTest-war, then select Monitoring, Workload. Here you can see the work managers, constraints and request classes associated with your application. As we haven’t yet set anything up, the app is currently using the default Work Manager.


Creating the JMeter test

Right click on Test Plan and select Add Threads (Users) > Thread Group

Name: Work Manager Test
Number of Threads (users): 10
Ramp-Up Period (in seconds): 10

This will call your application once per second for 10 seconds.

Right click on your new Thread Group and select: Add > Sampler > HTTP Request

Name: test.jsp
Server Name: localhost
Port: 7001
Path: WorkManagerTest-war/test.jsp

Right click on Test - 10 users
Add, Listener, View Results in Table
Add, Listener, View Results Tree

Create another HTTP request as follows:
Name: test-2.jsp
Server Name: localhost
Port: 7001
Path: WorkManagerTest-2-war/test-2.jsp

Right click on Test - 10 users
Add, Listener, View Results in Table
Add, Listener, View Results Tree

Save your test plan and then run it.

Click on results tree and table. With tree, you can view request and response data; obviously not very interesting in our case, but handy if you want to see what's being returned from an app. More useful is View Results in Table. This is very handy for quickly seeing response times. You should see that each of your JSPs/applications was called 10 times and each time it took just over a second to return a response.



Creating the work managers

In the WebLogic admin console:


  • Environment: Work Managers
  • New: Work Manager
  • Name: WorkManager1
  • Target: AdminServer

Create another the same but name it 'WorkManager2'


Using Fair Share request classes

In the WebLogic admin console


  • Environment: Work Managers
  • New: Fair Share Request Class
  • Name: FairShareReqClass-80
  • Fair Share: 80
  • Target: AdminServer

Create another with name FairShareReqClass-20, Fair Share 20

Now we need to associate the request classes with the Work Managers.

  • Select WorkManager1, under Request Class select FairShareReqClass-80 and save.
  • Select WorkManager2, under Request Class select FairShareReqClass-20 and save.

For the changes to take effect you will need to restart the server.

Alter web.xml in both of the applications. This can be found under WEB-INF.

WorkManagerTest-war

Add:


<servlet>
    <servlet-name>Test1</servlet-name>
    <jsp-file>test.jsp</jsp-file>
    <init-param>
        <param-name>wl-dispatch-policy</param-name>
        <param-value>WorkManager1</param-value>
    </init-param>
</servlet>


WorkManagerTest-2-war


<servlet>
    <servlet-name>Test2</servlet-name>
    <jsp-file>test-2.jsp</jsp-file>
    <init-param>                     
        <param-name>wl-dispatch-policy</param-name>
        <param-value>WorkManager2</param-value>
    </init-param>
</servlet>




Now, when you run the JMeter test again, you should see results similar to the following:






















What we are seeing is that test1.jsp is using the Work Manager with a Fair Share request class set to 80, whereas test2.jsp is using one set to 20.

There is an 80% (80/100) chance that the next free thread will perform work for jsp1. There is a 20% (20/100) chance it will next service jsp2.

As mentioned previously, the values used aren’t a percentage, although in our case they happen to add up to 100.

If you were to add another JSP, also using a Fair Share request class set to 20, the figures would be different: jsp1 would have a 66.6% chance (80/120), and jsps 2 and 3 would each have a 16.6% chance (20/120).


Using Response Time request classes


Next we will take a look at using response time request classes. There is no need to alter either JSP - we will just create new request classes and set our work managers to use those.

In the WebLogic console, go to Environment – Work Managers

  • Select New: Response Time Request Class
  • Name: ResponseTime-1second
  • Goal: 1000
  • Target: AdminServer

Create another but with the following values:

  • Name: ResponseTime-5seconds
  • Goal: 5000
  • Target: AdminServer

Finally, alter the two work managers:

Alter WorkManager1 to use the ResponseTime-1second response class and WorkManager2 to use the ResponseTime-5seconds response class.
Then restart the server.

Now, alter your JMeter test so that it loops forever.

Run it again and you should see that, to begin with, it takes a little while for the work managers to take effect. After a while, however, you should see that the responses from both apps start to even out and take around a second.

This is described in the Oracle documentation: “Response time goals are not applied to individual requests. Instead, WebLogic Server computes a tolerable waiting time for requests with that class by subtracting the observed average thread use time from the response time goal, and schedules requests so that the average wait for requests with the class is proportional to its tolerable waiting time.”


Context request classes

Context request classes are compound request classes that provide a mapping between the request context and a request class. This is based upon the current user or the current user’s group.

So it’s possible to specify different request classes for the same servlet invocation depending on the user or group associated with the invocation.

I won’t create one as a part of this blog as they simply utilise the other request class types.


Using constraints

Constraints define the minimum and maximum number of threads allocated to execute requests and the total number of requests that can be queued or executing before WebLogic Server begins rejecting requests.

As they can cause requests to be queued up or, even worse, rejected, they should be used with caution. A typical use case for a maximum threads constraint is to base it on the size of a data source connection pool. That way you don’t attempt to handle a request that requires a database connection when one cannot be obtained.

There are 3 types of constraint:

  • Minimum threads
  • Maximum threads
  • Capacity

The minimum threads constraint ensures that the server will always allocate this number of threads, the maximum threads constraint defines the maximum number of concurrent requests allowed, and the capacity constraint causes the server to reject requests when it has reached its capacity.

To see how this works in action, let’s create some constraints. 

Under Work Managers in the WebLogic console create the following, all targeted to the AdminServer:

New Max Threads Constraint:

  • Name - MaxThreadsConstraint-3
  • Count – 3

New Capacity Constraint:


  • Name - CapacityConstraint-10
  • Count – 10

Next, create a new Work Manager called ConstraintWorkManager, add the two constraints to it and then restart WebLogic.

Now, alter the Test1 application and change the Work Manager in web.xml from WorkManager1 to ConstraintWorkManager. Also, alter the sleep time from 1 second to 5 and then re-deploy your application.
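
For reference, after these changes the servlet entry and JSP body look like this (simply re-stating the earlier snippets with the new values):

<servlet>
    <servlet-name>Test1</servlet-name>
    <jsp-file>test.jsp</jsp-file>
    <init-param>
        <param-name>wl-dispatch-policy</param-name>
        <param-value>ConstraintWorkManager</param-value>
    </init-param>
</servlet>

<body>
    <h1>Work manager test OK</h1>
    <%
        Thread.sleep(5000); // sleep for 5 seconds
    %>
</body>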

Next, create a new JMeter test with the following parameters:

  • Number of Threads (users) – 10
  • Ramp-Up Period – 0

Run this test and you should see results similar to the following:













So, what’s happening here?

(Remember, we set the maximum threads to 3.) We send in 10 concurrent requests, and 3 of those begin to be processed immediately, whilst the others are put in a queue. So, we get the following:


At the start:

After 5 seconds:

After 10 seconds:

After 15 seconds:

Next, change the JMeter test. Raise the number of users to 13 and run the test again. This time you will see that 3 of the requests fail. This is due to the Capacity Constraint being set to 10. This means that only 10 requests can be either processing or queued and the others are rejected.



If you call the application from your browser whilst the test is running you will see that you receive a 503 Service Unavailable error (this can be replaced with your own error page).
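
If you do want to serve your own page instead, the standard servlet error-page mapping in web.xml is one way to do it (the page name below is illustrative):

<error-page>
    <error-code>503</error-code>
    <location>/overloaded.html</location>
</error-page>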

Take care when setting up thread constraints - you don’t want to be limiting what your server can process without good reason and you certainly don’t want to be rejecting requests without very good reason.

Conclusion

Hopefully, this overview of WebLogic Work Managers has given you an insight into what they are used for and how you can go about setting them up.

WebLogic does a good job of request handling itself out of the box but sometimes you will find that you need more control over which applications should take priority or what should happen in times of heavy load. 

In that case, Work Managers can prove very useful, although as with all such things – make sure you are certain of what you are trying to achieve, then test, test some more and then test again!

Knowing how you want your server to run and being sure how it is running are two very different things. Ensure you test for all potential loads and understand what will happen in all cases.



More popular WebLogic posts from our technical blog...


Installing WebLogic with Chef
Alan Fryer shows you how to create a simple WebLogic Cluster on a virtual machine with two managed servers using Chef.

Basic clustering with WebLogic 12c and Apache Web Server
Mike Croft demonstrates WebLogic’s clustering capabilities and shows you how to utilise the WebLogic Apache plugin to use the Apache Web Server as a proxy to forward requests to the cluster.


Alternative Logging Frameworks for Application Servers: WebLogic
Andrew Pielage  focuses on WebLogic, specifically 12c, and configuring it to use Log4j and SLF4J with Logback.


WebLogic 12c Does WebSockets - Getting Started
In this post, Steve demonstrates how to write a simple websockets echo example using 12.1.2


Weblogic - Dynamic Clustering in practice
In this blog post Andy looks at setting up a dynamic cluster on 2 machines with 4 managed servers (2 on each). He then deploys an application to the cluster and shows how to expand the cluster.


Getting the most out of WLDF Part 1: What is the WLDF?
The WebLogic Diagnostic Framework (WLDF) is an often overlooked feature of WebLogic which can be very powerful when configured properly. In this blog series, Mike Croft points out some of the low-hanging fruit so you can get to know enough of the basics to make use of some of the features, while having enough knowledge of the framework to take things further yourself.