3 June 2016

How to Configure Oracle Wallet with Tomcat

by Claudio Salinitro

In my last post, I explained how I used Oracle Wallet with PHP to prevent a client's PHP developers from gaining password access to the Oracle database. In this follow-on post, I'll show you how to configure Oracle Wallet with Tomcat.

Note: The wallet is created in exactly the same way as I showed in the first post. If you refer back to those instructions, we can then move quickly on to the software you'll need:

  • Tomcat
  • Oracle database 10g Release 2+
  • Oracle jar files:
    • oraclepki.jar
    • osdt_core.jar
    • osdt_cert.jar
    • ojdbc6.jar if you are using jdk6
    • ojdbc7.jar if you are using jdk7 or jdk8
Except for the ojdbcX.jar file, it seems you can find the other jars only inside the Oracle full client directories (not in Oracle instantclient) or in the Oracle database server directories.

Oracle Wallet configuration

Transfer the wallet on the Tomcat machine and create a file named tnsnames.ora inside the wallet directory with the following content:

connection_string =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = database_host)(PORT = database_port))
      (CONNECT_DATA = (SID = database_sid))
    )

  • connection_string = the connection string related to the credentials stored in the wallet
  • database_host = the database hostname
  • database_port = the database listen port
  • database_sid = the sid of the database

Create an entry for each credential stored in the wallet that the application will use.
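As a concrete example, an entry for the connection string db_credentials1 (used later in this post) might look like this - the hostname, port, and SID here are placeholders for your environment:

```
db_credentials1 =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
      (CONNECT_DATA = (SID = ORCL))
    )
```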

Create a file named sqlnet.ora inside the wallet directory with the following content:

   WALLET_LOCATION =
     (SOURCE =
       (METHOD = FILE)
       (METHOD_DATA =
         (DIRECTORY = [wallet path])
       )
     )


Note: The user running Tomcat needs to have read permission on the wallet files.

Tomcat configuration

Add the jar files (ojdbc drivers + oraclepki + osdt_core and osdt_cert) in the $CATALINA_BASE/lib directory.

Set the Java option -Doracle.net.tns_admin=wallet_path, for example:

export CATALINA_OPTS=-Doracle.net.tns_admin=/apps/tomcat/conf/wallet
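An alternative to exporting CATALINA_OPTS in the environment is a $CATALINA_BASE/bin/setenv.sh fragment, which catalina.sh sources on startup. A minimal sketch, assuming the wallet path used above:

```shell
# Sketch of a $CATALINA_BASE/bin/setenv.sh fragment (paths assumed).
# catalina.sh sources this file on startup, so the property reaches the JVM.
CATALINA_OPTS="$CATALINA_OPTS -Doracle.net.tns_admin=/apps/tomcat/conf/wallet"
echo "$CATALINA_OPTS"
```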

Finally restart Tomcat.

Test JSP page

If everything is OK, the configuration should already be working - but we need a test web application to confirm it.

For testing purposes, we can use the ROOT web application that is already in place, and which is shipped with any Tomcat distribution. Create a walletTest.jsp file in your $CATALINA_BASE/webapps/ROOT folder, with the following content:

<%@page import="java.sql.*, javax.sql.*, javax.naming.*, java.io.*"%>
<html>
  <head>
    <title>DB Test</title>
  </head>
  <body>
<%
    // Look up the datasource configured in context.xml and run a trivial query
    Context context = new InitialContext();
    Context envCtx = (Context) context.lookup("java:comp/env");
    DataSource ds = (DataSource) envCtx.lookup("jdbc/myoracle");
    Connection conn = ds.getConnection();
    Statement stmt = conn.createStatement();
    ResultSet result = stmt.executeQuery("SELECT 1 FROM DUAL");
    while (result.next()) {
        out.println("Result: " + result.getString(1));
    }
    result.close();
    stmt.close();
    conn.close();
%>
  </body>
</html>

And add the following configuration to the $CATALINA_BASE/conf/context.xml file, between the <Context> and </Context> tags:

<Resource name="jndi_name" auth="Container" type="javax.sql.DataSource" driverClassName="oracle.jdbc.OracleDriver" url="jdbc:oracle:thin:/@connection_string" connectionProperties="oracle.net.wallet_location=wallet_path"/>


  • wallet_path is the path to the directory containing the wallet - in my case /apps/tomcat/conf/wallet
  • connection_string is a text string that will be used by our application to connect to the database with the related credentials - in my case db_credentials1
  • jndi_name is the jndi name used by the applications to use the datasource - in my case jdbc/myoracle
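Putting the values from my setup together, the Resource element looks like this (the values are the "in my case" ones listed above):

```xml
<Resource name="jdbc/myoracle"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.OracleDriver"
          url="jdbc:oracle:thin:/@db_credentials1"
          connectionProperties="oracle.net.wallet_location=/apps/tomcat/conf/wallet"/>
```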

Security considerations

As with Apache, for Tomcat (or any other application server) the wallet's security relies on OS file permissions: any user with read access to the wallet files effectively has access to the database data.

Also keep in mind that you cannot configure more than one wallet per Tomcat instance. This means that any application running on that instance can potentially access the data of the other applications once it discovers their connection strings.

How to Configure PHP to use the Oracle Wallet

by Claudio Salinitro

In today's post, I'll be looking at a problem one of our clients encountered when trying to prevent the PHP application developers from gaining access to their Oracle database passwords. Whilst the obvious answer might be to move the connection parameters to an environment-specific file, the best solution I found for our client was to configure PHP to use the Oracle Wallet. Here, I'll go through the configuration process I undertook step-by-step.

The problem

As I suggested in my introduction, the first obvious solution might be to move the connection parameters to an environment-specific file, but this has two consequences:

1. It demands a change to their deployment procedure

2. The password is stored in clear text on the filesystem

The solution I wanted would have to overcome these issues, and what I chose to do was configure PHP to use the Oracle wallet.

What is Oracle Wallet?

Well, from the Oracle documentation...

Oracle Wallet provides a simple and easy method to manage database credentials across multiple domains. It allows you to update database credentials by updating the Wallet instead of having to change individual datasource definitions. This is accomplished by using a database connection string in the datasource definition that is resolved by an entry in the wallet.

Exactly what we need!

Software needed

  • Apache configured with mod_php or php-fpm (compiled with oci8)
  • Oracle instant client (basic + sdk)
  • Oracle database 10g Release 2+

Create the Oracle Wallet

Create a wallet with the following command:

mkstore -wrl "wallet_path" -create
Enter password:
Enter password again:

This will create an empty password protected container (the wallet) to store your database credentials. The wallet is composed of two files, cwallet.sso and ewallet.p12, and these will be created in the [wallet path] location.

Let’s start adding some credentials to the newly created wallet. Execute:

mkstore -wrl "wallet_path" -createCredential connection_string username password


  • wallet_path is the path to the directory containing the wallet
  • connection_string is a text string that will be used by our application to connect to the database with the related credentials
  • username and password are the database credentials that will be used for the connection

You can add as many credentials as you want to the same wallet, as long as they have different connection strings, and you can list the credentials stored in the wallet with the following command:

mkstore -wrl "wallet_path" -listCredential
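mkstore can also update or remove an entry later. A sketch of the rest of the credential lifecycle (the connection string, username and passwords are placeholders, as above):

```
# Change the password stored for an existing connection string
mkstore -wrl "wallet_path" -modifyCredential connection_string username new_password

# Remove a credential that is no longer needed
mkstore -wrl "wallet_path" -deleteCredential connection_string
```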

Oracle instantclient

On the machine running PHP, we need an Oracle client to connect to the database. There is usually no need to install the full Oracle client on a web server, so I prefer to stay light and use the Oracle instantclient. It's a lightweight version with no installation needed - and is more than enough for our needs.

Download the following Oracle instantclient files from the Oracle website:

  • Instant Client Package - Basic: All files required to run OCI, OCCI, and JDBC-OCI applications

  • Instant Client Package - SDK: Additional header files and an example makefile for developing Oracle applications with Instant Client

Then unzip both packages so that the sdk folder sits inside the basic instant client folder (e.g. /opt/oracle/instantclient_12_1/sdk).


Create the following symbolic links:

cd /opt/oracle/instantclient_12_1
ln -s libclntsh.so.12.1 libclntsh.so
ln -s libocci.so.12.1 libocci.so

Set the environment variable LD_LIBRARY_PATH:

export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_1:$LD_LIBRARY_PATH

Compile PHP with the oci8 extension enabled - for the instant client, the configure option takes the form --with-oci8=instantclient,/opt/oracle/instantclient_12_1.

Apache/PHP configuration

Transfer the wallet on the Apache web server machine, and create a file named tnsnames.ora inside the wallet directory with the following content:

connection_string =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = database_host)(PORT = database_port))
      (CONNECT_DATA = (SID = database_sid))
    )


  • connection_string = the connection string related to the credentials stored in the wallet
  • database_host = the database hostname
  • database_port = the database listen port
  • database_sid = the sid of the database
Create an entry for each credential stored in the wallet that the application will use.

Create a file named sqlnet.ora inside the wallet directory with the following content:

   WALLET_LOCATION =
     (SOURCE =
       (METHOD = FILE)
       (METHOD_DATA =
         (DIRECTORY = [wallet path])
       )
     )
   SQLNET.WALLET_OVERRIDE = TRUE


Note: The user running Apache needs to have read permission on the wallet files.

Configuration for mod_php:

Add the following environment variables to the apache startup:

export ORACLE_HOME=/opt/instantclient   
export LD_LIBRARY_PATH=/opt/instantclient   
export TNS_ADMIN=/opt/wallet  

I usually add these variables at the beginning of the apachectl startup script, but any other way is fine.

Stop Apache and start it again. The apachectl restart option doesn’t work in this case because the master process is not restarted, and so never sees the new variables.

Configuration for php-fpm

If you are using php-fpm, you have to add the following settings in your pool configuration section:

env[ORACLE_HOME] = /apps/instantclient
env[LD_LIBRARY_PATH] = /apps/instantclient
env[TNS_ADMIN] = /apps/httpd/conf/wallet

Restart php-fpm processes.

Check if everything works

Create a PHP file in the DocumentRoot of your web server to test the connection:

<?php
// Connect using the external credentials stored in the wallet
$conn = oci_connect("/", "", "[connection string]", null, OCI_CRED_EXT);

$statement = oci_parse($conn, 'select 1 from dual');
oci_execute($statement);
$row = oci_fetch_array($statement, OCI_ASSOC+OCI_RETURN_NULLS);
print_r($row);


And from the browser, call that page. Everything is working if the output is something like this:

Array ( [1] => 1 )

Security considerations

Keep in mind that the wallet security relies on OS file permissions. Any user with access to the wallet files has access to the database data.

Also keep in mind that with mod_php you can have only one wallet per Apache installation. This means that, in a shared environment, any application running under the same Apache can potentially access the data of the other applications once it discovers their connection strings.

With php-fpm you can have a different wallet and different configuration for each fpm pool.
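A sketch of what that per-pool separation looks like in the fpm configuration (the pool names and wallet paths here are hypothetical):

```
; Two fpm pools, each pointed at its own wallet directory
[app1]
env[TNS_ADMIN] = /apps/httpd/conf/wallet_app1

[app2]
env[TNS_ADMIN] = /apps/httpd/conf/wallet_app2
```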

Next Time...

Hopefully, the solution I found for this client will work for you, and in my next post, I'm going to continue on the same theme, but this time look at using the Oracle Wallet with Tomcat.

24 May 2016

JBoss EAP 7 - A First Look

by Brian Randell

In this post, Brian Randell takes a peek at the new JBoss EAP 7.0.0 release and gives his impression as a JBoss consultant and administrator. He'll dig deeper under the covers and throw a little light on some of the new features and enhancements in future posts, but for this first look he'll reveal what he sees when getting it up and running for the first time.

The first version of JBoss EAP 7 was released on 10th May 2016 (Red Hat JBoss EAP 7.0.0). It's based on WildFly 10 (http://wildfly.org/) and uses Java SE 1.8, implementing the Java EE 7 Full Platform and Web Profile standards. The full list of supported configurations is available at https://access.redhat.com/articles/2026253 (please note that access requires a Red Hat subscription).

Looking through the release notes, I could see a number of areas of immediate interest to me:

  • The replacement of HornetQ with ActiveMQ Artemis (https://activemq.apache.org/artemis/index.html)
  • The replacement of JBoss Web with Undertow (http://undertow.io/)
  • Ability to use JBoss as a Load Balancer
  • Server Suspend Mode
  • Offline Management CLI
  • Profile Hierarchies
  • Datasource Capacity policies
  • Port Reduction
  • Backwards compatibility with EAP 6 and some interoperability with EAP 5

There are many more enhancements and features listed in the release notes, and I am sure others will spring out at you as items you want to investigate further. Putting these aside for now, let’s get it installed.

When I'm looking at a new system, I like to dive in and get it running, then investigate it from a first look (where I concentrate on normal operation), through to a more detailed investigation on those areas that are of interest to me.

For my first look at JBoss EAP 7, I used an Amazon EC2 t2.medium tier shared host running Red Hat Linux 7.2 (Maipo) with 4GB RAM and 2 vCPUs. I downloaded the Oracle Java JDK 8u92 (http://www.oracle.com/technetwork/java/javase/downloads/index.html) and JBoss EAP 7.0.0 (http://developers.redhat.com/products/eap/download/) (requires Red Hat subscription) zip files and extracted them into the /opt/java and /opt/jboss directories respectively. I then created the users java and jboss and chown’d the respective files. I set up the JAVA_HOME environment variable and I was good to go.

Fundamentally, running JBoss EAP 7 is the same as running EAP 6: you install it in the same way and run it in a similar way. The only difference for me, running on RHEL 7.2, is that if you set up JBoss to run as a service you run the command without the ‘.sh’ of the script name. Placing the script jboss-eap7.sh in /etc/init.d/ and registering it through chkconfig, the service is run with:

service jboss-eap7 start

whereas on RHEL 6 you run it as:

service jboss-eap7.sh start

The first difference I noticed when running JBoss EAP 7 for the first time is that, as the libraries are based on WildFly rather than jboss-as, the logs show WFLY references rather than JBAS references. For those of us who search the logs for certain references, or have monitoring set up against them, this is a big change. For example, the JBoss start message is now logged under reference WFLYSRV0025 (whereas it used to be JBAS015874):

INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.0.0.GA (WildFly Core 2.1.2.Final-redhat-1) started in 4142ms - Started 306 of 591 services (386 services are lazy, passive or on-demand)
INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss EAP 6.4.0.GA (AS 7.5.0.Final-redhat-21) started in 3441ms - Started 192 of 229 services (68 services are lazy, passive or on-demand)

You will also notice here that even though I am running the same configuration file (standalone-full.xml), the new EAP 7 server starts a lot more services, which makes it start slower than EAP 6. On average (over 10 starts) EAP 7 took 4180ms, whereas EAP 6 took 3573ms.

We can also compare using standalone.xml: again EAP 7 starts a lot more services and so starts slower than EAP 6, with an average of 3289ms for EAP 7 and 2667ms for EAP 6.

INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: JBoss EAP 7.0.0.GA (WildFly Core 2.1.2.Final-redhat-1) started in 3227ms - Started 267 of 553 services (371 services are lazy, passive or on-demand)
INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss EAP 6.4.0.GA (AS 7.5.0.Final-redhat-21) started in 2652ms - Started 153 of 191 services (57 services are lazy, passive or on-demand)
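The rekeying of monitoring rules from JBAS to WFLY codes can be sketched with a quick grep - the sample lines below stand in for a real server.log:

```shell
# Sketch: monitoring patterns keyed on EAP 6 JBAS codes must be rekeyed to
# the WildFly-style WFLY codes in EAP 7. Sample lines stand in for server.log.
printf '%s\n' \
  'INFO  [org.jboss.as] WFLYSRV0025: JBoss EAP 7.0.0.GA started in 4142ms' \
  'INFO  [org.jboss.as] JBAS015874: JBoss EAP 6.4.0.GA started in 3441ms' > server.log

grep -c 'WFLYSRV0025' server.log   # counts EAP 7 start messages
grep -c 'JBAS015874' server.log    # the old EAP 6 pattern no longer matches EAP 7 lines
```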

The next difference I noticed was on the admin console where the layout has been changed. There are the same high level options as from the 6.4 console but when navigating into the sections the layout changes become more noticeable.

Figure 1 - JBoss EAP 7 Admin Console

Figure 2 - JBoss EAP 7 Subsystem Navigation

Figure 3 - JBoss EAP 7 Subsystem Settings

This unfortunately means more clicks to reach the same point you would have got to in 6.4, and as the settings take up the whole screen you have to click back before you can navigate elsewhere. On first look this could become a frustration when using the console.

When using the CLI there are some other differences to be seen. The default port for connection to the CLI has changed from 9999 to 9990. Looking at the port configuration you can see a limited range of ports configured in EAP 7. This is because the http and management ports are used for a variety of protocols.

You can see there are no management-native or messaging ports.
It is also worth noting that the default management-https port is now 9993 rather than 9443 as it was before.

There are also some new CLI commands that can be used, such as set and unset to assign variables, unalias so you can turn off a defined alias and connection-info to show details of the connection.

There are also some new CLI operations that can be used such as list-add, list-get, list-clear, list-remove, map-get, map-clear, map-put, map-remove and query which can add and set attributes to an entity. These aren’t very well documented and will need further investigation.

There are also suspend and resume operations: suspend allows the server to complete its in-flight tasks gracefully while refusing new requests, after which you can resume it to return to normal operation.
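A sketch of how these look from the CLI - the timeout value here is just an example:

```
[standalone@localhost:9990 /] :suspend(timeout=60)
[standalone@localhost:9990 /] :read-attribute(name=suspend-state)
[standalone@localhost:9990 /] :resume
```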

Future blog posts will delve into the technology changes, features and enhancements, but from an initial first look at JBoss EAP 7 and before we do any deep investigation there are some immediate differences that will need to be thought of and evaluated when converting production systems from EAP 6.


JBoss EAP 7 download (requires Red Hat subscription)

Red Hat EAP 7 – supported configurations (requires Red Hat subscription)

Undertow - http://undertow.io/

19 May 2016

JBoss Logging and Best Practice for EAP 6.4 (Part 3 of 3)

by Brian Randell

So far in this series of posts about JBoss logging and best practice, we have seen what JBoss EAP 6.4.0 provides out of the box and how you might go about changing that configuration. As you may realise by now, there are a lot of areas you can configure and customise. This post takes a look at what you need to be thinking about when deciding what you want to implement in a production environment.

The areas I want to look at in this, the third and final part of the series are:
  • What we need in our Production environments?
  • What we need to ask of our developers?

Production Implementation

For a JBoss deployment to be production ready from a logging perspective we need to think about several key areas:
  • What areas are the priority for us to monitor
  • What housekeeping should be in place
  • What can we do to troubleshoot issues when they arise

Log monitoring

For most organisations, monitoring solutions are in place that can be configured to connect to the server (usually through an agent), read the log, and alert on keywords such as ERROR and FATAL. You could also set up the monitoring solution to be more specific and alert only on certain phrases.

It therefore makes sense for any JBoss server that the log being monitored is a single log containing all messages for these log levels - and one that can be easily parsed. From an administrative point of view this is also what I would want to see: one log that contains everything I need to know about the current running of the system.

By default, when installing JBoss as a service we get two logs. We get a console log and a server log. The console log shows everything that has happened since the last restart, the server log shows everything that has happened. For me only one of these logs is required and it’s the server log.

This is the mainstay of your information about the system and should be the only one you need to worry about. So for me – I ignore and limit the information sent to the console log when running JBoss as a service, and concentrate on the server log.

Another thought here is to copy daily server logs to a central server. This can be useful if any trend analysis is required or if you are troubleshooting across a domain.

This may sound obvious, but as the monitoring will alert – the log needs to be clear of errors when you first start monitoring it in production. It is never sensible to start with errors already occurring.

Log housekeeping

If you do not have any log rotation or housekeeping and endlessly keep logs then eventually disk space will be an issue.

There is generally little point in keeping logs in production for more than 14 days, and often 7 days is enough. If you are monitoring the system effectively then the alerts will be seen and dealt with immediately. If any logs need to be kept for Problem Management or Root Cause Analysis scenarios then these can be moved away manually.

One thing to realise here is that if JBoss is running and you remove the active log file (moving it to an archive directory, perhaps), it won’t automatically be recreated. The best practice is to copy it and then empty it whilst in situ.
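That copy-then-truncate step can be sketched in shell - the file name is illustrative, and in production the file would be held open by the running JBoss process:

```shell
# Sketch: archive the live log without removing it, so the open file handle
# held by the server keeps writing to the same (now emptied) file.
LOG=server.log
echo "yesterday's entries" > "$LOG"    # stand-in for a real server log

cp "$LOG" "$LOG.$(date +%Y-%m-%d)"     # take the archive copy
: > "$LOG"                             # truncate the live file in place

wc -c < "$LOG"                         # size of the live log after truncation
```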

Luckily JBoss provides a number of different Log Handlers for us to use to make the housekeeping easy. There are several handlers that can be used to rotate the log on size or time. Now in 6.4 there is also a handler (Periodic Size) that can do either – and acts on whichever triggers the rotation first.

Log troubleshooting

If there is an issue on the server that we need to look into more closely then we have the ability to add specific log categories and raise the logging as we need it. This takes effect dynamically, so we can turn logging up when the system is exhibiting a problem and turn it down again when finished. This is particularly beneficial because swamping the logs with messages we don’t care about can cause performance headaches, and can grow the logs substantially enough to cause disk space issues.

We also have the potential here to log specific log categories to a different handler and hence a different log file so we can see our troubleshooting messages in a different file outside of the standard logging mechanism which then won’t interfere with normal monitoring.

Personally I like to troubleshoot against a separate debug log and have a Log Handler previously set up that I can utilise if and when required. This way you can place that log elsewhere, perhaps on a different file system or disk so it interferes less with the normal running of the system.

For this you would create a new Handler and use that handler for specific log categories when required.

See the examples in the previous blog in this series for how to create a handler and associate a log category with that handler.

For boot errors, EAP 6.4.0 has introduced a CLI command, read-boot-errors. It is a management command and can be used to monitor boot errors.


This allows a script to be used to see if any boot errors have occurred - particularly useful if you are starting up a number of servers at the same time.

Other Logging

As we have seen in the previous posts in this series, the Management Interface Logging is turned off by default.
I like this to be turned on. If you are running a large environment it provides another avenue for troubleshooting and auditing. It could be that a problem occurred due to the wrong CLI command being issued - either ad hoc at the time, or from a script run automatically. Being able to see all activity on the server around the time an issue occurred is invaluable.

Developer guidelines

When we talk about log levels (as defined previously) and the types of messages that should fall into each level, it’s the developers who often don’t adhere quite as strictly as they should.
I am not a developer, but as an administrator who is the first point of call when issues are flagged in the log, I want to look at the logs on JBoss when an application is running and ask these questions:

How noisy is the log?

Try the log at different log levels and see whether each level has what you would consider the right information for that level. For example, do INFO messages look like they should be INFO, or should they really be DEBUG?
I have seen many applications that are ‘noisy’, making the log virtually unreadable and diagnosis very difficult when issues occur.

Stack Traces

If Stack Traces are logged for an error – are they useful for the context of the error?

Stack traces can be large, so you don’t want too many of them cluttering your ability to read the log. You only want stack traces shown for ERROR level messages or at TRACE level (and potentially DEBUG, though I would not like to see them at this level either). For INFO level messages there should be no need for stack traces.

We also need to see whether we are getting multiple Stack Traces for the same error at different levels of the stack. One error should only need one Stack Trace.

And finally on stack traces: are they necessary anyway? Can the ERROR description define the issue well enough that you don’t need to see the entire stack trace?

Don’t be afraid to push these issues back to the developers to change. If it affects your ability to properly monitor and troubleshoot a production application then it isn’t production ready in my eyes.


Hopefully some aspects of this series of posts have given you pause for thought and helped you along your way for implementing a production logging configuration that provides an environment that is well monitored and has easier troubleshooting. JBoss has a lot of flexibility where monitoring is concerned and you can get lost in the plethora of options available.

My advice: keep it simple, straightforward and uncluttered. Let it work for you, not against you.


Part One | Part Two

12 May 2016

JBoss Logging and Best Practice for EAP 6.4 (Part 2 of 3)

By Brian Randell

Following on from Brian's previous post in the series, which showed you the default logging configuration for JBoss EAP 6.4.0, this post takes a look at how you can configure some of the core components. I'll be taking a standard common approach for the configuration purposes of this post and will leave more advanced configuration for future posts.

For this post we will primarily look at the configuration for a standalone deployment.


GC Log

The GC Log can be configured in the standalone.conf for standalone servers and in JVM properties for the domain servers.

For the standalone server these can be overridden as a whole by updating the JAVA_OPTS in the standalone.conf file.  (Note – you will need *all* the options you require)

The standalone.sh script checks for the presence of a ‘-verbose:gc’ entry in JAVA_OPTS.  So if this exists in the standalone.conf file then it will bypass the GC configuration in the standalone.sh.

An example additional line in the standalone.conf is :

# Specify options to pass to the Java VM.
if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
   JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
   JAVA_OPTS="$JAVA_OPTS -Djboss.modules.policy-permissions=true"
   JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/opt/jboss/jboss-eap-6.4/standalone/log/gctest.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading"
else
   echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi

Note: I have changed the name of the log to gctest.log.

We can then see these options shown in the process:

$ ps -ef | grep ja
jboss     4438  4355 16 10:42 pts/0    00:00:07 java -D[Standalone] -server -XX:+UseCompressedOops -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djboss.modules.policy-permissions=true -verbose:gc -Xloggc:/opt/jboss/jboss-eap-6.4/standalone/log/gctest.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading -Dorg.jboss.boot.log.file=/opt/jboss/jboss-eap-6.4/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/jboss-eap-6.4/standalone/configuration/logging.properties -jar /opt/jboss/jboss-eap-6.4/jboss-modules.jar -mp /opt/jboss/jboss-eap-6.4/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/jboss-eap-6.4 -Djboss.server.base.dir=/opt/jboss/jboss-eap-6.4/standalone

Boot Log

For this post, rather than show how to modify the boot logging, it is worth mentioning the new CLI command introduced in 6.4 – ‘read-boot-errors’.

This is part of the management core service and looks at the log, and reports back errors relating to the start of the server. This is very useful as it can be scripted using CLI to look at numerous servers and check them, pulling the information centrally.

To test this using the standalone server, I renamed the h2 directory so the server could not find the h2 module:

$ pwd
$ mv h2 h2old

I then started the JBoss server and ran the CLI command:

$ ./jboss-cli.sh --connect
[standalone@localhost:9999 /] /core-service=management:read-boot-errors
{
    "outcome" => "success",
    "result" => [
        {
            "failed-operation" => {
                "operation" => "add",
                "address" => [
                    ("subsystem" => "datasources"),
                    ("jdbc-driver" => "h2")
                ]
            },
            "failure-timestamp" => 1460370253333L,
            "failure-description" => "JBAS010441: Failed to load module for driver [com.h2database.h2]"
        },
        {
            "failed-operation" => {
                "operation" => "add",
                "address" => [
                    ("subsystem" => "datasources"),
                    ("data-source" => "ExampleDS")
                ]
            },
            "failure-timestamp" => 1460370254540L,
            "failure-description" => "{\"JBAS014771: Services with missing/unavailable dependencies\" => [\"jboss.data-source.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]\",\"jboss.driver-demander.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]\"]}",
            "services-missing-dependencies" => [
                "jboss.data-source.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]",
                "jboss.driver-demander.java:jboss/datasources/ExampleDS is missing [jboss.jdbc-driver.h2]"
            ]
        },
        {
            "failed-operation" => {
                "operation" => "enable",
                "address" => [
                    ("subsystem" => "datasources"),
                    ("data-source" => "ExampleDS")
                ]
            },
            "failure-timestamp" => 1460370254542L,
            "failure-description" => "{\"JBAS014879: One or more services were unable to start due to one or more indirect dependencies not being available.\" => {\"Services that were unable to start:\" => [\"jboss.data-source.reference-factory.ExampleDS\",\"jboss.naming.context.java.jboss.datasources.ExampleDS\"],\"Services that may be the cause:\" => [\"jboss.jdbc-driver.h2\"]}}",
            "missing-transitive-dependency-problems" => {
                "Services that were unable to start:" => [
                    "jboss.data-source.reference-factory.ExampleDS",
                    "jboss.naming.context.java.jboss.datasources.ExampleDS"
                ],
                "Services that may be the cause:" => ["jboss.jdbc-driver.h2"]
            }
        }
    ]
}
You can see the boot errors are shown and pinpoint the area you need to investigate.

Console Log

As mentioned in the previous post, the console log gets used by default when using the jboss-as-standalone.sh or jboss-as-domain.sh scripts.  The file is placed in the /var/log/jboss-as/ directory.

When setting up JBoss to run as a service you will use the jboss-as.conf file. The easiest way to change where the console log goes is to modify this file, which feeds the configuration into the jboss-as-standalone.sh and jboss-as-domain.sh scripts.

Edit the jboss-as.conf file and uncomment the JBOSS_CONSOLE_LOG configuration, and modify as appropriate.

In my example below I have uncommented the line and changed the filename to test.log.

# General configuration for the init.d scripts,
# not necessarily for JBoss AS itself.

# The username who should own the process.
#
# JBOSS_USER=jboss

# The amount of time to wait for startup
#
# STARTUP_WAIT=30

# The amount of time to wait for shutdown
#
# SHUTDOWN_WAIT=30

# Location to keep the console log
#
JBOSS_CONSOLE_LOG=/var/log/jboss-as/test.log

When I now stop and start the service, the new file appears in the directory alongside the old one.

# pwd
/var/log/jboss-as
# ll
total 16
-rw-r--r--. 1 root root 5679 Apr 11 12:28 console.log
-rw-r--r--. 1 root root 4776 Apr 11 12:35 test.log
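If you want to follow the console output as the server runs, you can tail the new file (the path follows the configuration above):

```shell
# Follow the new console log as the server writes to it
tail -f /var/log/jboss-as/test.log
```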


Log Handlers

There are seven types of handler you can create, and you can create multiple handlers of each type. For this example we will create a new ‘Size’ handler type. We will do this through the CLI and see the results in the Console.

To start, our server is running and we have connected using the CLI. To add a new handler we use the add command with the new handler name.  For the most part we will keep the default values:

[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:add(file={"path"=>"newsize.log", "relative-to"=>"jboss.server.log.dir"},level="DEBUG",enabled=true, append=false, rotate-size=5m,max-backup-index=10,rotate-on-boot=true,suffix=".yyyy-MM-dd-HH")
{"outcome" => "success"}

We have created a handler called ‘NEWSIZE’ that will write to the file ‘newsize.log’ at DEBUG level, rotate when the file reaches 5 MB, and keep up to 10 backup files.
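As a side note, the same operation can be run non-interactively, which is handy for scripting. This is a sketch assuming a default standalone install with the management interface on port 9999; adjust JBOSS_HOME and the controller address for your environment:

```shell
# Non-interactive form of the same add operation
# (JBOSS_HOME and controller address are assumptions for a default install)
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9999 \
  --command='/subsystem=logging/size-rotating-file-handler=NEWSIZE:add(file={"path"=>"newsize.log","relative-to"=>"jboss.server.log.dir"},level="DEBUG",enabled=true,append=false,rotate-size=5m,max-backup-index=10,rotate-on-boot=true,suffix=".yyyy-MM-dd-HH")'
```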

We can check the values for the handler we have created:

[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:read-resource
{
    "outcome" => "success",
    "result" => {
        "append" => false,
        "autoflush" => true,
        "enabled" => true,
        "encoding" => undefined,
        "file" => {
            "path" => "newsize.log",
            "relative-to" => "jboss.server.log.dir"
        },
        "filter" => undefined,
        "filter-spec" => undefined,
        "formatter" => "%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n",
        "level" => "DEBUG",
        "max-backup-index" => 10,
        "name" => "NEWSIZE",
        "named-formatter" => undefined,
        "rotate-on-boot" => true,
        "rotate-size" => "5m",
        "suffix" => ".yyyy-MM-dd-HH"
    }
}

In the Console we can see the handler added:

We can see our new log created on the file system:

[root@localhost init.d]# ll /opt/jboss/jboss-eap-6.4/standalone/log/
total 156
-rw-rw-r--. 1 jboss jboss   1669 Apr 11 09:50 backupgc.log.current
-rw-rw-r--. 1 jboss jboss   1500 Apr 11 10:05 gc.log.0.current
-rw-rw-r--. 1 jboss jboss   1494 Apr 11 12:35 gctest.log.0.current
-rw-r--r--. 1 jboss jboss      0 Apr 11 12:59 newsize.log
-rw-rw-r--. 1 jboss jboss 133362 Apr 11 12:35 server.log
-rw-rw-r--. 1 jboss jboss  10419 Feb  4 19:50 server.log.2016-02-04

If we want to modify an entry we can use the write-attribute command. So if we want to change the rotation size to 10 MB we can use the following:

[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:write-attribute(name=rotate-size,value=10m)
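To confirm the change took effect, the read-attribute command can be used (output shown in the shape the CLI typically returns):

```shell
[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:read-attribute(name=rotate-size)
{
    "outcome" => "success",
    "result" => "10m"
}
```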

If we want to remove the handler entirely, we can use the remove command:

[standalone@localhost:9999 /] /subsystem=logging/size-rotating-file-handler=NEWSIZE:remove

Log Categories

You can define a log category against a particular handler and message level. This is useful when troubleshooting: if you know the area you want to analyse, you can raise the logging level just for that area.

For this example we will add a log category for org.apache.coyote and attach it to the NEWSIZE handler we have just created.

To add a new log category we need to use the add command with the new category:

[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:add(category=org.apache.coyote,level=DEBUG,handlers=[NEWSIZE])
{"outcome" => "success"}

We can check the new category:

[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:read-resource
{
    "outcome" => "success",
    "result" => {
        "category" => "org.apache.coyote",
        "filter" => undefined,
        "filter-spec" => undefined,
        "handlers" => ["NEWSIZE"],
        "level" => "DEBUG",
        "use-parent-handlers" => true
    }
}
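Note the use-parent-handlers attribute: while it is true, messages for this category are also passed up to the root logger's handlers, so they still appear in server.log. If you want org.apache.coyote messages to go only to the NEWSIZE handler, it can be switched off:

```shell
[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:write-attribute(name=use-parent-handlers,value=false)
```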

We can see this new category in the console:

If we want to modify an entry we can use the write-attribute command.  So if we want to change the log level we can use the following:

[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:write-attribute(name=level, value=TRACE)

If we want to remove the category entirely we can use the remove command:

[standalone@localhost:9999 /] /subsystem=logging/logger=org.apache.coyote:remove

CLI Logging

To log activity performed through the CLI and the Console, you can enable Management Interface audit logging with a single CLI command.

[standalone@localhost:9999 /] /core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)

This produces a management audit log file at $JBOSS_HOME/standalone/data/audit-log.log.
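The same attribute can be flipped back when you are finished, since leaving audit logging enabled grows this file with every management operation:

```shell
[standalone@localhost:9999 /] /core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=false)
```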

You can also modify the $JBOSS_HOME/bin/jboss-cli-logging.properties file for just the CLI logging.  Change the log level to INFO and uncomment the handler.

# Additional logger names to configure (root logger is always configured)
loggers=org.jboss.as.cli
# assign a lower level to enable CLI logging
logger.org.jboss.as.cli.level=INFO

# Root logger level
logger.level=INFO
# Root logger handlers
# uncomment to enable logging to the file
logger.handlers=FILE

Once this is done and the CLI restarted, a jboss-cli.log file will be created containing the CLI logging information.

Advanced Configuration

As mentioned earlier, there are a number of more advanced logging configurations that can be achieved. As these are less common, they have been left for future blog posts:
  • Logging Profiles and their Configuration
  • SysLog Handlers
  • Log Category Filtering
  • Asynchronous logging


To summarise this blog series so far: We have seen what the default logging configuration is in JBoss EAP 6.4.0 and now know how to reconfigure the most common aspects for different types of logging.

Part three will look at the recommendations for which configuration changes you should make.