20 May 2015

GeeCon 2015 Impressions

So, GeeCon has finally happened and like all good things it flew by. The entire organising committee
did a fantastic job and their passion for the conference really showed through.

They took great care of all us speakers and ensured that we had everything we needed to deliver great sessions. The team cannot be thanked enough for their efforts.

On to my session: a big thanks to all who attended, and for your participation in my demo.
It can be challenging to build a demo that is both relevant to the topic and involves the audience, but I think we made a good job of it. Thanks to Sanne Grinovero for the demo ideas and for helping hack it together. In the end those late nights paid off!

My talk and demo were about running queries on Infinispan. We built a mock election webapp which we ran on WildFly 9 on EC2. The specific query features we looked at were applying facets to result sets and filtering them further. Once the videos and slides are up we will blog and tweet about them, so that anyone interested can watch the session again.

The two big buzzwords around GeeCon this year were reactive and microservices. The Typesafe crew made themselves known to all and got plenty of buzz around their Scala tech. Sam Newman had a great session and packed his room completely.


The great thing about GeeCon is that everyone is incredibly passionate about their trade. All of the sessions I attended were of a high standard and I really hope that all attendees went home happy. After all, we do these talks for the community (or the greater good), so ensuring that attendees get some value is of paramount importance to us as speakers.

Thank you to all who came, who helped, and who put months of toil into making GeeCon a success again.

Hopefully we will see you all again next year!



Navin Surtani, Expert Support Consultant

14 May 2015

MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 21

Welcome to another issue of Middleware Insight – a newsletter that brings you only the most recent, most important and most interesting news from the middleware industry.

Apologies for a slight delay in delivering this issue - as you can see, we've used the time to upgrade and revamp our newsletter so that you can now read the news in a better, clearer and more user-friendly format. From now on, a full version of Middleware Insight will be available on a separate landing page, to make sure you get the most out of every news piece we've carefully picked for you.
 
If you have any questions, feedback or suggestions about the newsletter – please don’t hesitate to contact us at info@c2b2.co.uk.
 
Enjoy!
See Full Newsletter Here


JavaEE Applications Supercharged: Using JCache with Payara
Payara’s latest Jenkins builds now have Hazelcast built in (disabled by default), so there’s no separate installation required. Steve Millidge teamed up with Hazelcast to demonstrate the new feature in action.
Payara
Java 9 Release Date Announced
With the JDK feature proposal process long over, the cement has dried on official plans for Java 9 features. The Java team is now committing to a tight release schedule for JDK 9 over the next 15 months.
Jaxenter
NetBeans Day is coming to the UK
29 May 2015, London. See a wide range of NetBeans-related talks for beginners and experts alike.
Read More
Multiple JAX-RS URIs in one WAR
Read more on Adam Bien's blog.
Adam Bien Blog
Performance Tuning Apps with WildFly AS
Watch the latest London JBoss User Group event video - presentation by Jeremy Whiting.
London JBUG April 2015
Infinispan 7.2.0.Final is out!
This release contains bugfixes and optimizations to make your application faster. 
Infinispan
Managing a JavaEE App Server with Chef
A recipe is written using a Ruby-based DSL that describes how to install and configure software on a host.
Chef
Microservices - SOA and the App Service Minibus
What are microservices, and why is everyone so excited by them? 
C2B2 Blog - SOA
Oracle SOA Suite 12c - Free Webinars
What you need to know, planning your upgrade, post upgrade advice and more!
SOA 12c Webinars
Call it 'microservices,' but it's still SOA
SearchSOA
Burying the SOA name, but not the principles
TechTarget
Dominika is the Marketing Manager at C2B2, ‘Middleware Insight’ newsletter editor and the main organiser of London JBoss User Group and Java EE & GlassFish User Group.
Dominika Tasarz
Marketing Manager, C2B2 Consulting


8 May 2015

Debugging JBoss Application Performance Using Java SE 8 (Hotspot) Utilities


In this short video Dave Winters, C2B2 Senior Middleware Consultant, gives an overview of the different utilities that come with the HotSpot JVM in Java SE 8 and can be useful for monitoring and debugging JBoss application performance issues.
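As a quick reference, the HotSpot utilities discussed ship with the JDK in $JAVA_HOME/bin and are driven from a shell; a typical diagnostic session against a JBoss process might look like the sketch below (the PID 12345 and the sample interval are illustrative):

```shell
# List running JVMs and their main classes to find the JBoss process ID
jps -lv

# Sample garbage collection utilisation every 5 seconds for PID 12345
jstat -gcutil 12345 5000

# Dump all thread stacks, e.g. when investigating a hang or high CPU
jstack 12345 > threads.txt

# Take a binary heap dump for offline analysis in a tool such as VisualVM
jmap -dump:format=b,file=heap.hprof 12345
```

Each of these attaches to an already-running JVM, so they can be used on a live JBoss instance without a restart.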




22 April 2015

JBoss EAP 6 CLI

In JBoss EAP 6 and WildFly, managing the server is a challenge. Especially with large teams, it’s important to know that everyone is working with the same server configuration, even when not all of us have access to tools like Docker or Vagrant yet.

Enter the JBoss CLI: a script found at %JBOSS_HOME%\bin\jboss-cli.bat (or jboss-cli.sh on Linux) which simplifies this task by providing a consistent, scriptable way to keep configurations identical across development teams and production environments.
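For illustration, a first session with the CLI might look like the following (the --connect and --file options are standard CLI options; the script name is made up for the example):

```shell
# Start an interactive session against a running server
$JBOSS_HOME/bin/jboss-cli.sh --connect

# Inside the CLI, confirm the server is up:
#   :read-attribute(name=server-state)

# Or run a shared script of CLI commands non-interactively, so every
# team member and environment applies exactly the same configuration
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=team-config.cli
```

Keeping such scripts in version control alongside the application is what makes the "same configuration everywhere" goal practical.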

8 April 2015

Microservices - SOA and the Application Service Minibus

If you’re working in software architecture or development, there’s one phrase you will have heard repeated over and over for the last six months and that is “Microservices”. What are microservices, and why is everyone so excited by them? To understand this, let’s rewind the clock five years and take a look at the buzzword that everyone was using in 2010 – “Service Oriented Architecture”.

SOA is a set of principles that suggests breaking down your business into a number of services, which can be orchestrated together to provide end to end business processes. These services have defined interfaces and are self-contained, which makes them easily replaceable and scalable. This approach is hardly anything new – in 2005 we were talking about the same ideas as “Enterprise Application Integration”.

The concept of having a big problem and breaking it down into a larger number of smaller problems – each with a well-defined scope – has been “common sense” since at least the times of ancient Greek civilisation. In 2015 however, people will tell you that SOA failed as an approach, that it is irrelevant and “monolithic.” So what went wrong?

Read the full article by Matt Brasier, C2B2 Principal Consultant on Voxxed.com


26 March 2015

'We strive to achieve excellence' - C2B2 Tech days

At C2B2, one of the values that we conduct our business by is excellence.

‘We strive to achieve excellence in all areas of our expertise and deliver world class technical and customer service excellence.’ – And we’re not lying! 

Every few months we hold a Tech day at our head office here in Worcestershire, which gives all of our technical consultants a chance to get together and do what they do best: solve technical problems!
Our most recent Tech day was led by one of our Expert Support Consultants, David Winters. 

What were you trying to achieve at March’s Tech day David?


The overall technical goal of this Tech day was to attempt to integrate ActiveMQ 5.11.0 (http://activemq.apache.org/) with the latest Payara release (http://payara.co.uk/upstream_builds) and to test that the setup worked as anticipated.

How did you go about achieving this goal?


We split into two teams and set out to achieve the following:

• Install the latest Payara nightly build on a standalone Amazon EC2 instance.
• Install ActiveMQ 5.11 on a separate Amazon EC2 instance.
• Download the generic JCA adapter 2.1 from https://genericjmsra.java.net/, then deploy and configure the JCA adapter on Payara so that Payara could use ActiveMQ as a JMS provider.
• Configure the Amazon EC2 firewall rules so that the Payara and ActiveMQ instances could communicate on the relevant ports.
• To verify that Payara and ActiveMQ were installed and configured correctly, create a test JMS client to send messages to a test queue hosted on ActiveMQ, and create and deploy a simple message-driven bean application on Payara. When messages were sent to the test queue hosted on ActiveMQ, the message listener associated with the message-driven bean application would process these messages.

What was the outcome?


After some initial obstacles (we had not copied all the required ActiveMQ jar files onto Payara's classpath), we managed to run some basic tests successfully. We were able to send test JMS messages from a remote JMS client to a queue running on ActiveMQ 5.11, which then triggered the message-driven bean deployed on Payara to process the messages correctly.


So what do you do in your day-to-day role at C2B2?


I am an Expert Support Consultant at C2B2 and my main responsibility is to ensure that all of our customers' middleware environments are running smoothly at all times. Any problem, slowdown or outage can have a disastrous impact on a business, so we have to make sure that doesn't happen!
I work closely with our Senior Consultants out on customer sites to make sure that any issues are fixed as quickly as possible so that customers can continue with the day-to-day running of their business. 

If you would like to read more about David’s expertise, take a look at some of his previous blog posts. 


Configuring JBoss management authentication with LDAP over SSL
JBatch on Payara 4.1.151 now supports 5 different database types
New features and changes in BPM 12c







17 March 2015

Purging data from Oracle SOA Suite 11g

Part1: How can I purge data from Oracle SOA Suite 11g (PS6 11.1.1.7) using the purge script provided by Oracle?


Introduction


This blog will explain how to purge (remove unwanted) data within Oracle SOA Suite 11g (PS6 11.1.1.7).

The series of blogs will cover the following:
  • Part 1: How can I purge data from Oracle SOA Suite 11g (PS6 11.1.1.7) using the purge script provided by Oracle? 
  • How does Oracle SOA Suite 11g (PS6 11.1.1.7) store data? 
  • What data does Oracle SOA Suite 11g (PS6 11.1.1.7) store? 
  • Why do you need to purge Oracle SOA Suite 11g (PS6 11.1.1.7) data? 
  • What are the purging options available for Oracle SOA Suite 11g (PS6 11.1.1.7)?
  • Which data will be purged by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script? 
  • List of composite instance states that will be considered for purging by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script 
  • How to install the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script? 
  • How to execute the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script? 
  • What is Looped purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)? 
  • What is Parallel purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)? 
  • Description of parameters used by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script 
  • Example 1: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script for all composites 
  • Example 2: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script for a specific composite

Oracle SOA Suite 11g (PS6 11.1.1.7) data

How does Oracle SOA Suite 11g (PS6 11.1.1.7) store data?
SOA Suite uses a database schema called SOAINFRA (collection of database objects such as tables, views, procedures, functions etc.) to store data required for the running of SOA Suite applications. The SOAINFRA (SOA Infrastructure) schema is also referred to as the ‘dehydration store’ acting as the persistence layer for capturing SOA Suite data.

What data does Oracle SOA Suite 11g (PS6 11.1.1.7) store?

Composite instances utilising the SOA Suite Service Engines (BPEL, mediator, human task, rules, BPM, OSB, EDN etc.) will write data to tables residing within the SOAINFRA schema. Each of the engines will either write data to specific engine tables (e.g. the CUBE_INSTANCE table is used solely by the BPEL engine) or common tables that are shared by the SOA Suite engines such as the AUDIT_TRAIL table.

A few examples of the type of data stored within the SOAINFRA schema:
  • Message payload (e.g. input, output) 
  • Scope (e.g. variables)
  • Auditing (e.g. data flow timestamps) 
  • Faults 
  • Deferred (messages that can be recovered) 
  • Metrics 

Why do you need to purge Oracle SOA Suite 11g (PS6 11.1.1.7) data?

Data within the Oracle SOA Suite database can grow to substantial levels in a short space of time. Payload sizes and data volumes have an impact on available disk space, which in turn affects the performance of SOA Suite: for example, the EM console can become slow to navigate, an increasing number of messages can become stuck or require recovery, and JTA transaction problems can appear.
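As a quick way to see where that growth is going, the largest segments in the dehydration store can be listed from the database. This is a sketch using Oracle's standard DBA_SEGMENTS view, run as a suitably privileged user, with DEV_SOAINFRA as an example schema owner:

```sql
-- Show SOAINFRA segments ordered by size, largest first (sizes in MB)
SELECT segment_name, segment_type,
       ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
  FROM dba_segments
 WHERE owner = 'DEV_SOAINFRA'
 GROUP BY segment_name, segment_type
 ORDER BY size_mb DESC;
```

Tables such as AUDIT_TRAIL and CUBE_INSTANCE typically appear near the top of such a listing on a busy system.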

Purging itself can become challenging if the data has not been maintained, due to the large number of composite instances. Therefore, establishing a purge strategy and implementing it on a regular basis will help maintain the health of SOA Suite and keep the environment running efficiently.


What are the purging options available for Oracle SOA Suite 11g (PS6 11.1.1.7)?

Oracle provides three options for purging Oracle SOA Suite 11g data:
  • EM Console: within the Enterprise Manager console, 'Delete with Options' can be used to manually delete many instances at once; however, this may lead to transaction timeouts and is not recommended for large volumes. 
  • Purge Script: This is the process of deleting instances that are no longer required using stored procedures that are provided with Oracle SOA Suite 11g out of the box. 
  • Partitioning: instances are segregated based on user-defined criteria within the database; when a partition is no longer required it is dropped, freeing the disk space.

Oracle SOA Suite 11g (PS6 11.1.1.7) purge script


Which data will be purged by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

The purge script will delete composite instances that are in the following states:

Completed
Faulted
Terminated by user
Stale
Unknown

The purge script will NOT delete composite instances that are in the following states:

Running (in-flight)
Suspended
Pending Recovery



How to install the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?
The following details will be required:

  • Database host details:
    - hostname (IP address)
    - username
    - password
  • SOA Database schema details:
    - prefix
    - password
  • Full path of the SOA Suite home folder
  • Full path of the directory where the Oracle purge script will write log information to (a folder on the database host) 
‘DEV’ is the SOAINFRA schema prefix used in the examples below, so the schema user appears as DEV_SOAINFRA.

a. Log into the Database host server.

b. Connect to the database as administrator using SQL*Plus:
sqlplus / as sysdba
c. Grant privileges to the soainfra (database) user that will be executing the scripts:
GRANT EXECUTE ON DBMS_LOCK TO DEV_SOAINFRA;
GRANT CREATE JOB TO DEV_SOAINFRA;
GRANT CREATE EXTERNAL JOB TO DEV_SOAINFRA;
d. Exit SQL*Plus and go to the location of the Oracle purge script:
exit
$cd /rcu/integration/soainfra/sql/soa_purge/
e. Connect to the database as the soainfra user using SQL*Plus:
sqlplus DEV_SOAINFRA/<password>

@soa_purge_scripts.sql

Procedure created.
Function created.
Type created.
Type body created.
PL/SQL procedure successfully completed.
Package created.
Package body created.
f. Exit SQL*Plus and create a directory where the log files (generated by the Oracle purge script) should be written to:
exit
$mkdir -p /PurgeLogs
g. Connect to the database with SQL*Plus as SYSDBA and declare the directory:
sqlplus / as sysdba

CREATE OR REPLACE DIRECTORY SOA_PURGE_DIR AS '/PurgeLogs';

GRANT READ, WRITE ON DIRECTORY SOA_PURGE_DIR TO DEV_SOAINFRA;

All the database objects required for purging data using the Oracle purge script are now loaded into the SOAINFRA schema ready for use.

How to execute the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

There are two options for running the purge script:
  • Looped
  • Parallel 
What is Looped purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)?

Looped purge is a single threaded PL/SQL script that will iterate through the SOAINFRA tables and delete instances matching the parameters specified.

What is Parallel purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)?

Parallel purge is essentially the same as the looped purge, but is designed to be more efficient: it uses the dbms_scheduler package to spawn multiple purge jobs, each working on a distinct subset of the data. Two more parameters can be specified in addition to the ones used by the looped purge. Parallel purging is designed for large data volumes hosted on high-end database nodes with multiple CPUs and a good I/O subsystem, and it should be run in a maintenance window as it requires a lot of resources.


Example 1: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script to purge data for all composites

We are required to delete all composite instances which were created between 1st June 2010 and 30th June 2010. In addition, there is a requirement not to delete instances that have been modified after 30th June 2010. The script must stop running after an hour, as business hours resume shortly afterwards.

min_creation_date = 1st June 2010
max_creation_date = 30 June 2010
retention_period = 1st July 2010

The above will in effect delete all "composite instances" where the created time of the instance is between 1st June 2010 and 30 June 2010 and the modified date of the BPEL instances is less than 1st July 2010.

a. Looped
DECLARE
  max_creation_date timestamp;
  min_creation_date timestamp;
  batch_size integer;
  max_runtime integer;
  retention_period timestamp;
BEGIN
  min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
  max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
  max_runtime := 60;
  retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
  batch_size := 10000;

  soa.delete_instances(
    min_creation_date => min_creation_date,
    max_creation_date => max_creation_date,
    batch_size => batch_size,
    max_runtime => max_runtime,
    retention_period => retention_period);
END;
/
b. Parallel
DECLARE
  max_creation_date timestamp;
  min_creation_date timestamp;
  batch_size integer;
  max_runtime integer;
  retention_period timestamp;
  DOP integer;
  max_count integer;
  purge_partitioned_component boolean;
BEGIN
  min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
  max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
  max_runtime := 60;
  retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
  batch_size := 10000;
  DOP := 3;
  max_count := 1000000;
  purge_partitioned_component := false;

  soa.delete_instances_in_parallel(
    min_creation_date => min_creation_date,
    max_creation_date => max_creation_date,
    batch_size => batch_size,
    max_runtime => max_runtime,
    retention_period => retention_period,
    DOP => DOP,
    max_count => max_count,
    purge_partitioned_component => purge_partitioned_component);
END;
/


Example 2: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script to purge data from a specific composite

Same as the Example 1 scenario, but with an additional requirement of only purging data from the composite named OrderBookingComposite. No other composite data should be purged.

Composite details can be gathered by querying the COMPOSITE_INSTANCE table within the SOAINFRA schema. The column named COMPOSITE_DN (distinguished name) holds the details required by the purge script:

Format: <soa_partition name>/<composite name>!<composite_revision>
Example: default/OrderBookingComposite!1.0
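For example, a query along these lines (against the table and column named above) would confirm the exact DN values before purging:

```sql
-- List the distinct DNs recorded for the composite to be purged
SELECT DISTINCT composite_dn
  FROM composite_instance
 WHERE composite_dn LIKE 'default/OrderBookingComposite%';
```

The partition, name and revision returned here map directly onto the soa_partition_name, composite_name and composite_revision parameters used below.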

a. Looped
DECLARE
  min_creation_date timestamp;
  max_creation_date timestamp;
  batch_size number;
  max_runtime number;
  retention_period timestamp;
  purge_partitioned_component boolean;
  composite_name varchar2(200);
  composite_revision varchar2(200);
  soa_partition_name varchar2(200);
BEGIN
  min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
  max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
  max_runtime := 60;
  retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
  batch_size := 10000;
  purge_partitioned_component := true;
  composite_name := 'OrderBookingComposite';
  composite_revision := '1.0';
  soa_partition_name := 'default';

  soa.delete_instances(
    min_creation_date => min_creation_date,
    max_creation_date => max_creation_date,
    batch_size => batch_size,
    max_runtime => max_runtime,
    retention_period => retention_period,
    purge_partitioned_component => purge_partitioned_component,
    composite_name => composite_name,
    composite_revision => composite_revision,
    soa_partition_name => soa_partition_name);
END;
/
b. Parallel
DECLARE
  min_creation_date timestamp;
  max_creation_date timestamp;
  batch_size number;
  max_runtime number;
  retention_period timestamp;
  DOP integer;
  max_count integer;
  purge_partitioned_component boolean;
  composite_name varchar2(200);
  composite_revision varchar2(200);
  soa_partition_name varchar2(200);
BEGIN
  min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
  max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
  max_runtime := 60;
  retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
  batch_size := 10000;
  DOP := 3;
  max_count := 1000000;
  purge_partitioned_component := true;
  composite_name := 'OrderBookingComposite';
  composite_revision := '1.0';
  soa_partition_name := 'default';

  soa.delete_instances_in_parallel(
    min_creation_date => min_creation_date,
    max_creation_date => max_creation_date,
    batch_size => batch_size,
    max_runtime => max_runtime,
    retention_period => retention_period,
    DOP => DOP,
    max_count => max_count,
    purge_partitioned_component => purge_partitioned_component,
    composite_name => composite_name,
    composite_revision => composite_revision,
    soa_partition_name => soa_partition_name);
END;
/

Conclusion

This blog has provided a basic understanding of the purge script contained within Oracle SOA Suite 11g (PS6 11.1.1.7).

A long-term purging strategy needs to be implemented, and doing so requires a good understanding of how the purge script works, along with an awareness of the issues related to the script.

Therefore, leading on from Part 1, there will be a few more blogs covering the following:
  • Part 2: How does the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script work?
  • Part 3: How to establish a long term purge strategy for Oracle SOA Suite 11g (PS6 11.1.1.7)
Irfan Suleman, C2B2 Senior Consultant 

9 March 2015

How to Configure a Simple JBoss Cluster in Domain Mode

Clustering is a very important thing to master for any serious user of an application server. Clustering allows for high availability by making your application available on secondary servers when the primary instance is down; it lets you scale up or out by increasing the server density on a host or by adding servers on other hosts; and it can even increase performance through effective load balancing between servers based on their respective hardware.

Andy Overton has already covered how to set up a cluster of servers in standalone mode fronted by mod_cluster for load balancing, so in this post I'll cover clustering in domain mode. I won't rehash the mod_cluster settings, so this will just cover the setup of a domain controller on one host, and the host controller and server instances on another host.

To follow along with this blog, you'll need to download either JBoss EAP 6.x or WildFly. I'll be using WildFly 8.2 on Xubuntu 14.04. I'll be using $WF_HOME to refer to your WildFly home directory.
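As a rough preview of the shape of the setup: in domain mode both hosts run domain.sh rather than standalone.sh, each with a different host configuration (WildFly ships example host-master.xml and host-slave.xml configurations; the addresses below are illustrative):

```shell
# Host 1: start the domain controller, binding its management interface
$WF_HOME/bin/domain.sh --host-config=host-master.xml \
    -Djboss.bind.address.management=192.168.0.1

# Host 2: start a host controller that registers with the domain
# controller and manages the local server instances
$WF_HOME/bin/domain.sh --host-config=host-slave.xml \
    -Djboss.domain.master.address=192.168.0.1
```

We'll flesh out the host and domain configuration files behind these two commands in the rest of the post.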

4 March 2015

Installing Weblogic with Chef


In my previous blog I discussed using Chef to deploy Weblogic SOA Suite; in this blog I will show you how to create a simple Weblogic cluster with two managed servers on a virtual machine using Chef.  This solution uses a Chef cookbook named weblogic, which contains recipes and templates, plus an environment and roles to model the infrastructure as code.  Two recipes have been created: one to install the Weblogic binaries, ‘install-wls.rb’, and a second, ‘create-domain.rb’, which creates the Weblogic domain with an Admin Server and two managed servers in a cluster.  The recipes read attributes defined in the environment ‘weblogic_dev’.

The Weblogic jar installer was downloaded from Oracle and stored in a Nexus repository.  The source files for the Cookbooks, Environment and Roles were created in the local Chef repository on the Chef Workstation in the following folder structure, and then uploaded to the Chef Server.


The source for the recipes, templates and environment is described below:

Recipes


install-wls.rb

This recipe installs the Weblogic binaries by:

Creating the user oracle, the group oinstaller and the Weblogic home directory (2)
Downloading the Weblogic installer jar file from a Nexus repository (3)
Creating the Oracle Inventory and installer response files from the templates ora_inventory.rsp.erb and wls-12c.rsp.erb (4).  The templates have placeholders which are substituted with attributes read from the Node object.
Executing the Weblogic jar file in silent mode, referencing the two response files, by running (5):

java -jar weblogic-12.1.3.jar -silent -responseFile responsefile -invPtrLoc OraInventoryFile
The source for the recipe is listed below:

# (1) Get the attributes from the Node object on the server that the recipe is run on (Defined in environment)
os_user = node['weblogic']['os_user']
os_installer_group = node['weblogic']['os_installer_group']
user_home = File.join("/home", os_user)
nexus_url = node['weblogic']['nexus_url']
repository = node['weblogic']['repository']
group_id = node['weblogic']['group_id']
artifact_id = node['weblogic']['artifact_id']
version = node['weblogic']['version']
packaging = node['weblogic']['packaging']

# (2) Create the user/group used to install Weblogic and the WLS home directory
group os_installer_group do
  action :create
  append true
end

user os_user do
  supports :manage_home => true
  comment "Oracle user"
  gid os_installer_group
  home user_home
  shell "/bin/bash"
end

# Create FMW Directory
directory node['weblogic']['oracle_home'] do
  owner os_user
  group os_installer_group
  recursive true
  action :create
end

# (3) Download the Weblogic installer from Nexus
installer_jar = File.join(user_home, "#{artifact_id}-#{version}.#{packaging}")
remote_file "download Oracle Weblogic Server" do
  #source "#{nexus_url}?r=#{repository}&g=#{group_id}&a=#{artifact_id}&v=#{version}&p=#{packaging}"
  source "file:///mnt/hgfs/vmwareData/Alan/SOA/fmw_12.1.3.0.0_wls.jar"
  path installer_jar
  owner os_user
  group os_installer_group
end

# (4) Create OraInventory and Installer response files to allow silent install
ora_inventory_directory = File.join(user_home, "oraInventory")
ora_inventory_file = File.join(ora_inventory_directory, "ora_inventory.rsp")

directory ora_inventory_directory do
  owner os_user
  group os_installer_group
  recursive true
  action :create
end

template ora_inventory_file do
  source "ora_inventory.rsp.erb"
  owner os_user
  group os_installer_group
  variables(
    ora_inventory_directory: ora_inventory_directory,
    install_group: os_installer_group
  )
end

# Create Response File
response_file = File.join(user_home, "wls-12c.rsp")
oracle_home = node['weblogic']['oracle_home']

template response_file do
  source "wls-12c.rsp.erb"
  variables(
    oracle_home: oracle_home
  )
  owner os_user
  group os_installer_group
end

# (5) Install Weblogic Server by executing the jar command with the appropriate command line options for a silent install
install_command = "#{node['weblogic']['java_home']}/bin/java -jar #{installer_jar} -silent -responseFile #{response_file} -invPtrLoc #{ora_inventory_file}"

execute install_command do
  cwd user_home
  user os_user
  group os_installer_group
  action :run
  creates "#{oracle_home}/oraInst.loc"
end

create-domain.rb

The main purpose of this recipe is to create a WLST script from the template create_domain.py.erb that configures the Weblogic domain offline.  The template defines the following WLST helper functions to create the respective Weblogic components to define the cluster:

createManagedServer(servername,  machinename, address, port)
createAdminServer(servername, address, port)
createMachine(machinename, address, port)
createCluster(clustername, address, port)
assignCluster(clustername, server)

The main function, createCustomDomain, calls the above functions to create and configure the domain using attributes defined in the environment as JSON objects.  A Weblogic domain requires one Admin Server and can have multiple clusters, each of which can contain one or more managed servers.  The managed servers can be located on one machine or across multiple machines, with each machine requiring a Node Manager to be configured.  The configuration for the clusters, machines and managed servers is defined in JSON object arrays in the environment.  The respective array for the clusters, machines and managed servers is passed into the template, and the appropriate block of code iterates through each item in the array to generate a call to the helper function with the correct values passed as arguments.  The code snippet below, with some code removed for clarity, shows how the call to the createMachine helper function is generated by iterating through the items in the machines object array.
def createCustomDomain():
    print 'Creating Domain... ' + domain;
    readTemplate('<%= @wl_home %>/common/templates/wls/wls.jar', domain_mode)

    setOption('ServerStartMode', start_mode)
    . . . .

<% @machines.each do |machine| -%>
    createMachine('<%= machine['name'] %>', '<%= machine['nm_address'] %>', <%= machine['nm_port'] %>)
<% end -%>

    . . . .
    writeDomain(domain_path)
    closeTemplate()
The source for the recipe is listed below:

# (1) Get the attributes from the Node object of the server the recipe is run on (defined in the environment)
os_user = node['weblogic']['os_user']
os_installer_group = node['weblogic']['os_installer_group']
middleware_home = node['weblogic']['oracle_home']
weblogic_home = "#{middleware_home}/wlserver"
common_home = "#{middleware_home}/oracle_common"
domains_path = File.join(middleware_home, "domains")
domain_name = node['wls_domain']['name']
domain_py = File.join(middleware_home, "create_domain.py")

# (2) Create the WLS Domains directory
directory domains_path do
  owner os_user
  group os_installer_group
  recursive true
  action :create
end

# (3) Create the WLST script to create the domain, passing in variables read from the node's attribute hash map, and save the script to the server for execution
template domain_py do
  source "create_domain.py.erb"
  variables(
    domain_mode: node['wls_domain']['mode'],
    domains_path: domains_path,
    domain: domain_name,
    start_mode: node['wls_domain']['start_mode'],
    crossdomain_enabled: node['wls_domain']['crossdomain_enabled'],
    username: node['wls_domain']['admin_username'],
    password: node['wls_domain']['admin_password'],
    wl_home: weblogic_home,
    machines: node['wls_domain']['machines'],
    admin_server: node['wls_domain']['admin_server'],
    managed_servers: node['wls_domain']['managed_servers'],
    clusters: node['wls_domain']['clusters']
  )
  owner os_user
  group os_installer_group
end

# (4) Run the WLST script to create the domain offline
ENV['ORACLE_HOME'] = middleware_home

execute "#{weblogic_home}/common/bin/wlst.sh #{domain_py}" do
  environment "CONFIG_JVM_ARGS" => "-Djava.security.egd=file:/dev/./urandom"
  user os_user
  group os_installer_group
  action :run
  creates "#{domains_path}/#{domain_name}/config/config.xml"
end
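For completeness, here is a minimal sketch of what the ‘weblogic_dev’ environment might look like; the attribute names are the ones read by the recipes above, but every value shown is illustrative:

```ruby
# environments/weblogic_dev.rb -- all values are examples only
name "weblogic_dev"
description "Development environment for the weblogic cookbook"

default_attributes(
  "weblogic" => {
    "os_user" => "oracle",
    "os_installer_group" => "oinstaller",
    "oracle_home" => "/opt/oracle/middleware",
    "java_home" => "/usr/lib/jvm/java-7-oracle",
    "nexus_url" => "http://nexus.example.com/service/local/artifact/maven/redirect",
    "repository" => "releases",
    "group_id" => "com.oracle",
    "artifact_id" => "weblogic",
    "version" => "12.1.3",
    "packaging" => "jar"
  },
  "wls_domain" => {
    "name" => "dev_domain",
    "mode" => "Expanded",
    "start_mode" => "dev",
    "crossdomain_enabled" => "true",
    "admin_username" => "weblogic",
    "admin_password" => "welcome1",
    "admin_server" => { "name" => "AdminServer", "address" => "192.168.0.10", "port" => 7001 },
    "machines" => [
      { "name" => "machine1", "nm_address" => "192.168.0.10", "nm_port" => 5556 }
    ],
    "managed_servers" => [
      { "name" => "ms1", "machine" => "machine1", "address" => "192.168.0.10", "port" => 8001 },
      { "name" => "ms2", "machine" => "machine1", "address" => "192.168.0.10", "port" => 8002 }
    ],
    "clusters" => [
      { "name" => "cluster1", "address" => "239.192.0.1", "port" => 7777 }
    ]
  }
)
```

Keeping these values in the environment rather than in the recipes is what lets the same cookbook drive development, test and production domains.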

Templates


The recipe install-wls uses two templates, ora_inventory.rsp.erb and wls-12c.rsp.erb, to create the response file used for the silent install and the oraInst.loc file which specifies the location of the Oracle Inventory directory. The template create_domain.py.erb is used by the recipe create-domain and defines the WLST script which is run to create the domain. The recipe passes JSON object arrays for the clusters, machines and managed servers into the template; a code block iterates through each array, generating a call to createCluster, createMachine or createManagedServer for each item in the respective array.

create_domain.py.erb
domain_mode='<%= @domain_mode %>'
domain_path='<%= @domains_path %>/<%= @domain %>'
domain='<%= @domain %>'
start_mode='<%= @start_mode %>'
crossdomain_enabled=<%= @crossdomain_enabled %>
admin_username='<%= @username %>'
admin_password='<%= @password %>'

def createManagedServer(servername, machinename, address, port):
    print 'Creating Managed Server Configuration... ' + servername;
    cd("/")
    create(servername, "Server")
    cd("/Servers/" + servername)

    if machinename:
        set('Machine', machinename)

    set('ListenAddress', address)
    set('ListenPort', int(port))

def createAdminServer(servername, address, port):
    print 'Creating Admin Server Configuration... ' + servername;
    cd("/")
    cd("/Servers/" + servername)
    set('ListenAddress', address)
    set('ListenPort', int(port))
    cd('/')
    cd('Security/base_domain/User/weblogic')
    set('Name', admin_username)
    cmo.setPassword(admin_password)

def createMachine(machinename, address, port):
    print 'Creating Machine Configuration... ' + machinename;
    try:
        cd('/')
        create(machinename, 'Machine')
    except BeanAlreadyExistsException:
        print 'Machine ' + machinename + ' already exists';

    cd('Machine/' + machinename)
    create(machinename, 'NodeManager')
    cd('NodeManager/' + machinename)
    set('ListenAddress', address)
    set('ListenPort', int(port))

def createCluster(clustername, address, port):
    print 'Creating Cluster Configuration... ' + clustername;
    cd('/')
    create(clustername, 'Cluster')
    cd('Clusters/' + clustername)
    set('MulticastAddress', address)
    set('MulticastPort', port)
    set('WeblogicPluginEnabled', 'true')

def assignCluster(clustername, server):
    print 'Assigning server ' + server + ' to Cluster ' + clustername;
    cd('/')
    assign('Server', server, 'Cluster', clustername)

def createCustomDomain():
    print 'Creating Domain... ' + domain;
    readTemplate('<%= @wl_home %>/common/templates/wls/wls.jar', domain_mode)

    setOption('ServerStartMode', start_mode)

    createAdminServer('<%= @admin_server['name'] %>',
                      '<%= @admin_server['address'] %>', <%= @admin_server['port'] %>)

<% @clusters.each do |cluster| -%>
    createCluster('<%= cluster['name'] %>', '<%= cluster['multicast_address'] %>',
                  <%= cluster['multicast_port'] %>)
<% end -%>

<% @machines.each do |machine| -%>
    createMachine('<%= machine['name'] %>', '<%= machine['nm_address'] %>',
                  <%= machine['nm_port'] %>)
<% end -%>

<% @managed_servers.each do |managed_server| -%>
    createManagedServer('<%= managed_server['name'] %>',
                        '<%= managed_server['machine_name'] %>',
                        '<%= managed_server['address'] %>',
                        <%= managed_server['port'] %>)

    assignCluster('<%= managed_server['cluster_name'] %>',
                  '<%= managed_server['name'] %>')
<% end -%>

    writeDomain(domain_path)
    closeTemplate()

createCustomDomain()
dumpStack()
print('Exiting...')
exit()

Environment


The environment defines all the attributes referenced by the recipes.

weblogic_dev.json

{
  "name": "weblogic_dev",
  "description": "",
  "cookbook_versions": {},
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
    "weblogic": {
      "nexus_url": "http://chefserver01.c2b2.co.uk:8081/nexus/service/local/artifact/maven/redirect",
      "repository": "C2B2",
      "group_id": "com.oracle",
      "artifact_id": "weblogic",
      "version": "12.1.3",
      "packaging": "jar",
      "os_user": "oracle",
      "os_installer_group": "orainstall",
      "wls_version": "12.1.3",
      "oracle_home": "/home/oracle/c2b2/middleware/product/fmw",
      "java_home": "/opt/jdk1.8.0_25",
      "installer_jar": "/home/oracle/fmw_12.1.3.0.0_wls.jar"
    },
    "wls_domain": {
      "name": "c2b2-domain",
      "mode": "Compact",
      "start_mode": "dev",
      "crossdomain_enabled": "true",
      "admin_username": "weblogic",
      "admin_password": "welcome1",
      "admin_server": {
        "name": "AdminServer",
        "machine_name": "wls1",
        "address": "wls1.c2b2.co.uk",
        "port": "7001"
      },
      "managed_servers": [
        { "name": "node1", "machine_name": "wls1", "address": "wls1.c2b2.co.uk",
          "port": "8001", "cluster_name": "c2b2-cluster" },
        { "name": "node2", "machine_name": "wls1", "address": "wls1.c2b2.co.uk",
          "port": "9001", "cluster_name": "c2b2-cluster" }
      ],
      "machines": [
        { "name": "wls1", "nm_address": "wls1.c2b2.co.uk", "nm_port": "5556" }
      ],
      "clusters": [
        { "name": "c2b2-cluster", "multicast_address": "237.0.0.101",
          "multicast_port": "9200" }
      ]
    }
  },
  "override_attributes": {}
}
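To see how recipe code consumes these values, here is a hypothetical sketch in plain Ruby: a parsed JSON hash stands in for Chef's node object, and only a small subset of the attributes above is reproduced. It is an illustration of the lookup pattern, not Chef's actual attribute-merging machinery:

```ruby
require 'json'

# Chef merges the environment's default_attributes into the node object;
# recipe code then reads the values by key, e.g. node['wls_domain']['name'].
env = JSON.parse(<<~'JSON')
  { "default_attributes": {
      "wls_domain": {
        "name": "c2b2-domain",
        "managed_servers": [
          { "name": "node1", "port": "8001" },
          { "name": "node2", "port": "9001" }
        ] } } }
JSON

node = env['default_attributes'] # stand-in for Chef's merged node object
domain_name = node['wls_domain']['name']
ports = node['wls_domain']['managed_servers'].map { |s| s['port'] }
```

Because every environment supplies the same attribute keys, pointing a node at a different environment (say, weblogic_prod) reconfigures the install without touching the recipes.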
The source files are uploaded to the Chef server with the knife tool using the following commands:
knife cookbook upload weblogic
knife environment from file /ahs1/chef-repo/weblogic/environments/weblogic_dev.json
knife role from file /ahs1/chef-repo/weblogic/roles/weblogic_domain.json
knife role from file /ahs1/chef-repo/weblogic/roles/weblogic_install.json
The virtual machine with hostname wls1.c2b2.co.uk is bootstrapped, which installs the chef-client and registers it as a Node with the Chef server. Using the console, the Node is edited: the environment weblogic_dev is assigned to it and the roles weblogic_install and weblogic_domain are added to the Node's run list.


To install Weblogic and configure the domain, the chef-client is run on the node wls1.c2b2.co.uk by executing the following knife command:

knife ssh -x afryer "chef_environment:weblogic_dev" "sudo -u root chef-client -l info"

The knife ssh command queries the Chef server for nodes matching the search "chef_environment:weblogic_dev", in this case wls1.c2b2.co.uk. An ssh session is started on each matching node as the user 'afryer' and runs the Chef client with sudo access. The Chef client connects to the Chef server, updates the attributes in the node's hash map and executes the recipes defined in the run-lists of the weblogic_install and weblogic_domain roles. The recipes read the attributes (defined in the environment) from the node's hash map and perform their operations, installing WebLogic and configuring a domain on the node.
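The node-matching step can be pictured as a simple filter over the registered nodes' attributes. This hypothetical Ruby sketch (plain hashes standing in for Chef's search index, and the second node invented for contrast) shows the idea:

```ruby
# Stand-ins for nodes registered with the Chef server; build01 is a
# hypothetical node used here only to show a non-match.
nodes = [
  { 'name' => 'wls1.c2b2.co.uk',    'chef_environment' => 'weblogic_dev' },
  { 'name' => 'build01.c2b2.co.uk', 'chef_environment' => 'ci' }
]

# The search "chef_environment:weblogic_dev" keeps only nodes whose
# environment attribute matches; knife ssh then opens a session on each.
matches = nodes.select { |n| n['chef_environment'] == 'weblogic_dev' }.map { |n| n['name'] }

# For each match, knife would run something like:
#   ssh afryer@wls1.c2b2.co.uk 'sudo -u root chef-client -l info'
```

Because the target list comes from a search rather than a hard-coded hostname, the same command converges every node in the environment as more are bootstrapped.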

Hopefully this blog has given you an insight into how you can automate a WebLogic installation using Chef. Extending the techniques used above should give you a good basis for a reusable and extensible set of scripts to make your installations quicker and easier.