26 March 2015

'We strive to achieve excellence' - C2B2 Tech days

At C2B2, one of the values by which we conduct our business is excellence.

‘We strive to achieve excellence in all areas of our expertise and deliver world class technical and customer service excellence.’ – And we’re not lying! 

Every few months we hold a Tech day at our head office here in Worcestershire, which gives all of our technical consultants a chance to get together and do what they do best: solve technical problems!
Our most recent Tech day was led by one of our Expert Support Consultants, David Winters. 

What were you trying to achieve at March’s Tech day, David?


The overall technical goal of this Tech day was to attempt to integrate ActiveMQ 5.11.0 (http://activemq.apache.org/) with the latest Payara release (http://payara.co.uk/upstream_builds) and to test that the setup worked as anticipated.

How did you go about achieving this goal?


We split into two teams and set out to achieve the following:

• Install the latest Payara nightly build on a standalone Amazon EC2 instance.
• Install ActiveMQ 5.11 on a separate Amazon EC2 instance.
• Download the generic JCA adapter 2.1 from https://genericjmsra.java.net/, then deploy and configure it on Payara so that Payara can use ActiveMQ as a JMS provider.
• Configure the Amazon EC2 firewall rules so that the Payara and ActiveMQ instances can communicate on the relevant ports.
• Verify that Payara and ActiveMQ were installed and configured correctly: create a test JMS client that sends messages to a test queue hosted on ActiveMQ, and create and deploy a simple message-driven bean application on Payara whose message listener processes the messages sent to that queue (a sketch of such a client follows below).
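
For illustration, a minimal standalone sender along these lines might look as follows. This is a sketch rather than the exact client we used; the broker address and queue name here are just examples.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TestQueueSender {

    public static void main(String[] args) throws Exception {
        // Broker URL and queue name are illustrative; point these at your ActiveMQ EC2 instance
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TestQueue");
            MessageProducer producer = session.createProducer(queue);
            for (int i = 0; i < 10; i++) {
                TextMessage message = session.createTextMessage("Test message " + i);
                producer.send(message);
            }
        } finally {
            connection.close();
        }
    }
}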

What was the outcome?


After some initial obstacles (we had not copied all of the required ActiveMQ jar files onto Payara's classpath), we managed to run some basic tests successfully. We were able to send test JMS messages from a remote JMS client to a queue running on ActiveMQ 5.11, which then triggered the message-driven bean deployed on Payara (sketched below) to process those messages correctly.
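
A bare-bones message-driven bean of the kind we deployed might look something like this. Again, this is only a sketch: the destination name is an example, and with the generic JMS RA the exact activation-config property names and the resource adapter binding depend on how the adapter has been configured (typically via glassfish-ejb-jar.xml).

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// With the generic JMS RA, this MDB is usually bound to the resource adapter in
// glassfish-ejb-jar.xml; the activation-config properties below are illustrative.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationName", propertyValue = "TestQueue")
})
public class TestQueueListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                // Log the payload so we can see the message has made it from ActiveMQ to Payara
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}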


So what do you do in your day-to-day role at C2B2?


I am an Expert Support Consultant at C2B2 and my main responsibility is to ensure that all of our customers' middleware environments are running smoothly at all times. Any problem, slowdown or outage can have a disastrous impact on a business, so we have to make sure that doesn't happen! 
I work closely with our Senior Consultants out on customer sites to make sure that any issues are fixed as quickly as possible, so that customers can continue with the day-to-day running of their business. 

If you would like to read more about David’s expertise, take a look at some of his previous blog posts. 


Configuring JBoss management authentication with LDAP over SSL
JBatch on Payara 4.1.151 now supports 5 different database types
New features and changes in BPM 12c







17 March 2015

Purging data from Oracle SOA Suite 11g

Part 1: How can I purge data from Oracle SOA Suite 11g (PS6 11.1.1.7) using the purge script provided by Oracle?


Introduction


This blog will explain how to purge (remove unwanted) data within Oracle SOA Suite 11g (PS6 11.1.1.7).

The series of blogs will cover the following:
  • Part 1: How can I purge data from Oracle SOA Suite 11g (PS6 11.1.1.7) using the purge script provided by Oracle? 
  • How does Oracle SOA Suite 11g (PS6 11.1.1.7) store data? 
  • What data does Oracle SOA Suite 11g (PS6 11.1.1.7) store? 
  • Why do you need to purge Oracle SOA Suite 11g (PS6 11.1.1.7) data? 
  • What are the purging options available for Oracle SOA Suite 11g (PS6 11.1.1.7)?
  • Which data will be purged by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script? 
  • List of composite instance states that will be considered for purging by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script 
  • How to install the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script? 
  • How to execute the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script? 
  • What is Looped purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)? 
  • What is Parallel purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)? 
  • Description of parameters used by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script 
  • Example 1: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script for all composites 
  • Example 2: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script for a specific composite

Oracle SOA Suite 11g (PS6 11.1.1.7) data

How does Oracle SOA Suite 11g (PS6 11.1.1.7) store data?
SOA Suite uses a database schema called SOAINFRA (a collection of database objects such as tables, views, procedures and functions) to store the data required for running SOA Suite applications. The SOAINFRA (SOA Infrastructure) schema is also referred to as the ‘dehydration store’, acting as the persistence layer for capturing SOA Suite data.

What data does Oracle SOA Suite 11g (PS6 11.1.1.7) store?

Composite instances utilising the SOA Suite Service Engines (BPEL, mediator, human task, rules, BPM, OSB, EDN etc.) will write data to tables residing within the SOAINFRA schema. Each of the engines will either write data to specific engine tables (e.g. the CUBE_INSTANCE table is used solely by the BPEL engine) or common tables that are shared by the SOA Suite engines such as the AUDIT_TRAIL table.

A few examples of the type of data stored within the SOAINFRA schema:
  • Message payload (e.g. input, output) 
  • Scope (e.g. variables)
  • Auditing (e.g. data flow timestamps) 
  • Faults 
  • Deferred (messages that can be recovered) 
  • Metrics 

Why do you need to purge Oracle SOA Suite 11g (PS6 11.1.1.7) data?

Data within the Oracle SOA Suite database can grow to substantial levels in a short space of time. Payload sizes and the volume of data will have an impact on available disk space, which in turn will affect the performance of SOA Suite. For example, the EM console can become slow to navigate, an increasing number of messages can become stuck or require recovery, and JTA transaction problems can appear.

Purging itself can become challenging if the data has not been maintained, due to the large number of composite instances. Therefore, establishing a purge strategy and implementing it on a regular basis will help maintain the health of SOA Suite and keep the environment running efficiently.


What are the purging options available for Oracle SOA Suite 11g (PS6 11.1.1.7)?

Oracle provides three options for purging Oracle SOA Suite 11g data:
  • EM Console: Within the Enterprise Manager console, ‘Delete with Options’ can be used to manually delete many instances at once; however, this may lead to transaction timeouts and is not recommended for large volumes. 
  • Purge Script: This is the process of deleting instances that are no longer required, using stored procedures provided out of the box with Oracle SOA Suite 11g. 
  • Partitioning: Instances are segregated within the database based on user-defined criteria; when a partition is no longer required it is dropped, freeing the disk space.

Oracle SOA Suite 11g (PS6 11.1.1.7) purge script


Which data will be purged by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

The purge script will delete composite instances that are in the following states:

Completed
Faulted
Terminated by user
Stale
Unknown

The purge script will NOT delete composite instances that are in the following states:

Running (in-flight)
Suspended
Pending Recovery

List of composite instance states that will be considered for purging by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script:


How to install the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?
The following details will be required:

  • Database host details:
    - hostname (IP address)
    - username
    - password
  • SOA Database schema details:
    - prefix
    - password
  • Full path of the SOA Suite home folder
  • Full path of the directory where the Oracle purge script will write log information to (a folder on the database host) 
‘DEV’ was the SOAINFRA schema prefix used in the examples below.

a. Log into the Database host server.

b. Connect to the database as administrator using SQL*Plus:
sqlplus / as sysdba
c. Grant privileges to the soainfra (database) user that will be executing the scripts:
GRANT EXECUTE ON DBMS_LOCK TO DEV_SOAINFRA;
GRANT CREATE JOB TO DEV_SOAINFRA;
GRANT CREATE EXTERNAL JOB TO DEV_SOAINFRA;
d. Exit SQL*Plus and go to the location of the Oracle purge script:
exit
$cd /rcu/integration/soainfra/sql/soa_purge/
e. Connect to the database as the soainfra user using SQL*Plus:
sqlplus DEV_SOAINFRA/<password>

@soa_purge_scripts.sql

Procedure created.
Function created.
Type created.
Type body created.
PL/SQL procedure successfully completed.
Package created.
Package body created.
f. Exit SQL*Plus and create a directory where the log files (generated by the Oracle purge script) should be written to:
exit
$mkdir -p /PurgeLogs
g. Connect to the database with SQL*Plus as SYSDBA and declare the directory:
sqlplus / as sysdba

CREATE OR REPLACE DIRECTORY SOA_PURGE_DIR AS '/PurgeLogs';

GRANT READ, WRITE ON DIRECTORY SOA_PURGE_DIR TO DEV_SOAINFRA;

All the database objects required for purging data using the Oracle purge script are now loaded into the SOAINFRA schema ready for use.

How to execute the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

There are two options for running the purge script:
  • Looped
  • Parallel 
What is Looped purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)?

Looped purge is a single threaded PL/SQL script that will iterate through the SOAINFRA tables and delete instances matching the parameters specified.

What is Parallel purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)?

Parallel purge is essentially the same as the looped purge. It is meant to be more efficient, as it uses the dbms_scheduler package to spawn multiple purge jobs, each working on a distinct subset of data. There are two more parameters that can be specified in addition to the ones used by the looped purge. It is designed to purge large data volumes hosted on high-end database nodes with multiple CPUs and a good I/O subsystem. A maintenance window should be used, as it requires a lot of resources.


Example 1: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script to purge data for all composites

We are required to delete all composite instances created between 1st June 2010 and 30th June 2010. In addition, there is a requirement not to delete instances that have been modified after 30th June 2010. The script must stop running after an hour, as business hours resume shortly afterwards.

min_creation_date = 1st June 2010
max_creation_date = 30 June 2010
retention_period = 1st July 2010

The above will, in effect, delete all composite instances whose creation time falls between 1st June 2010 and 30th June 2010 and whose BPEL instance modification date is earlier than 1st July 2010.

a. Looped
DECLARE

max_creation_date timestamp;
min_creation_date timestamp;
batch_size integer;
max_runtime integer;
retention_period timestamp;

BEGIN
min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
max_runtime := 60;
retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
batch_size := 10000;

soa.delete_instances(
min_creation_date => min_creation_date,
max_creation_date => max_creation_date,
batch_size => batch_size,
max_runtime => max_runtime,
retention_period => retention_period);

END;
/
b. Parallel
DECLARE

max_creation_date timestamp;
min_creation_date timestamp;
batch_size integer;
max_runtime integer;
retention_period timestamp;
DOP integer;
max_count integer;
purge_partitioned_component boolean;

BEGIN
min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
max_runtime := 60;
retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
batch_size := 10000;
DOP := 3;
max_count := 1000000;
purge_partitioned_component := false;


soa.delete_instances_in_parallel (
min_creation_date => min_creation_date,
max_creation_date => max_creation_date,
batch_size => batch_size,
max_runtime => max_runtime,
retention_period => retention_period,
DOP => DOP,
max_count => max_count,
purge_partitioned_component => purge_partitioned_component);

END;
/


Example 2: Executing the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script to purge data from a specific composite

Same as the Example 1 scenario, but with an additional requirement of only purging data from the composite named OrderBookingComposite. No other composite data should be purged.

Composite details can be gathered by querying the COMPOSITE_INSTANCE table within the SOAINFRA schema. The column named COMPOSITE_DN (distinguished name) holds the details required by the purge script:

Format: <soa_partition name>/<composite name>!<composite_revision>
Example: default/OrderBookingComposite!1.0

a. Looped
DECLARE
min_creation_date timestamp;
max_creation_date timestamp;
batch_size number;
max_runtime number;
retention_period timestamp;
purge_partitioned_component boolean;
composite_name varchar2(200);
composite_revision varchar2(200);
soa_partition_name varchar2(200);

BEGIN
min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
max_runtime := 60;
retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
batch_size := 10000;
purge_partitioned_component := true;
composite_name := 'OrderBookingComposite';
composite_revision := '1.0';
soa_partition_name := 'default';


soa.delete_instances(
min_creation_date => min_creation_date,
max_creation_date => max_creation_date,
batch_size => batch_size,
max_runtime => max_runtime,
retention_period => retention_period,
purge_partitioned_component => purge_partitioned_component,
composite_name => composite_name,
composite_revision => composite_revision,
soa_partition_name => soa_partition_name);

END;
/
b. Parallel
DECLARE
min_creation_date timestamp;
max_creation_date timestamp;
batch_size number;
max_runtime number;
retention_period timestamp;
DOP integer;
max_count integer;
purge_partitioned_component boolean;
composite_name varchar2(200);
composite_revision varchar2(200);
soa_partition_name varchar2(200);

BEGIN
min_creation_date := to_timestamp('2010-06-01','YYYY-MM-DD');
max_creation_date := to_timestamp('2010-06-30','YYYY-MM-DD');
max_runtime := 60;
retention_period := to_timestamp('2010-07-01','YYYY-MM-DD');
batch_size := 10000;
DOP := 3;
max_count := 1000000;
purge_partitioned_component := true;
composite_name := 'OrderBookingComposite';
composite_revision := '1.0';
soa_partition_name := 'default';


soa.delete_instances_in_parallel (
min_creation_date => min_creation_date,
max_creation_date => max_creation_date,
batch_size => batch_size,
max_runtime => max_runtime,
retention_period => retention_period,
DOP => DOP,
max_count => max_count,
purge_partitioned_component => purge_partitioned_component,
composite_name => composite_name,
composite_revision => composite_revision,
soa_partition_name => soa_partition_name);

END;
/

Conclusion

This blog has provided a basic understanding of the purge script contained within Oracle SOA Suite 11g (PS6 11.1.1.7).

A long-term purging strategy needs to be implemented; in order to do so, a good understanding of the workings of the purge script is required, along with an awareness of the issues related to the script.

Therefore, leading on from Part 1, there will be a few more blogs covering the following:
  • Part 2: How does the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script work?
  • Part 3: How to establish a long term purge strategy for Oracle SOA Suite 11g (PS6 11.1.1.7)
Irfan Suleman, C2B2 Senior Consultant 

9 March 2015

How to Configure a Simple JBoss Cluster in Domain Mode

Clustering is a very important thing to master for any serious user of an application server. Clustering allows for high availability by making your application available on secondary servers when the primary instance is down, and it lets you scale up or out by increasing the server density on the host or by adding servers on other hosts. It can even help to increase performance with effective load balancing between servers based on their respective hardware.

Andy Overton has already covered how to set up a cluster of servers in standalone mode fronted by mod_cluster for load balancing, so in this post I'll cover clustering in domain mode. I won't rehash mod_cluster settings, so this will just cover the setup of a domain controller on one host, and the host controller and server instances on another host.

To follow along with this blog, you'll need to download either JBoss EAP 6.x or WildFly. I'll be using WildFly 8.2 on Xubuntu 14.04. I'll be using $WF_HOME to refer to your WildFly home directory.

4 March 2015

Installing Weblogic with Chef


In my previous blog I discussed using Chef to deploy Weblogic SOA Suite; in this blog I will show you how to create a simple Weblogic cluster on a virtual machine with two managed servers using Chef.  This solution uses a Chef cookbook named weblogic, which contains recipes and templates, an environment and roles to model the infrastructure as code.  Two recipes have been created: ‘install-wls.rb’, which installs the Weblogic binaries, and ‘create-domain.rb’, which creates the Weblogic domain with an Admin Server and two managed servers in a cluster.  The recipes read attributes defined in the environment ‘weblogic_dev’.

The Weblogic jar installer was downloaded from Oracle and stored in a Nexus repository.  The source files for the cookbook, environment and roles were created in the local Chef repository on the Chef Workstation and then uploaded to the Chef Server.


The source for the recipes, templates and environment is described below:

Recipes


install-wls.rb

This recipe installs the Weblogic binaries by:

  • Downloading the Weblogic installer jar file from a Nexus repository (3)
  • Creating the user oracle, the group orainstall and the Weblogic home directory (2)
  • Creating the Oracle Inventory and installer response files from the templates ora_inventory.rsp.erb and wls-12c.rsp.erb (4). The templates have placeholders which are substituted with attributes read from the Node object.
  • Executing the Weblogic jar file in silent mode (5), referencing the two response files, by running:

java -jar weblogic-12.1.3.jar -silent -responseFile responsefile -invPtrLoc OraInventoryFile

The numbers in brackets refer to the numbered comments in the recipe source below.
The source for the recipe is listed below:

# (1) Get the attributes from the Node object on the server that the recipe is run on (Defined in environment)
os_user = node['weblogic']['os_user']
os_installer_group = node['weblogic']['os_installer_group']
user_home = File.join("/home", os_user)
nexus_url = node['weblogic']['nexus_url']
repository = node['weblogic']['repository']
group_id = node['weblogic']['group_id']
artifact_id = node['weblogic']['artifact_id']
version = node['weblogic']['version']
packaging = node['weblogic']['packaging']

# (2) Create the user/group used to install Weblogic and the WLS home directory
group os_installer_group do
action :create
append true
end

user os_user do
supports :manage_home => true
comment "Oracle user"
gid os_installer_group
home user_home
shell "/bin/bash"
end

# Create FMW Directory
directory node['weblogic']['oracle_home'] do
owner os_user
group os_installer_group
recursive true
action :create
end

# (3) Download the Weblogic installer from Nexus
installer_jar = File.join(user_home, "#{artifact_id}-#{version}.#{packaging}")
remote_file "download Oracle Weblogic Server" do
#source "#{nexus_url}?r=#{repository}&g=#{group_id}&a=#{artifact_id}&v=#{version}&p=#{packaging}"
source "file:///mnt/hgfs/vmwareData/Alan/SOA/fmw_12.1.3.0.0_wls.jar"
path installer_jar
owner os_user
group os_installer_group
end

# (4) Create OraInventory and Installer response files to allow silent install
ora_inventory_directory = File.join(user_home, "oraInventory")
ora_inventory_file = File.join( ora_inventory_directory, "ora_inventory.rsp")

directory ora_inventory_directory do
owner os_user
group os_installer_group
recursive true
action :create
end

template ora_inventory_file do
source "ora_inventory.rsp.erb"
owner os_user
group os_installer_group
variables(
ora_inventory_directory: ora_inventory_directory,
install_group: os_installer_group
)
owner os_user
group os_installer_group
end

# Create Response File
response_file = File.join(user_home, "wls-12c.rsp")
oracle_home = node['weblogic']['oracle_home']

template response_file do
source "wls-12c.rsp.erb"
variables(
oracle_home: oracle_home
)
owner os_user
group os_installer_group
end

# (5) Install Weblogic Server by executing the jar command defining the appropriate command line options to install silently
install_command = "#{node['weblogic']['java_home']}/bin/java -jar #{installer_jar} -silent -responseFile #{response_file} -invPtrLoc #{ora_inventory_file}"

execute install_command do
cwd user_home
user os_user
group os_installer_group
action :run
creates "#{oracle_home}/oraInst.loc"
end

create-domain.rb

The main purpose of this recipe is to create a WLST script from the template create_domain.py.erb that configures the Weblogic domain offline.  The template defines the following WLST helper functions to create the respective Weblogic components to define the cluster:

createManagedServer(servername,  machinename, address, port)
createAdminServer(servername, address, port)
createMachine(machinename, address, port)
createCluster(clustername, address, port)
assignCluster(clustername, server)

The main function createCustomDomain calls the above functions to create and configure the domain, using attributes defined in the environment as JSON objects.  A Weblogic domain requires one Admin Server and can have multiple clusters, each of which can contain one or more managed servers.  The managed servers can be located on one machine or across multiple machines, with each machine requiring a Node Manager to be configured.  The configuration for the clusters, machines and managed servers is defined in JSON object arrays in the environment.  The respective JSON object array for the clusters, machines and managed servers is passed into the template, and the appropriate block of code iterates through each item in the array to generate a call to the helper function with the correct values passed as arguments.  The code snippet below, with some code omitted for clarity, shows how the call to the createMachine helper function is generated by iterating through the items in the machines object array.
def createCustomDomain():
    print 'Creating Domain... ' + domain;
    readTemplate('<%= @wl_home %>/common/templates/wls/wls.jar', domain_mode)

    setOption('ServerStartMode', start_mode)
    . . . .

<% @machines.each do |machine| -%>
    createMachine('<%= machine['name'] %>', '<%= machine['nm_address'] %>',
                  <%= machine['nm_port'] %>)
<% end -%>

    . . . .
    writeDomain(domain_path)
    closeTemplate()

The source for the recipe is listed below:
# (1) Get the attributes from the Node object of the server the recipe is run on (defined in the environment)
os_user = node['weblogic']['os_user']
os_installer_group = node['weblogic']['os_installer_group']
middleware_home = node['weblogic']['oracle_home']
weblogic_home = "#{middleware_home}/wlserver"
common_home = "#{middleware_home}/oracle_common"
domains_path = File.join(middleware_home, "domains")
domain_name = node['wls_domain']['name']
domain_py = File.join(middleware_home, "create_domain.py")

# (2) Create the WLS Domains directory
directory domains_path do
owner os_user
group os_installer_group
recursive true
action :create
end

# (3) Create the WLST script to create the domain, passing in variables read from the node's attribute hash map. Save the script to the server for execution
template domain_py do
source "create_domain.py.erb"
variables(
domain_mode: node['wls_domain']['mode'],
domains_path: domains_path,
domain: domain_name,
start_mode: node['wls_domain']['start_mode'],
crossdomain_enabled: node['wls_domain']['crossdomain_enabled'],
username: node['wls_domain']['admin_username'],
password: node['wls_domain']['admin_password'],
wl_home: weblogic_home,
machines: node['wls_domain']['machines'],
admin_server: node['wls_domain']['admin_server'],
managed_servers: node['wls_domain']['managed_servers'],
clusters: node['wls_domain']['clusters']
)
owner os_user
group os_installer_group
end

# (4) Run the WLST script to create the domain offline
ENV['ORACLE_HOME'] = middleware_home

execute "#{weblogic_home}/common/bin/wlst.sh #{domain_py}" do
environment "CONFIG_JVM_ARGS" => "-Djava.security.egd=file:/dev/./urandom"
user os_user
group os_installer_group
action :run
creates "#{domains_path}/#{domain_name}/config/config.xml"
end

Templates


The recipe install-wls uses two templates, ora_inventory.rsp.erb and wls-12c.rsp.erb, to create the response file used for the silent install and the oraInst.loc file which specifies the location of the Oracle Inventory directory. The template create_domain.py.erb is used by the recipe create-domain.rb and defines the WLST script which is run to create the domain. The recipe passes JSON object arrays for the clusters, machines and managed servers into the template; a code block iterates through each array, generating a call to createCluster, createMachine or createManagedServer for each item in the respective array.

create_domain.py.erb
domain_mode='<%= @domain_mode %>'
domain_path='<%= @domains_path %>/<%= @domain %>'
domain='<%= @domain %>'
start_mode='<%= @start_mode %>'
crossdomain_enabled=<%= @crossdomain_enabled %>
admin_username='<%= @username %>'
admin_password='<%= @password %>'

def createManagedServer(servername, machinename, address, port):
    print 'Creating Managed Server Configuration... ' + servername;
    cd("/")
    create(servername, "Server")
    cd("/Servers/" + servername)

    if machinename:
        set('Machine', machinename)

    set('ListenAddress', address)
    set('ListenPort', int(port))

def createAdminServer(servername, address, port):
    print 'Creating Admin Server Configuration... ' + servername;
    cd("/")

    cd("/Servers/" + servername)
    set('ListenAddress', address)
    set('ListenPort', int(port))
    cd('/')
    cd('Security/base_domain/User/weblogic')
    set('Name',admin_username)
    cmo.setPassword(admin_password)

def createMachine(machinename, address, port):
    print 'Creating Machine Configuration... ' + machinename;

    try:
        cd('/')
        create(machinename, 'Machine')
    except BeanAlreadyExistsException:
        print 'Machine ' + machinename + ' already exists';

    cd('Machine/' + machinename)
    create(machinename, 'NodeManager')
    cd('NodeManager/' + machinename)
    set('ListenAddress', address)
    set('ListenPort', int(port))

def createCluster(clustername, address, port):
    print 'Creating Cluster Configuration... ' + clustername;
    cd('/')
    create(clustername, 'Cluster')
    cd('Clusters/' + clustername)
    set('MulticastAddress', address)
    set('MulticastPort', port)
    set('WeblogicPluginEnabled', 'true')

def assignCluster(clustername, server):
    print 'Assigning server ' + server + ' to Cluster ' + clustername;
    cd('/')
    assign('Server', server, 'Cluster', clustername)

def createCustomDomain():
    print 'Creating Domain... ' + domain;
    readTemplate('<%= @wl_home %>/common/templates/wls/wls.jar', domain_mode)

    setOption('ServerStartMode', start_mode)

    createAdminServer('<%= @admin_server['name'] %>',
                      '<%= @admin_server['address'] %>', <%= @admin_server['port'] %>)

<% @clusters.each do |cluster| -%>
    createCluster('<%= cluster['name'] %>', '<%= cluster['multicast_address'] %>',
                  <%= cluster['multicast_port'] %>)
<% end -%>

<% @machines.each do |machine| -%>
    createMachine('<%= machine['name'] %>', '<%= machine['nm_address'] %>',
                  <%= machine['nm_port'] %>)
<% end -%>

<% @managed_servers.each do |managed_server| -%>
    createManagedServer('<%= managed_server['name'] %>',
                        '<%= managed_server['machine_name'] %>',
                        '<%= managed_server['address'] %>',
                        <%= managed_server['port'] %>)

    assignCluster('<%= managed_server['cluster_name'] %>',
                  '<%= managed_server['name'] %>')
<% end -%>

    writeDomain(domain_path)
    closeTemplate()

createCustomDomain()
dumpStack()
print('Exiting...')
exit()

Environment


The environment defines all the attributes referenced by the recipes.

weblogic_dev.json

{
"name": "weblogic_dev",
"description": "",
"cookbook_versions": {},
"json_class": "Chef::Environment",
"chef_type": "environment",
"default_attributes": {
"weblogic": {
"nexus_url": "http://chefserver01.c2b2.co.uk:8081/nexus/service/local/artifact/maven/redirect",
"repository": "C2B2",
"group_id": "com.oracle",
"artifact_id": "weblogic",
"version": "12.1.3",
"packaging": "jar",
"os_user": "oracle",
"os_installer_group": "orainstall",
"wls_version": "12.1.3",
"oracle_home": "/home/oracle/c2b2/middleware/product/fmw",
"java_home": "/opt/jdk1.8.0_25",
"installer_jar": "/home/oracle/fmw_12.1.3.0.0_wls.jar"

},
"wls_domain": {
"name": "c2b2-domain",
"mode": "Compact",
"start_mode": "dev",
"crossdomain_enabled": "true",
"admin_username": "weblogic",
"admin_password": "welcome1",
"admin_server": {
"name": "AdminServer",
"machine_name": "wls1",
"address": "wls1.c2b2.co.uk",
"port": "7001"

},
"managed_servers": [
{ "name": "node1", "machine_name": "wls1", "address": "wls1.c2b2.co.uk",
"port": "8001", "cluster_name": "c2b2-cluster"},
{ "name": "node2", "machine_name": "wls1", "address": "wls1.c2b2.co.uk",
"port": "9001", "cluster_name": "c2b2-cluster"}
],
"machines": [
{ "name": "wls1", "nm_address": "wls1.c2b2.co.uk", "nm_port": "5556"}
],
"clusters": [
{"name": "c2b2-cluster", "multicast_address": "237.0.0.101",
"multicast_port": "9200"}
]
}
},
"override_attributes": {}
}
The source files are uploaded to the Chef Server with the knife tool, using the following commands:
knife cookbook upload weblogic
knife environment from file /ahs1/chef-repo/weblogic/environments/weblogic_dev.json
knife role from file /ahs1/chef-repo/weblogic/roles/weblogic_domain.json
knife role from file /ahs1/chef-repo/weblogic/roles/weblogic_install.json
The virtual machine with hostname wls1.c2b2.co.uk is bootstrapped, which installs the chef-client and registers it as a Node with the Chef server.  Using the console, the Node was edited so that the environment weblogic_dev was assigned to it and the roles weblogic_install and weblogic_domain were added to the Node's run list.


To install Weblogic and configure the domain, the chef-client is run on the node wls1.c2b2.co.uk by executing the following knife command:

knife ssh -x afryer "chef_environment:weblogic_dev" "sudo -u root chef-client -l info"

The knife ssh command queries the Chef Server, returning a list of matching nodes in the weblogic_dev environment, in this case wls1.c2b2.co.uk. An ssh session is started on this node as the user ‘afryer’ and runs the Chef client with sudo access. The Chef client connects to the Chef server, updates the attributes in the node's hash map and executes the recipes defined in the run-lists of the weblogic_install and weblogic_domain roles. The recipes read the attributes from the node's hash map (defined in the environment) and perform the operations in the recipes, installing and configuring a Weblogic domain on the node.

Hopefully this blog has given you an insight into how you can automate a Weblogic installation using Chef. Extending the techniques used above should give you a good basis for a reusable and extensible set of scripts to make your installations quicker and easier.



17 February 2015

JBoss EAP 6 Domain Mode

When Red Hat redesigned JBoss 5, one of the key things they did was to combine all the configuration from separate files within their own modules into a single XML file. When you start JBoss by calling standalone.sh, that single XML file holds all the configuration it uses, which makes it much easier to track down any misconfiguration.

Clustering in standalone mode is very straightforward; simply edit the JGroups subsystem in the appropriate configuration file (standalone-ha or standalone-full-ha) as Andy Overton has outlined in his previous blog. Providing they both have the same configuration, the servers will discover each other and you're done. The downside comes when you have large clusters to manage and need to make the same configuration change in many places!

In domain mode, however, things are a little different.


9 February 2015

Automating SOA Suite Installations

Automation is a useful ability in many fields, and the installation of software is no different. Once the time has been put in to create an automated installation script, automation can save you a great deal of time by avoiding what can be a repetitive and time-consuming task. In this blog I’ll describe how much of the installation process can be automated, and ways to make the automation reusable.

3 February 2015

Clustering WebSockets on WildFly - Part 2


Hello all

I know that I said in my previous blog that I was doing a two-part series on how to put together a clustered web application running on WildFly on EC2. Having spent some more time hacking around, I've realised that if I were to dump everything else into one more blog it would be quite long-winded. As a result, this is now part two of three. This blog will be focusing on setting up and using Infinispan.

Recap


In the previous post, we talked about the basics of setting up a WebSocket application running on WildFly and ran through some of the front-end and back-end code for that. We also looked at how to tweak the JGroups configuration for WildFly's clustering on EC2.


Infinispan


Infinispan is a highly scalable and highly available in-memory key/value data store written in Java - and is the open source project behind JBoss Data Grid. Infinispan can be used as a distributed cache in front of an SQL database or as its own NoSQL data grid. When using it as a data grid, it is possible to configure Infinispan to persist data to disk-based data stores in order to ensure that data is not lost.

Infinispan is typically used in two different modes:

1. Library mode is where you use Infinispan within your own application's source code.
2. Client/Server mode is where you have Infinispan running separately from your application as its own server. This is the mode that we will use for our setup.

The Infinispan Server has four endpoints - or protocols - that clients can make use of. The protocols are:

1. HotRod
2. REST
3. Memcached
4. WebSocket

In this case, we will make use of the WebSocket protocol. Given that we have previously looked at setting up a WebSocket server on WildFly, it seems like a natural fit to now look at making use of a WebSocket client in Java to store our data.

Setup changes


In the first blog, I put together a diagram of the architecture set-up that I was intending to use. I did not include anything in there regarding how and where we were going to deal with our storage for the application.

This is an updated revision of the architecture:

In practice, we would expect multiple Infinispan instances running within our architecture to ensure the availability of our data. For this demo however, we will just be using a single server.


WebSocket requests


There isn't a large amount of documentation on making use of the WebSocket server for Infinispan - at least for using a Java client - so it was a nice exercise to dig around the Infinispan server codebase and work out what our request would have to look like.




Digging through the server code shows that we need to send a request (as a string) that has a JSON structure with some specific parameters:
  • The "opCode" parameter tells the server what operation you are attempting to perform - put/get/remove/notify are the options. This is so that the server knows which of its internal handlers to invoke in order to deal with your operation. 
  • The "mime" parameter tells the server the mime type that you are using. The server cannot deal with "application/json". So I used "text/plain". 
  • The "cacheName" parameter tells the server the name of the cache you want to perform your operation on. 
  • The "key" parameter states the key you wish to perform your operation on. 
  • The "value" parameter (only specified for a put operation) will be the value that you wish to insert.
Any time we send a message to the server, we have to ensure that we create a structure which follows this one. A rough sketch of building such requests is shown below.
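
As a rough illustration, building these request strings in Java might look like this. It is a sketch using the javax.json API; the cache names and keys used by callers are up to you, and whether a "mime" entry is needed on a get is not shown here.

import javax.json.Json;
import javax.json.JsonObject;

public class InfinispanRequests {

    // Builds a "put" request string in the shape described above
    public static String put(String cacheName, String key, String value) {
        JsonObject json = Json.createObjectBuilder()
                .add("opCode", "put")
                .add("mime", "text/plain")   // the server would not accept application/json for us
                .add("cacheName", cacheName)
                .add("key", key)
                .add("value", value)
                .build();
        return json.toString();
    }

    // A "get" request only needs the opCode, the cache name and the key
    public static String get(String cacheName, String key) {
        return Json.createObjectBuilder()
                .add("opCode", "get")
                .add("cacheName", cacheName)
                .add("key", key)
                .build()
                .toString();
    }
}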


InfinispanEndpoint class


In the application that runs on WildFly, we have put together an InfinispanEndpoint class. This class will be the one that deals with communicating with the Infinispan server.

Similar to the @ServerEndpoint annotation used on the server side, we use a @ClientEndpoint annotation for the Java client. This annotation is applied at the class level. We also have the same method-level annotations available, such as @OnOpen, @OnClose, @OnMessage etc.

For starters, let's look at how we start the client. In this case, we are instantiating the client from our application code, so we provide the WebSocket URI to our constructor. A minimal sketch of such an endpoint class is shown below.
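
This is not the exact class from the application, just a cut-down sketch of what an annotated client endpoint along these lines might look like; the URI is supplied by the caller.

import java.net.URI;

import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class InfinispanEndpoint {

    private Session session;

    // The caller supplies the WebSocket URI of the Infinispan server, e.g. ws://<host>:<port> (hypothetical)
    public InfinispanEndpoint(URI endpointUri) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // Performs the handshake; @OnOpen fires once the connection is established
        container.connectToServer(this, endpointUri);
    }

    @OnOpen
    public void onOpen(Session session) {
        this.session = session;
    }

    @OnClose
    public void onClose(Session session) {
        this.session = null;
    }
}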



For the other annotated methods, all that we are doing is logging messages in our application - apart from the onMessage() method, which is where we do a little bit more work. What we have done in addition is set up a separate MessageHandler interface; we can instantiate different instances in our Getter and Storer classes that will then deal with these messages appropriately. For now though, let's look at what we have to do within this endpoint class.

The important method to look at is sendMessage(), as this is the one that sends the message to the server (in testing, the WebSocket wouldn't always be open in time before we tried to send a message, so we just pause for a short time). The API for doing so is identical to the server-side one, because we are dealing with the same type of Session object! A sketch of how this might look is shown below.
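
Here is a sketch of how sendMessage() and the delegating onMessage() might sit inside the endpoint class sketched above (plus an import for javax.websocket.OnMessage). The two-second pause and the handler interface shape are assumptions rather than the application's exact code.

// The application's own callback interface (not javax.websocket.MessageHandler);
// the Getter and Storer classes would provide their own implementations of this
public interface MessageHandler {
    void handleMessage(String message);
}

// Inside InfinispanEndpoint:

private MessageHandler messageHandler;

public void setMessageHandler(MessageHandler handler) {
    this.messageHandler = handler;
}

public void sendMessage(String jsonRequest) throws Exception {
    // The socket may not have finished opening yet, so pause briefly before the first send (crude but effective)
    if (session == null) {
        Thread.sleep(2000);
    }
    session.getBasicRemote().sendText(jsonRequest);
}

@OnMessage
public void onMessage(String response) {
    // Hand the raw JSON response to whichever handler registered interest in it
    if (messageHandler != null) {
        messageHandler.handleMessage(response);
    }
}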



I also put together a separate client (running in a main() method) to test this functionality. What this client does is put a key/value pair, wait 10 seconds and then try to get the same key back. Once it gets the server response it simply dumps that message to the screen; this was also so I could understand what the server responses looked like. The sketch below shows roughly how such a test client might construct and send its messages and handle the responses from the server.
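
Putting the hypothetical pieces sketched above together, such a test client might look roughly like this (host, port and cache name are assumptions for illustration only):

import java.net.URI;

public class InfinispanWebSocketTest {

    public static void main(String[] args) throws Exception {
        // Connect to the Infinispan WebSocket endpoint (address is illustrative)
        InfinispanEndpoint endpoint = new InfinispanEndpoint(new URI("ws://localhost:8181"));

        // Print every response so we can see exactly what the server sends back
        endpoint.setMessageHandler(response -> System.out.println("Server response: " + response));

        endpoint.sendMessage(InfinispanRequests.put("default", "greeting", "hello from the test client"));

        Thread.sleep(10000); // wait ten seconds before asking for the value back

        endpoint.sendMessage(InfinispanRequests.get("default", "greeting"));

        Thread.sleep(5000);  // give the get response time to arrive before the JVM exits
    }
}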



And there we go, we have now built a Java client to connect to the Infinispan WebSocket Server. If you are unclear on how some of the other internal client wiring works, I followed this thread on StackOverflow quite closely for some ideas. It quite neatly explains how to set things up.

In the next part, we will look at putting all of these parts together in a full application running on Amazon EC2.

Thanks for reading!

Navin Surtani 
C2B2 Consultant 

27 January 2015

MIDDLEWARE INSIGHT - C2B2 Newsletter Issue 20



FEATURED NEWS


Clustering WebSockets on Wildfly - read more

Can Chef be used to help provide Continuous Delivery of Oracle SOA Suite Applications? - read more



JAVA EE & OPEN SOURCE


Java EE 8
What’s up with Java EE 8? A Whistle-Stop Tour - read Part 1 and Part 2 on Voxxed 
A Look at the Proposed Java EE 8 Security API - read more here 
The most popular upcoming Java EE 8 technologies according to ZEEF users - read more on Arjan Tijms' blog
GlassFish
Using JASPIC to secure a web application in GlassFish, read more on the C2B2 Blog
Vasilis Souvatzis's Java EE 7 Thesis Using GlassFish - see more on the Aquarium Blog  
Getting started with Payara Server - a drop in replacement for GlassFish Server - see the video here
What's in store for the Vampire fish? Read more on Payara Blog
A GlassFish Alternative with Teeth - read more on Voxxed
London JavaEE and GlassFish User Group with Peter Pilgrim ' Digital Java EE 7 New and Noteworthy' - find out more and register here
Tomcat
Self-Signed Certificate for Apache TomEE (and Tomcat) - read more on Alex Soto's blog  
Alternative Logging Frameworks for Application Servers: Tomcat - read the article by Andy Pielage 
Other
Guerilla JMX Monitoring - read more by Mike Croft
Java EE VS Spring smackdown - find out more on the Payara Blog
Lightweight Integration with Java EE and Apache Camel - read more on Voxxed
Thinking About Picking Up a New JVM Language? A Masterpost to Guide Java Devs - read more on Voxxed
Pivotal cuts funding for open-source JVM language Groovy - read more here ; see the related article 'Open Source doesn’t need more funding – it needs better business models' on Jaxenter.com 
Spring Framework 4.1.4 released - read more on the Spring Blog
Why I Don't Like Open Source - read the article by Remy Sharp 
DDD (Domain-Driven Design) + Java EE "Hanginar" on Thursday - read more by Reza Rahman
File Uploads Using JSF 2.2 and Java EE 7 - find out more on The Aquarium Blog 

ORACLE


Can Chef be used to help provide Continuous Delivery of Oracle SOA Suite Applications? - read more on the C2B2 Blog
Purging data from Oracle SOA Suite 11g - Part 1 read the article by Irfan Suleman
Set-up a 12c SOA/BPM Infrastructure - read the article by Rene van Wijk  
WebLogic Server and the Oracle Maven Repository - read more on the WebLogic Server Blog
Additional new material WebLogic Community - read more on the WebLogic Community
Oracle presents: Avatar 2.0 – where to next? Read more on Jaxenter.com

JBOSS & RED HAT

Configuring RBAC in JBoss EAP and Wildfly - Part Two - read more on the C2B2 Blog
Clustering WebSockets on Wildfly - read the article by Navin Surtani
Hibernate Search 5: Adding Full-Text Query Super-Powers to Your JPA! - see presentation video and slides by Sanne Grinovero
Java EE Webcast: Hibernate OGM - see more on Voxxed
Java EE, Docker, WildFly and Microservices on Docker - read more on Markus Eisele's Blog
Openshift - New Platform, New Docs: Be a Part of It! - read more on the Openshift Blog
Containers, microservices, and orchestrating the whole symphony - read the article by Uri Cohen
Vagrant with Docker provider, using WildFly and Java EE 7 image - read more on Arun Gupta's Blog
Simple Java EE (JSF) Login Page with JBoss PicketLink Security - read more by Lincoln Baxter

DATA GRIDS & BIG DATA


Payara 4.1.151 sneak preview - Hazelcast session persistence - find out more on the Payara Blog
Tapping Big Data to Your Own Advantage - read more on Dzone
How To Setup Big Data Tooling For JBoss Developer Studio 8 - read more on Eric Schabell's Blog
DBA skills must evolve, morph to cope with big data technologies - read more on TechTarget