Load Balancing With Apache Tomcat

Recently, we had to make our application, which runs on Tomcat, work with more than one node. We tried scaling out using the Apache Web Server and mod_jk in front of Tomcat, and it worked well for us. In the end we went with Amazon AWS's built-in load-balancing support for our application, but I am logging the homework I did on Tomcat clustering here as a set of simple steps.

mod_jk Tomcat Connector:

mod_jk is the connector used to connect the Tomcat servlet container with web servers such as Apache using the AJP protocol. Here is how this mechanism works:

A web server is responsible for handling client HTTP requests. When these requests come in, the web server also needs to:

  • Load the servlet container connector library and initialize it.
  • Examine the request URL and decide whether it belongs to a servlet; if so, hand the request over to the connector.

The connector needs to know which requests it is going to serve and where to direct them (based on the configuration we provide).

Now, let’s configure mod_jk with Apache Web server. So, this is what you need to download:

  1. Apache Web Server 2.2.21 from http://httpd.apache.org/download.cgi.
  2. Apache Tomcat 6.0.20 from http://tomcat.apache.org/download-60.cgi.
  3. Mod_jk Tomcat connector from http://tomcat.apache.org/download-connectors.cgi

Now let’s start by installing Tomcat first.

  1. Extract the Tomcat zip you have downloaded. Hereafter, the directory you extracted to will be referred to as TOMCAT_1_HOME.
  2. Test Tomcat to see that it works. Go to TOMCAT_1_HOME\bin and run startup.bat.
  3. Open up your browser and access http://localhost:8080/
  4. If you see the default page, then Tomcat Instance 1 is working fine. Shut down Tomcat.

That’s all for the first Tomcat instance. Now, for the second.

  1. Make a directory called TOMCAT_2_HOME and extract the Tomcat zip into it.
  2. Open up TOMCAT_2_HOME\conf\server.xml in a text editor. We’ve got to change the port numbers so that they don’t conflict with the first instance.
  3. Change <Server port="8005" shutdown="SHUTDOWN"> to <Server port="7005" shutdown="SHUTDOWN">.
  4. Change <Connector port="8080" maxHttpHeaderSize="8192"… to <Connector port="7070" maxHttpHeaderSize="8192"…
  5. Change <Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" /> to <Connector port="7009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />.
  6. Go to the bin directory of the second Tomcat and start it using startup.bat.
  7. Test it out by pointing your browser to http://localhost:7070/
  8. Your second tomcat instance is now ready to be used.

Next, let’s set up the Apache HTTP Server. It’s very simple…

  1. Run the installer you downloaded. The standard install will do.
  2. Open the Apache Server Monitor and start the web server if it’s not already running.
  3. Point your browser to http://localhost/ to verify that Apache is running on port 80.
  4. Stop Apache.

Finally, we reach mod_jk. Let’s set it up first just to delegate requests to the two Tomcat instances; we’ll add load balancing a bit later.

  1. Get the mod_jk.so file from the mod_jk archive you downloaded and copy it to the modules directory of your Apache installation.
  2. Open up httpd.conf in the conf directory of your Apache installation in a text editor, and
    add the following line at the end of the set of LoadModule statements:

                                  LoadModule jk_module modules/mod_jk.so 

  3. Create a file called workers.properties in the conf directory. Add these lines to it:

worker.list=worker1,worker2

worker.worker1.port=8009
worker.worker1.host=localhost
worker.worker1.type=ajp13

# you can also point worker2 at another machine's address or hostname
worker.worker2.port=7009
worker.worker2.host=localhost
worker.worker2.type=ajp13

  4. This file defines which workers Apache can delegate to. We’ve listed worker1 and worker2 to correspond to our two Tomcat instances.

  5. Specify the worker properties in httpd.conf. Add these lines just after the LoadModule definitions:

# Path to workers.properties
JkWorkersFile c:/apache2.2/conf/workers.properties

# Path to jk logs
JkLogFile c:/apache2.2/mod_jk.log

# Jk log level [debug/error/info]
JkLogLevel info

# Jk log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

# JkOptions for forwarding
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

# JkRequestLogFormat sets the request log format
JkRequestLogFormat "%w %V %T"

JkMount /jsp-examples/* worker1
JkMount /tomcat-docs/* worker2

Defining these tells Apache where to look for the definitions of its workers, and tells it that any request for the jsp-examples context should be handed off to the Tomcat instance represented by worker1, and any request for the tomcat-docs context should be handed off to Tomcat instance 2, represented by worker2.

Edit server.xml for Tomcat 1 and Tomcat 2 and add a jvmRoute attribute to the Engine element:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">

for the first instance, and

<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker2">

for the second.

  6. Start Tomcat 1 and 2, then start the Apache web server. Point your browser to http://localhost/jsp-examples/ and then to http://localhost/tomcat-docs. You should see the respective pages load. To tell which Tomcat instance is serving you the page, the easiest thing to do is to edit the index page in the tomcat-docs and jsp-examples apps of Tomcat 2 and change the title, for example; then you can verify that tomcat-docs is being served only by the second instance.
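
Another option is to drop a tiny test JSP into a web application on each instance (the file name whichnode.jsp and the node label below are just illustrative assumptions, not files that ship with Tomcat):

<%-- whichnode.jsp: deploy a copy on each instance with a different label --%>
<html>
  <body>
    Served by: <%= "tomcat-instance-1" %><br/>
    Session id: <%= session.getId() %>
  </body>
</html>

Once the jvmRoute attribute is set (as above), Tomcat appends it to the session id (it will end in .worker1 or .worker2), which also tells you which instance created the session.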

At this point, your Apache server is ready to route certain requests to specific Tomcat instances. But we still need to set Tomcat up for load balancing and failover: if Tomcat 1 crashes for whatever reason, Apache should automatically keep delegating to Tomcat 2 so your application remains accessible.

Load balancing is a simple configuration. First shut down your Tomcat instances and Apache as well.

  • Open workers.properties and edit it so it looks like this (the worker.list line and the new lbfactor and balancer entries are the changes):

#worker.list=worker1,worker2
worker.list=balancer

worker.worker1.port=8009
worker.worker1.host=localhost
worker.worker1.type=ajp13
worker.worker1.lbfactor=1

worker.worker2.port=7009
worker.worker2.host=localhost
worker.worker2.type=ajp13
worker.worker2.lbfactor=1

worker.balancer.type=lb
worker.balancer.balance_workers=worker1,worker2
worker.balancer.method=B

# specifies whether requests with session IDs should be routed back to the same Tomcat worker
worker.balancer.sticky_session=True

We’ve changed the worker list to a single worker called balancer, and specified that the worker type of balancer is ‘lb’, i.e. a load balancer. The workers it manages are worker1 and worker2 (these no longer need to appear in the worker list). Finally, we set the balance method to ‘B’, balance by busyness: Apache will delegate the next request to the Tomcat instance that is currently least busy. There are other options for method, listed below; check the Apache/Tomcat workers.properties documentation to decide which method suits your type of application best.
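
For reference, these are the method values mod_jk documents, shown here as commented-out alternatives (double-check them against the workers.properties reference for your mod_jk version):

# worker.balancer.method=R   # balance by number of requests (the default)
# worker.balancer.method=S   # balance by number of sessions
# worker.balancer.method=T   # balance by transferred traffic
# worker.balancer.method=B   # balance by busyness, i.e. requests currently being served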

You can also see that we have set sticky_session to true. That means the Tomcat instance that first created a session will receive all subsequent requests belonging to that session (for this to work, the jvmRoute values configured earlier must match the worker names). This is one way of managing sessions across the cluster; there are other approaches, such as persistent or replicated sessions, that achieve the same thing.

  • Open httpd.conf and comment out the previous JkMount directives. Replace them with these:

JkMount /tomcat-docs/* balancer

JkMount /jsp-examples/* balancer

We’ve just pointed Apache at a single worker: the balancer.

  • Start up both Tomcats and Apache. Access http://localhost/jsp-examples. You will be served by either Tomcat 1 or Tomcat 2. To prove that both are capable of serving, shut down the first instance and refresh your browser; you should now be served by the second instance.

Conclusions

This solution provides high scalability, high availability, and load-balancing capabilities comparable with any other software-based solution. And it is pretty easy to set up as well!

Remote SSH: Using JSCH with Expect4j

Nowadays the whole world is moving towards clouds and virtualization, and more and more applications are being built to manage datacentre servers and related infrastructure. I have been part of one such effort: I developed a module for monitoring and managing Linux servers remotely. We used JCraft’s JSch together with Google Code’s Expect4j for this. Let’s get a brief idea about these APIs.

JSCH

As its website says, JSch is a pure Java implementation of SSH2. It is very easy to use and integrate into your program, although it is, frankly, not well documented. You can read more about JSch on its website.

Expect4j

“Expect is the kitchen sink of IO control. It supports control of processes and sockets, and a complex method of matching multiple patterns at the same time.” That is what the Google Code page says about Expect4j. When you execute commands on a remote machine one after another, your program needs to know when to execute the next command: send a command, then wait for it to finish, or in other words wait for the command prompt to come back. Expect4j does exactly this kind of send/wait handling. It also provides closures, which we use here to collect the complete output log of the executed commands.

Now, let’s start with an example. First, open up an SSH connection to the remote machine.

JSch jsch = new JSch();
Session session = jsch.getSession(username, hostname, port);
session.setPassword(password);

// skip host key verification for this example
Hashtable<String, String> config = new Hashtable<String, String>();
config.put("StrictHostKeyChecking", "no");
session.setConfig(config);
session.connect(60000); // connect with a 60-second timeout

ChannelShell channel = (ChannelShell) session.openChannel("shell");
Expect4j expect = new Expect4j(channel.getInputStream(), channel.getOutputStream());
channel.connect();

You can see that we have opened a “shell” channel: that is because we want to execute a sequence of commands on a Linux shell. You can also see that Expect4j is initialized with the input and output streams of the JSch channel.

Now we will prepare regex patterns for the command prompts of the target machine. Be careful here: if these are not right, you may end up not executing any commands at all. This is also where you attach the closure provided by Expect4j to collect the output of your commands.

final StringBuilder buffer = new StringBuilder();
Closure closure = new Closure() {
	public void run(ExpectState expectState) throws Exception {
		// append the output of each executed command
		buffer.append(expectState.getBuffer());
	}
};

String[] linuxPromptRegEx = new String[] { "\\>", "#" };
List<Match> lstPattern = new ArrayList<Match>();
for (String regexElement : linuxPromptRegEx) {
	try {
		Match mat = new RegExpMatch(regexElement, closure);
		lstPattern.add(mat);
	} catch (MalformedPatternException e) {
		e.printStackTrace();
	} catch (Exception e) {
		e.printStackTrace();
	}
}

You can see that we have defined a closure that appends the output of every executed command, and we attach this closure to every prompt pattern. My Linux shell prompts end with “>” and “#”; define the prompt patterns that match your own Linux box.

Now, start executing commands.

List<String> lstCmds = new ArrayList<String>();
lstCmds.add("ls");
lstCmds.add("pwd");
lstCmds.add("mkdir testdir");

for (String strCmd : lstCmds) {
	int returnVal = expect.expect(lstPattern); // wait for one of the prompt patterns
	if (returnVal == -2) { // treated as success here; see COMMAND_EXECUTION_SUCCESS_OPCODE in the full class below
		expect.send(strCmd);
		expect.send("\r"); // enter character
	}
}

We have three commands to execute in a single, continuous SSH session. The loop first expects one of the prompt patterns to match; if the right prompt is encountered, it sends one command over the shell, immediately followed by an enter character so the command executes. It then waits for the next prompt before sending the following command. So it is a send/wait kind of mechanism.

Now, putting it all together, here is an SSH client that can execute a sequence of commands on a remote Linux box. You will need the following libraries in your project to run this test class:

  • expect4j-1.0.jar
  • jakarta-oro.jar
  • jsch-0.1.44.jar

import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;

import org.apache.oro.text.regex.MalformedPatternException;

import com.jcraft.jsch.ChannelShell;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

import expect4j.Closure;
import expect4j.Expect4j;
import expect4j.ExpectState;
import expect4j.matches.Match;
import expect4j.matches.RegExpMatch;

public class SSHClient {

	private static final int COMMAND_EXECUTION_SUCCESS_OPCODE = -2;
	private static String ENTER_CHARACTER = "\r";
	private static final int SSH_PORT = 22;
	private List<String> lstCmds = new ArrayList<String>();
	private static String[] linuxPromptRegEx = new String[]{"\\>","#", "~#"};

	private Expect4j expect = null;
	private StringBuilder buffer = new StringBuilder();
	private String userName;
	private String password;
	private String host;

	/**
	 *
	 * @param host
	 * @param userName
	 * @param password
	 */
	public SSHClient(String host, String userName, String password) {
		this.host = host;
		this.userName = userName;
		this.password = password;
	}
	/**
	 *
	 * @param cmdsToExecute
	 */
	public String execute(List<String> cmdsToExecute) {
		this.lstCmds = cmdsToExecute;

		Closure closure = new Closure() {
			public void run(ExpectState expectState) throws Exception {
				buffer.append(expectState.getBuffer());
			}
		};
		List<Match> lstPattern =  new ArrayList<Match>();
		for (String regexElement : linuxPromptRegEx) {
			try {
				Match mat = new RegExpMatch(regexElement, closure);
				lstPattern.add(mat);
			} catch (MalformedPatternException e) {
				e.printStackTrace();
			} catch(Exception e) {
				e.printStackTrace();
			}
		}

		try {
			expect = SSH();
			boolean isSuccess = true;
			for(String strCmd : lstCmds) {
				isSuccess = isSuccess(lstPattern,strCmd);
				if (!isSuccess) {
					isSuccess = isSuccess(lstPattern,strCmd);
				}
			}

			checkResult(expect.expect(lstPattern));
		} catch (Exception ex) {
			ex.printStackTrace();
		} finally {
			closeConnection();
		}
		return buffer.toString();
	}
	/**
	 *
	 * @param objPattern
	 * @param strCommandPattern
	 * @return
	 */
	private boolean isSuccess(List<Match> objPattern,String strCommandPattern) {
		try {
			boolean isFailed = checkResult(expect.expect(objPattern));

			if (!isFailed) {
				expect.send(strCommandPattern);
				expect.send(ENTER_CHARACTER);
				return true;
			}
			return false;
		} catch (MalformedPatternException ex) {
			ex.printStackTrace();
			return false;
		} catch (Exception ex) {
			ex.printStackTrace();
			return false;
		}
	}
	/**
	 * Opens the SSH session and shell channel and wraps them in an Expect4j instance.
	 *
	 * @return
	 * @throws Exception
	 */
	private Expect4j SSH() throws Exception {
		JSch jsch = new JSch();
		Session session = jsch.getSession(userName, host, SSH_PORT);
		if (password != null) {
			session.setPassword(password);
		}
		Hashtable<String,String> config = new Hashtable<String,String>();
		config.put("StrictHostKeyChecking", "no");
		session.setConfig(config);
		session.connect(60000);
		ChannelShell channel = (ChannelShell) session.openChannel("shell");
		Expect4j expect = new Expect4j(channel.getInputStream(), channel.getOutputStream());
		channel.connect();
		return expect;
	}
	/**
	 *
	 * @param intRetVal
	 * @return
	 */
	private boolean checkResult(int intRetVal) {
		if (intRetVal == COMMAND_EXECUTION_SUCCESS_OPCODE) {
			return true;
		}
		return false;
	}
	/**
	 *
	 */
	private void closeConnection() {
		if (expect!=null) {
			expect.close();
		}
	}
	/**
	 *
	 * @param args
	 */
	public static void main(String[] args) {
		SSHClient ssh = new SSHClient("linux_host", "root", "password");
		List<String> cmdsToExecute = new ArrayList<String>();
		cmdsToExecute.add("ls");
		cmdsToExecute.add("pwd");
		cmdsToExecute.add("mkdir testdir");
		String outputLog = ssh.execute(cmdsToExecute);
		System.out.println(outputLog);
	}
}

If you have any difficulty executing it, play around with the command-execution loop and the command-prompt regex patterns.
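
If the prompt patterns never seem to match, it can also help to turn on JSch's own logging while debugging; a minimal sketch:

// crude logger that dumps everything JSch does to stdout
JSch.setLogger(new com.jcraft.jsch.Logger() {
	public boolean isEnabled(int level) {
		return true;
	}
	public void log(int level, String message) {
		System.out.println("JSCH: " + message);
	}
});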


							

Spring AOP: A Go through with Simple Example

AOP concepts

I am assuming that you are aware of what Aspect Oriented Programming is and how it is used in applications. I will briefly describe some AOP concepts and terminology; these terms are not Spring-specific.

  • Aspect: a cross-cutting feature that you can apply across your application. It lets you add common behaviour to your modules without interfering with your business logic (logging, performance monitoring, exception handling, transaction management, etc.).
  • Join point: a point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution.
  • Advice: the action taken by an aspect at a particular join point. Different types of advice include “around,” “before” and “after” advice. It is the piece of code that should be executed at a specific join point, for example some logging to run before any matched method invocation.
  •  Pointcut: a predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP, and Spring uses the AspectJ pointcut expression language by default.
  • Target object: object being advised by one or more aspects. Also referred to as the advised object. Since Spring AOP is implemented using runtime proxies, this object will always be a proxied object.
  • AOP proxy: an object created by the AOP framework in order to implement the aspect methods (advice method executions).

Types of Advice:

  • Before advice: advice that executes before a join point.
  • After returning advice: advice to be executed after a join point completes normally, for example if a method returns without throwing an exception.
  • After throwing advice: advice to be executed if a method exits by throwing an exception.

Spring AOP currently supports only method execution join points (advising the execution of methods on Spring beans). Field interception is not implemented, although support for field interception could be added without breaking the core Spring AOP APIs. If you need to advise field access and update join points, consider a language such as AspectJ.

Spring And Advice:

We will start with advice. Advice implementations in Spring are simply implementations of the org.aopalliance.intercept.MethodInterceptor interface. But that’s not a Spring class: Spring’s AOP implementation uses a standard AOP API from the AOP Alliance (a Java/J2EE AOP standards effort). The MethodInterceptor interface is actually a child of the org.aopalliance.intercept.Interceptor interface, which in turn is a child of another interface, org.aopalliance.aop.Advice.

The MethodInterceptor interface is very simple:

 public interface MethodInterceptor extends Interceptor {
        Object invoke(MethodInvocation invocation) throws Throwable;
}

Basically, when you write advice that intercepts a method, you have to implement one method, invoke, and you are given a MethodInvocation object to work with. The MethodInvocation object exposes information about the method being intercepted, and also gives you the hook for telling the invocation to go ahead and run.
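
As a quick illustration (a hedged sketch; the class name TimingInterceptor is just an example, not part of this article's sample project), a MethodInterceptor that times the intercepted call could look like this:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class TimingInterceptor implements MethodInterceptor {

	public Object invoke(MethodInvocation invocation) throws Throwable {
		long start = System.currentTimeMillis();
		try {
			// tell the intercepted method to go ahead and run
			return invocation.proceed();
		} finally {
			System.out.println(invocation.getMethod().getName()
					+ " took " + (System.currentTimeMillis() - start) + " ms");
		}
	}
}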

Spring has multiple alternatives to the basic MethodInterceptor. They allow you to do more specific things with advice. They include:

  • org.springframework.aop.MethodBeforeAdvice - Implementations of this interface have to implement this method:
void before(Method method, Object[] args, Object target) throws Throwable;

All you need to do is do what you need before the method is called (as the interface name implies).

  • org.springframework.aop.AfterReturningAdvice - This interface’s method will be called on the return from the invocation of a method. The method looks like this: 
void afterReturning(Object returnValue, Method method, Object[] args, Object target) throws Throwable;

You’ll notice it looks a whole lot like the before advice, but simply adds the method’s return value to the arguments.

  • org.springframework.aop.ThrowsAdvice - Instead of requiring you to implement a particular method, this is simply a ‘marker’ interface, and expects you to implement any number of methods that look like this:
void afterThrowing([Method], [args], [target], [some type of throwable] subclass)

Spring And PointCuts:

A pointcut is a predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP, and Spring uses the AspectJ pointcut expression language by default. Pointcuts in Spring implement the org.springframework.aop.Pointcut interface, which looks like this:

public interface Pointcut {
	ClassFilter getClassFilter();
	MethodMatcher getMethodMatcher();
}

The method matcher simply describes which methods for a given class are considered valid joinpoints for this pointcut.

Spring has different implementations for static vs. dynamic method-matching pointcuts. A static method matcher decides once, up front, which methods of a class are join points, while a dynamic method matcher is consulted at every method invocation (so it can also look at the actual arguments).

Spring also ships regular-expression-based pointcut implementations such as org.springframework.aop.support.JdkRegexpMethodPointcut (and, in older versions, a Perl5 regex variant).

A static method matcher, while less flexible (you can’t inspect the method invocation arguments), is by design much faster, as the check is performed only once rather than at every method invocation. A static-matching pointcut implementation looks like this:

public class MyBeanPointcut extends StaticMethodMatcherPointcut {
@Override
	public boolean matches(Method method, Class<?> arg1) {
		if (method.getName().toLowerCase().indexOf("aspect")!=-1) {
			return true;
		}
		return false;
	}
}

This requires implementing just one abstract method (although it’s possible to override other methods to customize behaviour). With this pointcut, any method whose name contains the string “aspect” becomes a join point for the associated advice.

Pointcut Advisor

Now we have to tie a pointcut to the advice. The most basic pointcut advisor is the org.springframework.aop.support.DefaultPointcutAdvisor class. We can do this with the following definitions in the Spring file.

<bean id="catchBeforeMethod" class="com.nik.aop.advice.CatchBeforeMethod" />

<bean id="pointcut" class="com.nik.aop.pointcut.MyBeanPointcut" />

<bean name="methodPointcut"
		class="org.springframework.aop.support.DefaultPointcutAdvisor">
		<property name="advice" ref="catchBeforeMethod" />
		<property name="pointcut" ref="pointcut" />
</bean>

Now, let’s glue it all together and create a very simple example of Spring AOP. I have a test bean with one string attribute, and we will change that attribute in an advice.

Spring AOP Proxy:

The basic way to create an AOP proxy in Spring is to use the ProxyFactoryBean. This gives complete control over the pointcuts and advice that will apply.
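
For completeness, the same proxy can also be assembled programmatically with org.springframework.aop.framework.ProxyFactory. This is only a hedged sketch using the TestBean, MyBeanPointcut and CatchBeforeMethod classes defined below; since TestBean implements no interfaces, CGLIB needs to be on the classpath, just as with the XML configuration:

// requires: org.springframework.aop.framework.ProxyFactory,
//           org.springframework.aop.support.DefaultPointcutAdvisor
ProxyFactory factory = new ProxyFactory(new TestBean());
factory.addAdvisor(new DefaultPointcutAdvisor(new MyBeanPointcut(), new CatchBeforeMethod()));
TestBean proxy = (TestBean) factory.getProxy();
proxy.getAspectName(); // matched by the pointcut, so the before advice runs first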

Following is my test bean, the target object that will be proxied.

public class TestBean {

	private String aspectName;

	public String getAspectName() {
		return aspectName;
	}

	public void setAspectName(String aspectName) {
		this.aspectName = aspectName;
	}

	@Override
	public String toString() {
		return "TestBean [aspectName=" + aspectName + "]";
	}
}

Define Advice:

public class CatchBeforeMethod implements MethodBeforeAdvice  {

	@Override
	public void before(Method method, Object[] arg1, Object target) throws Throwable {
		String aspectName = ((TestBean)target).getAspectName();
		System.out.println("Method Caught before execution: "+method.getName());
		System.out.println("Updating Bean in Method Before Advice.");
		((TestBean)target).setAspectName("BeforeMethod");
	}
}

Define Pointcut:

public class MyBeanPointcut extends StaticMethodMatcherPointcut {

	@Override
	public boolean matches(Method method, Class<?> arg1) {
		if (method.getName().toLowerCase().indexOf("aspect")!=-1) {
			return true;
		}
		return false;
	}
}

Spring file:

<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

	<bean id="aopBean" class="com.nik.aop.beans.TestBean">
		<property name="aspectName" value="No_Aspect" />
	</bean>

	<bean id="catchBeforeMethod" class="com.nik.aop.advice.CatchBeforeMethod" />
	<bean id="pointcut" class="com.nik.aop.pointcut.MyBeanPointcut" />

	<bean name="methodPointcut"
		class="org.springframework.aop.support.DefaultPointcutAdvisor">
		<property name="advice" ref="catchBeforeMethod" />
		<property name="pointcut" ref="pointcut" />
	</bean>

	<bean id="aopBeanProxy" class="org.springframework.aop.framework.ProxyFactoryBean">

		<property name="target" ref="aopBean" />

		<property name="interceptorNames">
			<list>
				<value>methodPointcut</value>
			</list>
		</property>
	</bean>

</beans>

Main Class:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainClass {

	public static void main(String[] args) {

		ApplicationContext appContext = new ClassPathXmlApplicationContext(new String[] { "/conf/spring_aop.xml" });
		TestBean aopBean = (TestBean) appContext.getBean("aopBeanProxy");
		String aspectName = aopBean.getAspectName();
		System.out.println("In Main Class **********************: ");
		System.out.println("	After MethodBefore Aspect: "+aspectName);
		System.out.println("Out Main Class **********************: ");
	}
}

Enjoy!

Inside Explanation Of Inverse Attribute in Hibernate

This post shows how the inverse attribute works in Hibernate. Essentially, the inverse attribute tells Hibernate which side of an association is responsible for maintaining the relationship: either the parent updates the relationship with the child while it is being saved, or the child updates the relationship itself. I will describe this with some examples.

I am assuming that you have good knowledge of:

  • Mapping a collection
  • Bidirectional Association
  • Parent Child Relationships

USING LIST…

Inverse=false

Following is the Hibernate mapping for this example. You can see that Parent has a list of children with the attribute inverse="false", and that the Child mapping has no reference back to the parent (I mean no many-to-one). I am assuming cascade="all" in all cases.

<class name="com.nik.dbobjects.Parent" table="parent">
		<id name="idParent" type="long" column="id_parent">
			<generator class="increment" />
		</id>
		<property name="name" type="string" column="name" />
		<list name="childList" cascade="all" inverse="false">
			<key column="id_parent" />
			<index column="idx" />
			<one-to-many class="com.nik.dbobjects.Child" />
		</list>
</class>
<class name="com.nik.dbobjects.Child" table="child">
		<id name="idChild" type="long" column="id_child">
			<generator class="increment" />
		</id>
		<property name="name" type="string" column="name" />
</class>
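
The Parent and Child entity classes themselves are not shown in this post; a minimal sketch of what they might look like (field and constructor names here are assumptions, not the original code; java.util.List/ArrayList imports omitted):

public class Parent {
	private long idParent;
	private String name;
	private List<Child> childList = new ArrayList<Child>();

	public Parent() {}
	public Parent(String name) { this.name = name; }

	public List<Child> getChildList() { return childList; }
	// remaining getters and setters omitted
}

public class Child {
	private long idChild;
	private String name;
	private Parent parent; // only mapped in the inverse="true" examples below

	public Child() {}
	public Child(String name) { this.name = name; }

	public void setParent(Parent parent) { this.parent = parent; }
	// remaining getters and setters omitted
}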

INSERT Operation

Code snippet:

Parent parent = new Parent("parent");
Child child1 = new Child("child_1");
parent.getChildList().add(child1);
session.save(parent);

Above code will execute following sql queries:

insert into parent (name, id_parent) values (?, ?)// insert parent
insert into child (name, id_child) values (?, ?) // insert child without foreign key
update child set id_parent=?, idx=? where id_child=?//update child with foreign key and index

You can see that Hibernate executes three queries to save the parent in this case. Since inverse="false", it is the parent’s responsibility to update the child’s side of the relationship: it first saves the child and then executes an extra query to update the foreign key and the list index column (the id_parent and idx columns). So, if you have n child elements, there will be n extra queries just to update the relationship.

UPDATE Operation

Code Sample:

                Parent parent1 = (Parent)session.load(Parent.class, 1L);
		Parent parent2 = (Parent)session.load(Parent.class, 2L);

		Child c = parent1.getChildList().get(0);

		parent1.getChildList().remove(c);
		parent2.getChildList().add(c);

		session.flush();

SQL queries:

update child set id_parent=null, idx=null where id_parent=? and id_child=?//remove parent
update child set id_parent=null, idx=null where id_parent=? and id_child=?//remove parent
update child set id_parent=?, idx=? where id_child=?//set new parent and update index
update child set id_parent=?, idx=? where id_child=?//set new parent and update index.

We can see that the parent has to update its children’s indexes whenever a child is removed from the list. It first sets the parent id and index columns of the affected child rows to null, and then updates them according to the new parent reference (this follows List semantics for maintaining the order of its elements).

Inverse = true

Now, set inverse="true" in the child-list mapping of the Parent class, and also add a many-to-one reference to the Child mapping:

<many-to-one name="parent" class="com.nik.dbobjects.Parent"
		column="id_parent" />

INSERT Operation

Example code:

                Parent parent = new Parent("parent");
		Child child1 = new Child("child_1");
		child1.setParent(parent);
		parent.getChildList().add(child1);

		session.save(parent);

It will execute following SQL queries:

insert into parent (name, id_parent) values (?, ?)
insert into child (name, id_parent, id_child) values (?, ?, ?)

As you can see, the relationship (the id_parent column in the child table) is set as part of the child insert: it is now the child’s responsibility. Hibernate executes only two queries to save the parent in this case; no extra query is needed to update the relationship.

But wait. In this case, the index column (idx, which the childList of the Parent class uses to maintain the order of its elements) will be null in the child table. So the next time you access the child list through the parent (parent.getChildList().get(0)), Hibernate will throw an exception saying "null index column for collection: Parent.childList".

So we can say that if you use a List as the one-to-many collection and the inverse attribute is false, there will be n+n queries in total for saving n child elements. If inverse="true", there is no extra query per child element, but you will also not be able to load the child elements, because the index column of the child rows will be null.

USING SET…

Now let us take a Set as the collection of child elements in the Parent class. Since Set is not an indexed collection, no index column is required in the child table.
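
The post does not show the changed mapping; assuming the same tables minus the index column, the list element in the Parent mapping would be replaced by a set, roughly like this (use inverse="true" for the later examples, and the Java childList field becomes a java.util.Set):

<set name="childList" cascade="all" inverse="false">
	<key column="id_parent" />
	<one-to-many class="com.nik.dbobjects.Child" />
</set>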

Inverse = false

For an insert operation, a Set behaves the same way a List does: Hibernate first inserts the child element and then updates the relationship with an update query.

UPDATE Operation

Code Snippet:

                Parent parent1 = (Parent)session.load(Parent.class, 1L);
		Parent parent2 = (Parent)session.load(Parent.class, 2L);

		Child c = parent1.getChildList().iterator().next();

		parent1.getChildList().remove(c);
		parent2.getChildList().add(c);

Will execute following queries:

update child set id_parent=null where id_parent=? and id_child=?
update child set id_parent=? where id_child=?

We can see that it first sets the parent id to null for the child being removed, and then sets the parent id of the new parent. This can be optimized using inverse="true".

Inverse=true

With inverse="true", a Set has an advantage: since it is not an indexed collection, it does not need to maintain an index column the way a List does.

INSERT Operation

Code sample:

                Parent parent = new Parent("parent");
		Child child1 = new Child("child_1");
		child1.setParent(parent);
		parent.getChildList().add(child1);

		session.save(parent);

Will execute following queries:

insert into parent (name, id_parent) values (?, ?)
insert into child (name, id_parent, id_child) values (?, ?, ?)

So no extra queries are required to update the relationship.

UPDATE Operation

Code sample:

                Parent parent1 = (Parent)session.load(Parent.class, 1L);
		Parent parent2 = (Parent)session.load(Parent.class, 2L);

		Child c = parent1.getChildList().iterator().next();

		parent1.getChildList().remove(c);// remove child from parent 1

		c.setParent(parent2); //set parent as parent 2
		parent2.getChildList().add(c);//add child to parent 2

		session.flush();

Will execute following query:

update child set name=?, id_parent=? where id_child=?

We can see that only one update query is needed to move the child from parent 1 to parent 2. Here the child updates itself, rather than first setting the parent id to null and then updating it to the new parent.

Conclusion

Using and understanding inverse="true" is very important for optimizing your code. Prefer inverse="true" on bidirectional associations. You must also have noticed the performance difference between using a List and a Set: a Set is preferable whenever you do not need an indexed collection.

JBPM 5 in simple steps

A lot has been said about jBPM and its fate since the release of version 4. Two jBPM gurus, Tom Baeyens and Joram Barrez, left the project and started a new one named Activiti, which is backed by Alfresco. The current stable release of jBPM (4.4) is not commercially supported by Red Hat in the JBoss SOA Platform and is not as promising as the earlier releases. Still, jBPM 5 is a very good option for a BPM solution: it is integrated with Drools, so you can easily combine a rule system with your BPM, and it supports BPMN 2.0.

In this post, I will show you some simple steps to get started with jBPM 5. It is a very simple, low-level example for understanding the jBPM API: the execution of a simple process definition, with sample code.

KnowledgeBase: 

The jBPM API lets you first create a knowledge base. The knowledge base is a repository of all the application’s knowledge definitions; it may contain rules, processes, functions, etc. The knowledge base itself does not contain instance data, known as facts; instead, sessions are created from the knowledge base, into which data can be inserted and from which process instances can be started. The following code snippet shows how to create a knowledge base consisting of only one process definition.

KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add(ResourceFactory.newClassPathResource("TestProcess.bpmn"), ResourceType.BPMN2);
KnowledgeBase kbase = kbuilder.newKnowledgeBase();
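
It is usually worth checking the builder for errors before creating the knowledge base; a small optional addition (the exception message is just an example):

if (kbuilder.hasErrors()) {
    System.err.println(kbuilder.getErrors());
    throw new IllegalStateException("Could not parse the process definition.");
}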

Knowledge Session:

The next step is to start a session to interact with the knowledge base we just created. The following code snippet shows how easy it is to create a session based on that knowledge base and to start a process.

StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ProcessInstance processInstance = ksession.startProcess("first_test");

Events: 

The jBPM API allows you to listen to events during process execution in order to interact with the jBPM runtime. A ProcessEventListener can be used to listen to process-related events, such as starting or completing a process, or entering and leaving a node. You can register your listener class with the following line of code:

ksession.addEventListener(new TestEventListener());

Input Parameters:

You can also provide input parameters, which helps when executing your business logic while the process runs. The following code sample shows how to pass input parameters to a process execution.

Map<String, Object> inputParams = new HashMap<String, Object>();
inputParams.put("name", "i am an input parameter.");
ProcessInstance processInstance = ksession.startProcess("first_test", inputParams);

You can access this parameter during process execution in your event listener implementation.

public class TestEventListener implements ProcessEventListener {

	@Override
	public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
		System.out.println("Before Node triggered event received for node: " + event.getNodeInstance().getNodeName());
		Object obj = event.getNodeInstance().getVariable("name");
		System.out.println("Input Parameter of ProcessInstance: " + obj.toString());
	}

	// the remaining ProcessEventListener methods are omitted here; see the full listing below
}

Now, let’s put it all together in a test class. The following are the libraries I have included in my Eclipse workspace. You can download them from here.

  • antlr-runtime.jar
  • drools-compiler.jar
  • drools-core.jar
  • jbpm-bam.jar
  • jbpm-bpmn2.jar
  • jbpm-flow-builder.jar
  • jbpm-flow.jar
  • knowledge-api.jar
  • mvel2.jar
  • org.eclipse.jdt.core_3.3.0.v_771.jar
  • xstream.jar

BPMN 2.0 Process Definition

Following is the sample process definition I used in my test class. It contains three nodes: a start node, a script task and an end node. The script task simply prints something when it executes. The id attribute of the process tag is what we use to start the workflow from the session.

<definitions id="Definition" targetNamespace="http://www.jboss.org/drools" typeLanguage="http://www.java.com/javaTypes"
expressionLanguage="http://www.mvel.org/2.0"
xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
xmlns:g="http://www.jboss.org/drools/flow/gpd"
xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
xmlns:tns="http://www.jboss.org/drools">

	<process processType="Private" isExecutable="true" id="first_test" name="Hello">

		<!-- nodes -->
		<startEvent id="_1" name="Start" >
		</startEvent>

		<scriptTask id="_2" name="Hello" >
			<script>System.out.println("This is my first test process.");</script>
		</scriptTask>

		<endEvent id="_3" name="End" >
			<terminateEventDefinition/>
		</endEvent>

		<!-- connections -->
		<sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
		<sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />
	</process>
</definitions>

TestEventListener.java

import org.drools.event.process.ProcessCompletedEvent;
import org.drools.event.process.ProcessEventListener;
import org.drools.event.process.ProcessNodeLeftEvent;
import org.drools.event.process.ProcessNodeTriggeredEvent;
import org.drools.event.process.ProcessStartedEvent;
import org.drools.event.process.ProcessVariableChangedEvent;

public class TestEventListener implements ProcessEventListener {

	@Override
	public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
		System.out.println("Before Node triggered.   "+event.getNodeInstance().getNodeName());		
		Object obj  = event.getNodeInstance().getVariable("name");
		System.out.println("Input Parameter of ProcessInstance: "+obj.toString());
	}
	@Override
	public void afterNodeLeft(ProcessNodeLeftEvent arg0) {}

	@Override
	public void afterNodeTriggered(ProcessNodeTriggeredEvent arg0) {}

	@Override
	public void afterProcessCompleted(ProcessCompletedEvent arg0) {}

	@Override
	public void afterProcessStarted(ProcessStartedEvent arg0) {}

	@Override
	public void afterVariableChanged(ProcessVariableChangedEvent arg0) {}

	@Override
	public void beforeNodeLeft(ProcessNodeLeftEvent arg0) {}

	@Override
	public void beforeProcessCompleted(ProcessCompletedEvent arg0) {}

	@Override
	public void beforeProcessStarted(ProcessStartedEvent arg0) {}

	@Override
	public void beforeVariableChanged(ProcessVariableChangedEvent arg0) {}
}

FirstTest.java

import java.util.HashMap;
import java.util.Map;

import org.drools.KnowledgeBase;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.process.ProcessInstance;

public class FirstTest {

	public static void main(String[] args) {

		KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();		
		kbuilder.add(ResourceFactory.newClassPathResource("conf/TestProcess.bpmn"), ResourceType.BPMN2);

		KnowledgeBase kbase = kbuilder.newKnowledgeBase();
		StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
		ksession.addEventListener(new TestEventListener());

		Map<String, Object> inputParams = new HashMap<String, Object>();
		inputParams.put("name", "i am an input parameter.");
		ProcessInstance processInstance = ksession.startProcess("first_test", inputParams);
	}
}

 

WMI with JAVA using J-Interop

A little about WMI

Windows Management Instrumentation (WMI) is Microsoft’s implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the Distributed Management Task Force (DMTF). WMI is used to access and manage Windows components: the system, applications, the network, devices and so on. The connection to a machine is managed through DCOM, so a little knowledge of DCOM is helpful here. You can go to MSDN for more details about WMI.

J-Interop

There are some good libraries available for using WMI from Java, including J-Interop, the JACOB Project and J-Integra. Among these, I prefer J-Interop because it is a free and open-source API, it provides a pure DCOM bridge without any other dependency, and it is written entirely in Java without any JNI code.

Manage Windows Service Using WMI

Now, let’s see an example of using WMI from Java. This example uses the Win32_Service class to demonstrate WMI operations with the J-Interop API: we will start and stop Windows services.

Step 1: Connect to WBEM Service

The following code sample shows how to initialize a DCOM session and connect to the remote DCOM service using J-Interop. It uses the SWbemLocator object to connect to SWbemServices; the SWbemServices object provides access to WMI on a local or remote machine. It calls the ConnectServer method to connect to SWbemServices. Use an administrator-level user when connecting to the remote machine in this example.

JISession dcomSession  = JISession.createSession(domainName,userName,password);
dcomSession.useSessionSecurity(false);

JIComServer comServer = new JIComServer(valueOf("WbemScripting.SWbemLocator"),hostIP,dcomSession);
IJIDispatch wbemLocator = (IJIDispatch) narrowObject(comServer.createInstance().queryInterface(IID));
//parameters to connect to WbemScripting.SWbemLocator
Object[] params = new Object[] {
			new JIString(hostIP),//strServer
			new JIString(win32_namespace),//strNamespace
			JIVariant.OPTIONAL_PARAM(),//strUser 
			JIVariant.OPTIONAL_PARAM(),//strPassword 
			JIVariant.OPTIONAL_PARAM(),//strLocale 
			JIVariant.OPTIONAL_PARAM(),//strAuthority
			new Integer(0),//iSecurityFlags 
			JIVariant.OPTIONAL_PARAM()//objwbemNamedValueSet
			};

JIVariant results[] = wbemLocator.callMethodA("ConnectServer", params);
IJIDispatch wbemServices = (IJIDispatch) narrowObject(results[0].getObjectAsComObject());

(domainName = domain name of the remote machine, hostIP = IP address of the remote machine, username = an administrator-level user, password = that user’s password)

Step 2: Get Win32_Service Instance.

Once you obtain a reference to the SWbemServices object, you can invoke any method of this class. For example, the SWbemServices.InstancesOf method returns the instances of any Win32 class.
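
A minimal sketch of calling InstancesOf through the same IJIDispatch wrapper (assuming the wbemServices dispatch obtained in Step 1; the optional parameters are simply left unset):

Object[] instParams = new Object[] {
        new JIString("Win32_Service"),   // strClass
        JIVariant.OPTIONAL_PARAM(),      // iFlags
        JIVariant.OPTIONAL_PARAM()       // objWbemNamedValueSet
};
JIVariant[] instances = wbemServices.callMethodA("InstancesOf", instParams);
IJIDispatch instancesSet = (IJIDispatch) narrowObject(instances[0].getObjectAsComObject());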

You can also use the WMI Query Language (WQL) to achieve the same thing, as below:

final int RETURN_IMMEDIATE = 0x10;
final int FORWARD_ONLY = 0x20;
Object[] params = new Object[] {
        new JIString("SELECT * FROM Win32_Service"),
        JIVariant.OPTIONAL_PARAM(),
        new JIVariant(new Integer(RETURN_IMMEDIATE + FORWARD_ONLY))
};
JIVariant[] servicesSet = wbemServices.callMethodA("ExecQuery", params);
IJIDispatch wbemObjectSet = (IJIDispatch) narrowObject(servicesSet[0].getObjectAsComObject());

Step 3: Execute Methods.

Now you have the instances of the Win32_Service class, and you can call methods of that class using the following code. Since the query returns multiple service instances, you have to enumerate them to get each service’s IJIDispatch instance.

JIVariant newEnumvariant = wbemObjectSet.get("_NewEnum");
IJIComObject enumComObject = newEnumvariant.getObjectAsComObject();
IJIEnumVariant enumVariant = (IJIEnumVariant) narrowObject(enumComObject.queryInterface(IJIEnumVariant.IID));
 
Object[] elements = enumVariant.next(1);
JIArray aJIArray = (JIArray) elements[0];
 
JIVariant[] array = (JIVariant[]) aJIArray.getArrayInstance();
for (JIVariant variant : array) {
    IJIDispatch wbemObjectDispatch = (IJIDispatch) narrowObject(variant.getObjectAsComObject());
 
    JIVariant returnStatus = wbemObjectDispatch.callMethodA("StopService");
 
    System.out.println(returnStatus.getObjectAsInt());
}

The following code shows a complete Java class for starting and stopping Windows services using WMI.

package com.wmi.windows;

import static org.jinterop.dcom.core.JIProgId.valueOf;
import static org.jinterop.dcom.impls.JIObjectFactory.narrowObject;
import static org.jinterop.dcom.impls.automation.IJIDispatch.IID;
import java.util.logging.Level;
import org.jinterop.dcom.common.JIException;
import org.jinterop.dcom.common.JIRuntimeException;
import org.jinterop.dcom.common.JISystem;
import org.jinterop.dcom.core.IJIComObject;
import org.jinterop.dcom.core.JIArray;
import org.jinterop.dcom.core.JIComServer;
import org.jinterop.dcom.core.JISession;
import org.jinterop.dcom.core.JIString;
import org.jinterop.dcom.core.JIVariant;
import org.jinterop.dcom.impls.automation.IJIDispatch;
import org.jinterop.dcom.impls.automation.IJIEnumVariant;

public class ServiceManager {

	private static String domainName = "";
	private static String userName="administrator";
	private static String password="";
	private static String hostIP ="127.0.0.1";
	private static final String win32_namespace = "ROOT\\CIMV2";

	private static final int STOP_SERVICE = 0;
	private static final int START_SERVICE = 1;

	private JISession dcomSession = null;
	/**
	 *
	 * @param args
	 */
	public static void main(String[] args) {
		ServiceManager manager = new ServiceManager();
		manager.stopService(domainName, hostIP, userName, password, "MySql");//stops a service named MySql
	}
	/**
	 * Starts a given service if its stopped.
	 *
	 * @param domainName
	 * @param hostname
	 * @param username
	 * @param password
	 * @param serviceName
	 */
	public void startService(String domainName, String hostname, String username, String password, String serviceName) {
		execute(domainName, hostname, username, password, serviceName, START_SERVICE);
	}
	/**
	 * Stops a given service if its running.
	 *
	 * @param domainName
	 * @param hostname
	 * @param username
	 * @param password
	 * @param serviceName
	 */
	public void stopService(String domainName, String hostname, String username, String password, String serviceName) {
		execute(domainName, hostname, username, password, serviceName, STOP_SERVICE);
	}
	/**
	 * Starts/Stops a given service of remote machine as hostname. 
	 *
	 * @param domainName
	 * @param hostname
	 * @param username
	 * @param password
	 * @param serviceName
	 * @param action
	 */
	public void execute(String domainName, String hostname, String username, String password, String serviceName, int action) {

		try {
			IJIDispatch wbemServices = createCOMServer();

			final int RETURN_IMMEDIATE = 0x10;
			final int FORWARD_ONLY = 0x20;
			Object[] params = new Object[] {
					new JIString("SELECT * FROM Win32_Service WHERE Name = '" + serviceName + "'"),
					JIVariant.OPTIONAL_PARAM(),
					new JIVariant(new Integer(RETURN_IMMEDIATE + FORWARD_ONLY))
			};
			JIVariant[] servicesSet = wbemServices.callMethodA("ExecQuery", params);
			IJIDispatch wbemObjectSet = (IJIDispatch) narrowObject(servicesSet[0].getObjectAsComObject());

			JIVariant newEnumvariant = wbemObjectSet.get("_NewEnum");
			IJIComObject enumComObject = newEnumvariant.getObjectAsComObject();
			IJIEnumVariant enumVariant = (IJIEnumVariant) narrowObject(enumComObject.queryInterface(IJIEnumVariant.IID));

			Object[] elements = enumVariant.next(1);
			JIArray aJIArray = (JIArray) elements[0];

			JIVariant[] array = (JIVariant[]) aJIArray.getArrayInstance();
			for (JIVariant variant : array) {
				IJIDispatch wbemObjectDispatch = (IJIDispatch) narrowObject(variant.getObjectAsComObject());

				// Print object as text.
				JIVariant[] v = wbemObjectDispatch.callMethodA("GetObjectText_", new Object[] { 1 });
				System.out.println(v[0].getObjectAsString().getString());

				// Start or Stop the service
				String methodToInvoke = (action == START_SERVICE) ? "StartService" : "StopService";
				JIVariant returnStatus = wbemObjectDispatch.callMethodA(methodToInvoke);
				System.out.println("Return status: "+returnStatus.getObjectAsInt()); //if return code = 0 success.See msdn for more details about the method.
			}
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			if (dcomSession != null) {
				try {
					JISession.destroySession(dcomSession);
				} catch (Exception ex) {
					ex.printStackTrace();
				}
			}
		}
	}
	/**
	 * Initialize DCOM session and connect to SWBEM service on given host machine.
	 * @return
	 */
	private IJIDispatch createCOMServer() { 
		JIComServer comServer;
		try {           
			JISystem.getLogger().setLevel(Level.WARNING);
			JISystem.setAutoRegisteration(true);
			dcomSession  = JISession.createSession(domainName,userName,password);
			dcomSession.useSessionSecurity(false);
			comServer = new JIComServer(valueOf("WbemScripting.SWbemLocator"),hostIP,dcomSession);

			IJIDispatch wbemLocator = (IJIDispatch) narrowObject(comServer.createInstance().queryInterface(IID));
			//parameters to connect to WbemScripting.SWbemLocator
			Object[] params = new Object[] {
					new JIString(hostIP),//strServer
					new JIString(win32_namespace),//strNamespace
					JIVariant.OPTIONAL_PARAM(),//strUser 
					JIVariant.OPTIONAL_PARAM(),//strPassword 
					JIVariant.OPTIONAL_PARAM(),//strLocale 
					JIVariant.OPTIONAL_PARAM(),//strAuthority
					new Integer(0),//iSecurityFlags 
					JIVariant.OPTIONAL_PARAM()//objwbemNamedValueSet
			};
			JIVariant results[] = wbemLocator.callMethodA("ConnectServer", params);
			IJIDispatch wbemServices = (IJIDispatch) narrowObject(results[0].getObjectAsComObject());
			return wbemServices;
		} catch (JIException jie) {
			System.out.println(jie.getMessage());
			jie.printStackTrace();
		} catch (JIRuntimeException jire) {
			jire.printStackTrace();
		} catch (Exception e) {
			e.printStackTrace();
		}
		// Note: the session is deliberately not destroyed here. Destroying it in a
		// finally block would invalidate the wbemServices dispatch just returned on
		// success; cleanup happens in the caller's finally block (see execute()).
		return null;
	}
}

 

 

My First Post..

I don’t know what to write in first post. But you will get a lot from whatever experience I have.
