ActiveRecord Naming Conventions with Spring JDBC

One of the best choices we made at my current job was splitting the web development from the heavy message processing system, allowing the former to be written in Ruby on Rails and the latter to be written in Java. However, for ease of development, we allowed the two code bases to share a single database. What then to do about database naming conventions? Those familiar with ActiveRecord will know that it is very particular about its naming conventions, so it was up to the Java code to play nicely. The first solution to present itself required hardcoding database names in each Java object, but a more elegant solution seemed possible and, after the discovery of one small but important third-party project, it revealed itself.

Inflection

In Rails, the class responsible for converting object names to database names is called Inflections. For those unfamiliar with the term (as I was), inflection is the modification of a word to express different grammatical categories. Armed with this new knowledge, I set about Googling and was relieved to find that Tom White had already implemented a Java Inflector. However, the Java Inflector only implements the pluralization aspect of the Rails Inflections; for this to work I was going to need some additional string manipulation tools. Luckily, the old Apache Commons StringUtils was there to lend a helping hand. Finally, confident in my string manipulation tools, I moved on to the JDBC side of things.
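
To make the conversion concrete, here is a minimal sketch of what these two libraries give you; the class and property names are just examples:

import static org.apache.commons.lang.StringUtils.join;
import static org.apache.commons.lang.StringUtils.lowerCase;
import static org.apache.commons.lang.StringUtils.splitByCharacterTypeCamelCase;
import static org.jvnet.inflector.Noun.pluralOf;

public class NamingSketch {
  // Rails-style underscore: "OrderItem" -> "order_item"
  static String underscore( final String camelCasedWord ) {
    return lowerCase( join( splitByCharacterTypeCamelCase( camelCasedWord ), '_' ) );
  }

  public static void main( final String[] args ) {
    System.out.println( pluralOf( underscore( "OrderItem" ) ) ); // "order_items" -- the ActiveRecord table name
    System.out.println( underscore( "createdAt" ) );             // "created_at"  -- the ActiveRecord column name
  }
}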

ORMs

When dealing with mapping objects to and from a database, Hibernate is always the name that pops up. I’ve worked with it in the past and, while incredibly powerful, I always felt like I was wielding a hatchet when what I wanted was a surgical knife. I’ve heard its configuration has been simplified with JPA, but it’s still a beast of a project that relies on a bit more magic than I’m comfortable with. In a more recent project I played with iBatis (now MyBatis) and enjoyed the simplicity but felt overwhelmed by the XML configuration. Version 3.0 supports annotation configuration, which greatly simplifies this, but I was looking for something based on naming conventions so I wouldn’t have to supply the mapping myself. I don’t mind getting down and dirty with some SQL, so I opted for Spring JDBC. Spring provides a super thin layer on top of raw JDBC which takes what is a fairly developer-hostile API and makes it a breeze to use, all while adding some wonderful utilities to speed development.

Putting It All Together

AbstractDbObj

The first step was to create an AbstractDbObj for all other database objects to inherit from. This is where the logic for creating database names using the Inflector and StringUtils lives, along with some other shared code.

import static com.google.common.collect.Collections2.filter;
import static com.google.common.collect.Collections2.transform;
import static org.apache.commons.lang.StringUtils.join;
import static org.apache.commons.lang.StringUtils.lowerCase;
import static org.apache.commons.lang.StringUtils.splitByCharacterTypeCamelCase;
import static org.jvnet.inflector.Noun.pluralOf;
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.commons.beanutils.BeanMap;
import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;
import org.apache.commons.lang.builder.ReflectionToStringBuilder;
import org.apache.commons.lang.builder.ToStringStyle;
import com.google.common.base.Function;
import com.google.common.base.Predicate;
import com.google.common.collect.ImmutableSet;

public abstract class AbstractDbObj  implements DbObj {
  private static final String[] EXCLUDED_FIELD_NAMES = new String[] { "columnNames", "map", "fieldNames", "tableName" };
  private static final Predicate<String> IS_COLUMN = new Predicate<String>() {
    private final ImmutableSet<String> notColumns = ImmutableSet.of( "class", "tableName", "columnNames", "fieldNames" );
    public boolean apply( final String propName ) {
      return !notColumns.contains( propName );
    }
  };

  private static final ConcurrentMap<Class<? extends DbObj>, String> TABLE_NAME_LOOKUP = new ConcurrentHashMap<Class<? extends DbObj>, String>();

  private static final Function<String, String> TO_COLUMN = new Function<String, String>() {
    public String apply( final String fieldName ) {
      return underscore( fieldName );
    }
  };

  private static String tableName( final Class<? extends DbObj> clazz ) {
    return pluralOf( underscore( clazz.getSimpleName() ) );
  }

  protected static String underscore( final String camelCasedWord ) {
    return lowerCase( join( splitByCharacterTypeCamelCase( camelCasedWord ), '_' ) );
  }

  private Integer id;
  private final Collection<String> fieldNames;
  private final String tableName;
  private final Map<String, Object> map;
  private final Collection<String> columnNames;

  @SuppressWarnings( "unchecked" )
  protected AbstractDbObj() {
    super();
    final Class<? extends DbObj> clazz = getClass();
    String tblName = TABLE_NAME_LOOKUP.get( clazz );
    if ( tblName == null ) {
      tblName = tableName( clazz );
      TABLE_NAME_LOOKUP.put( clazz, tblName );
    }
    tableName = tblName;
    final BeanMap beanMap = new BeanMap( this );
    fieldNames = filter( beanMap.keySet(), IS_COLUMN );
    columnNames = transform( fieldNames, TO_COLUMN );
    map = beanMap;
  }

  @SuppressWarnings( "unchecked" )
  protected AbstractDbObj( final String tableName ) {
    super();
    this.tableName = tableName;
    final BeanMap beanMap = new BeanMap( this );
    fieldNames = filter( beanMap.keySet(), IS_COLUMN );
    columnNames = transform( fieldNames, TO_COLUMN );
    map = beanMap;
  }

  @Override
  public boolean equals( final Object obj ) {
    return EqualsBuilder.reflectionEquals( this, obj, EXCLUDED_FIELD_NAMES );
  }

  public String getColumnName( final String fieldName ) {
    return underscore( fieldName );
  }

  public Collection<String> getColumnNames() {
    return columnNames;
  }

  public Collection<String> getFieldNames() {
    return fieldNames;
  }

  public Integer getId() {
    return id;
  }

  public String getTableName() {
    return tableName;
  }

  @Override
  public int hashCode() {
    return HashCodeBuilder.reflectionHashCode( this, EXCLUDED_FIELD_NAMES );
  }

  public void setId( final Integer id ) {
    this.id = id;
  }

  public Map<String, Object> toMap() {
    return map;
  }

  @Override
  public String toString() {
    return new ReflectionToStringBuilder( this ).setExcludeFieldNames( EXCLUDED_FIELD_NAMES ).toString();
  }
}

A few things to point out. First, I’m making use of the Google Collections Function and Predicate. For more info on that, check out the developer.com article. Second, I’m using the Apache Commons BeanUtils BeanMap, which uses reflection to easily create a property-to-value map of a JavaBean. Finally, I’m storing the table names in a lookup since the pluralization is a very expensive operation. The rest should hopefully be fairly straightforward.
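
To illustrate what the base class buys you, here is a hypothetical subclass; the User class and its fields are invented for this example, but the derived names follow from the code above:

public class User extends AbstractDbObj {
  private String firstName;
  private String emailAddress;

  public String getFirstName() { return firstName; }
  public void setFirstName( final String firstName ) { this.firstName = firstName; }
  public String getEmailAddress() { return emailAddress; }
  public void setEmailAddress( final String emailAddress ) { this.emailAddress = emailAddress; }
}

// new User().getTableName()   -> "users"
// new User().getFieldNames()  -> [ "emailAddress", "firstName", "id" ]   (order not guaranteed)
// new User().getColumnNames() -> [ "email_address", "first_name", "id" ] (order not guaranteed)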

SimpleJdbcInsert

Using Spring’s SimpleJdbcInsert, combined with the heavy lifting in the abstract class, insertions are practically free as you can see in the following code.

// Get Datasource from Spring Context
SimpleJdbcInsert jdbcInsert = new SimpleJdbcInsert( dataSource ).withTableName( dbObj.getTableName() ).usingGeneratedKeyColumns( "id" );
final int id = jdbcInsert.executeAndReturnKey( new BeanPropertySqlParameterSource( dbObj ) ).intValue();
dbObj.setId( id );

Note: you can reuse a SimpleJdbcInsert since it is thread safe; I created a new one in the above code just to show how it is done.
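
As a sketch of that reuse, a DAO could build the insert object once and share it across calls; the UserDao below and the hypothetical User class from earlier are illustrations, not part of the actual code base:

import javax.sql.DataSource;
import org.springframework.jdbc.core.namedparam.BeanPropertySqlParameterSource;
import org.springframework.jdbc.core.simple.SimpleJdbcInsert;

public class UserDao {
  private final SimpleJdbcInsert userInsert;

  public UserDao( final DataSource dataSource ) {
    // Built once and reused across threads since SimpleJdbcInsert is thread safe
    this.userInsert = new SimpleJdbcInsert( dataSource ).withTableName( new User().getTableName() ).usingGeneratedKeyColumns( "id" );
  }

  public void insert( final User user ) {
    user.setId( userInsert.executeAndReturnKey( new BeanPropertySqlParameterSource( user ) ).intValue() );
  }
}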

SimpleJdbcTemplate

After inserts, the other two pieces of required functionality are query and update, both of which are supplied by Spring’s SimpleJdbcTemplate. The trick here is to make generous use of Spring’s BeanPropertySqlParameterSource and BeanPropertyRowMapper, which effortlessly map between JavaBean properties and database columns.

Let’s start with the simple case of deleting an object.


simpleJdbcTemplate.update( "DELETE FROM " + dbObj.getTableName() + " WHERE id = :id", new BeanPropertySqlParameterSource( dbObj ) );

You’ll notice the string “id = :id” in the query. This allows Spring to do value substitution, using the BeanPropertySqlParameterSource to replace :id with the actual value of getId() from the dbObj. With this in mind we can step up to the more complicated update case.

final Collection<String> parts = buildUpdateParts( dbObj );
if ( parts.isEmpty() ) { return; }
final String query = "UPDATE " + dbObj.getTableName() + " SET " + join( parts, ", " ) + " WHERE id = :id";
simpleJdbcTemplate.update( query, new BeanPropertySqlParameterSource( dbObj ) );
// IS_ID is a Predicate<String> (not shown) matching the "id" field, which is handled separately in the WHERE clause
private static final Collection<String> buildUpdateParts( final DbObj dbObj ) {
  return transform( filter( dbObj.getFieldNames(), not( IS_ID ) ), new BuildPart( dbObj ) );
}
private static final class BuildPart implements Function<String, String> {
  private final DbObj dbObj;
  public BuildPart( final DbObj dbObj ) {
    this.dbObj = dbObj;
  }
  public String apply( final String propName ) {
    return dbObj.getColumnName( propName ) + " = :" + propName;
  }
}

This code again makes use of Google Collections, but the point is that it takes the field names and column names from the DbObj and creates snippets of SQL setting each column name to its field name identifier (i.e. the field name prefixed with ‘:’), just as was done in the deletion case with “id = :id”. Continuing to build on the previous examples, we finally come to the query case. This one is a bit different since we are interested in reading results from the database as well as building a query, so we will have to deal with the BeanPropertyRowMapper.

final Collection<String> parts = buildQueryParts( dbObj );
if ( parts.isEmpty() ) { return ImmutableList.of(); }
final String query = "SELECT " + columnNames + " FROM " + dbObj.getTableName() + " WHERE " + join( parts, " AND " );
return result = simpleJdbcTemplate.query( query, new BeanPropertyRowMapper<DbObj>( dbObj.getClass() ), new BeanPropertySqlParameterSource( dbObj ) );
private static final Collection<String> buildQueryParts( final LocaDbObj dbObj ) {
  return transform( filter( dbObj.getFieldNames(), new HasValue( dbObj.toMap() ) ), new BuildPart( dbObj ) );
}
private static final class HasValue implements Predicate<String> {
  private final Map<String, Object> map;
  public HasValue( final Map<String, Object> map ) {
    this.map = map;
  }
  public boolean apply( final String propName ) {
    return map.get( propName ) != null;
  }
}

Hopefully by this point the Google Collections syntax is becoming familiar. We want to include only object fields that are non-null, so we use the HasValue Predicate to check each property by name. From there the rest is quite similar to the update method, where we build parts setting each column name equal to its field name identifier. The only other difference is that we use the BeanPropertyRowMapper so Spring can return us a list of the correct type of DbObj.
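
As a rough usage sketch, query-by-example then boils down to populating only the fields you care about; the findByExample method and its DAO are hypothetical wrappers around the query code above, and the User class is the made-up example from earlier:

// Hypothetical usage of a DAO that wraps the query code above in a findByExample method
public List<User> findByEmail( final UserDao dao, final String email ) {
  final User example = new User();
  // emailAddress is the only non-null property, so the generated SQL is roughly:
  // SELECT email_address, first_name, id FROM users WHERE email_address = :emailAddress
  example.setEmailAddress( email );
  return dao.findByExample( example );
}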

While certainly a bit heavy on the code, I hope this will be useful to those trying to create a simple, straightforward database layer with Spring and ActiveRecord naming conventions. As a bit of a tease, the full code base that this is pulled from contains a genericized DAO that provides basic annotation-driven caching as well as the ability to execute CRUD commands via example objects, ids, SQL snippets (i.e. WHERE clauses), or full raw SQL, all while keeping things nice and type safe as well as SQL-injection free using PreparedStatements and value substitution.

Synchronous Request Response with ActiveMQ and Spring

A few months ago I did a deep dive into Efficient Lightweight JMS with Spring and ActiveMQ, where I focused on the details of asynchronously sending and receiving messages. In an ideal world all messaging would be asynchronous. If you need a response, you should set up an asynchronous listener and either have enough state stored in the service or in the message that you can continue processing once the response has been received. However, when ideals lead to complexity, we have to decide how much complexity we can tolerate for the performance we need. Sometimes it just makes more sense to use a synchronous request/response.

Request Response with JMS

ActiveMQ documentation actually has a pretty good overview of how request-response semantics work in JMS.

You might think at first that to implement request-response type operations in JMS that you should create a new consumer with a selector per request; or maybe create a new temporary queue per request.

Creating temporary destinations, consumers, producers and connections are all synchronous request-response operations with the broker and so should be avoided for processing each request as it results in lots of chat with the JMS broker.

The best way to implement request-response over JMS is to create a temporary queue and consumer per client on startup, set JMSReplyTo property on each message to the temporary queue and then use a correlationID on each message to correlate request messages to response messages. This avoids the overhead of creating and closing a consumer for each request (which is expensive). It also means you can share the same producer & consumer across many threads if you want (or pool them maybe).

This is a pretty good start, but it requires some tweaking to work best in Spring. It should also be noted that Lingo and Camel are suggested as options when using ActiveMQ. In my previous post I addressed why I don’t use either of these options: in short, Camel is more power than is needed for basic messaging, and Lingo is built on Jencks, neither of which has been updated in years.

Request Response in Spring

The first thing to notice is that it’s infeasible to create a consumer and temporary queue per client in Spring, since pooling resources is required to overcome the JmsTemplate gotchas. To get around this, I suggest using predefined request and response queues, removing the overhead of creating a temporary queue for each request/response. To allow multiple consumers and producers on the same queues, the JMSCorrelationID is used to correlate each request with its response message.

At this point I implemented the following naive solution:

@Component
public class Requestor {

    private static final class CorrelationIdPostProcessor implements MessagePostProcessor {

        private final String correlationId;

        public CorrelationIdPostProcessor( final String correlationId ) {
            this.correlationId = correlationId;
        }

        @Override
        public Message postProcessMessage( final Message msg ) throws JMSException {
            msg.setJMSCorrelationID( correlationId );
            return msg;
        }
    }

    private final JmsTemplate jmsTemplate;

    @Autowired
    public Requestor( final JmsTemplate jmsTemplate ) {
        this.jmsTemplate = jmsTemplate;
    }

    public String request( final String request, final String queue ) {
        final String correlationId = UUID.randomUUID().toString();
        jmsTemplate.convertAndSend( queue+".request", request, new CorrelationIdPostProcessor( correlationId ) );
        return (String) jmsTemplate.receiveSelectedAndConvert( queue+".response", "JMSCorrelationID='" + correlationId + "'" );
    }
}

This worked for a while, until the system started occasionally timing out when making requests against a particularly fast-responding service. After some debugging it became apparent that the service was responding so quickly that the receive() call was not yet fully initialized, causing it to miss the message. Once it finished initializing, it would wait until the timeout and fail. Unfortunately, there is very little documentation on this, and the best suggestion I could find still seemed to leave open the possibility of the race condition by creating the consumer after sending the message. Luckily, according to the JMS spec, a consumer becomes active as soon as it is created and, assuming the connection has been started, it will start consuming messages. This allows for a reordering of the method calls, leading to the slightly more verbose but also more correct solution below. (NOTE: Thanks to Aaron Korver for pointing out that ProducerConsumer needs to implement SessionCallback and that true needs to be passed to JmsTemplate.execute() for the connection to be started.)

@Component
public class Requestor {

    private static final class ProducerConsumer implements SessionCallback<Message> {

        private static final int TIMEOUT = 5000;

        private final String msg;

        private final DestinationResolver destinationResolver;

        private final String queue;

        public ProducerConsumer( final String msg, String queue, final DestinationResolver destinationResolver ) {
            this.msg = msg;
            this.queue = queue;
            this.destinationResolver = destinationResolver;
        }

        public Message doInJms( final Session session ) throws JMSException {
            MessageConsumer consumer = null;
            MessageProducer producer = null;
            try {
                final String correlationId = UUID.randomUUID().toString();
                final Destination requestQueue =
                        destinationResolver.resolveDestinationName( session, queue+".request", false );
                final Destination replyQueue =
                        destinationResolver.resolveDestinationName( session, queue+".response", false );
                // Create the consumer first!
                consumer = session.createConsumer( replyQueue, "JMSCorrelationID = '" + correlationId + "'" );
                final TextMessage textMessage = session.createTextMessage( msg );
                textMessage.setJMSCorrelationID( correlationId );
                textMessage.setJMSReplyTo( replyQueue );
                // Send the request second!
                producer = session.createProducer( requestQueue );
                producer.send( requestQueue, textMessage );
                // Block on receiving the response with a timeout
                return consumer.receive( TIMEOUT );
            }
            finally {
                // Don't forget to close your resources
                JmsUtils.closeMessageConsumer( consumer );
                JmsUtils.closeMessageProducer( producer );
            }
        }
    }

    private final JmsTemplate jmsTemplate;

    @Autowired
    public Requestor( final JmsTemplate jmsTemplate ) {
        this.jmsTemplate = jmsTemplate;
     }

    public String request( final String request, String queue ) {
        // Must pass true as the second param to start the connection
        return (String) jmsTemplate.execute( new ProducerConsumer( request, queue, jmsTemplate.getDestinationResolver() ), true );
    }
}

About Pooling

Once the request/response logic was correct, it was time to load test. Almost instantly, memory usage exploded and the garbage collector started thrashing. Inspecting ActiveMQ with the Web Console showed that MessageConsumers were hanging around even though they were being explicitly closed using Spring’s own JmsUtils. It turns out the CachingConnectionFactory‘s JavaDoc held the key to what was going on: “Note also that MessageConsumers obtained from a cached Session won’t get closed until the Session will eventually be removed from the pool.” However, if the MessageConsumers could be reused this wouldn’t be an issue. Unfortunately, CachingConnectionFactory caches MessageConsumers based on a hash key which includes the selector among other values. Obviously each request/response call, with its necessarily unique selector, was creating a new consumer that could never be reused. Luckily, ActiveMQ provides a PooledConnectionFactory which does not cache MessageConsumers, and switching to it fixed the problem instantly. However, this means that each request/response requires a new MessageConsumer to be created. This adds overhead, but it’s the price that must be paid to do synchronous request/response.
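
For illustration, here is a minimal sketch of swapping in ActiveMQ’s PooledConnectionFactory using plain Java; the broker URL and pool size are placeholders, and in practice this would be wired up in the Spring context instead:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class PooledJmsConfig {
    public static JmsTemplate pooledJmsTemplate() {
        // PooledConnectionFactory pools connections, sessions and producers,
        // but does not cache MessageConsumers, so selector-based consumers are not retained
        final PooledConnectionFactory pooled =
                new PooledConnectionFactory( new ActiveMQConnectionFactory( "tcp://localhost:61616" ) );
        pooled.setMaxConnections( 8 ); // placeholder value; tune for your load
        return new JmsTemplate( pooled );
    }
}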

Efficient Lightweight JMS with Spring and ActiveMQ

Asynchronicity: it’s the number one design principle for highly scalable systems, and for Java that means JMS, which in turn means ActiveMQ. But how do I use JMS efficiently? One can quickly become overwhelmed with talk of containers, frameworks, and a plethora of options, most of which are outdated. So let’s pick it apart.

Frameworks

The ActiveMQ documentation makes mention of two frameworks: Camel and Spring. The decision here comes down to simplicity vs. functionality. Camel supports an immense number of Enterprise Integration Patterns that can greatly simplify integrating a variety of services and orchestrating complicated message flows between components. It’s certainly best of breed if your system requires such functionality. However, if you are looking for simplicity and support for the basic best practices, then Spring has the upper hand. For me, simplicity wins out any day of the week.

JCA (Use It Or Lose It)

Reading through ActiveMQ’s Spring support, one is instantly introduced to the idea of a JCA container and ActiveMQ’s various proxies and adaptors for working inside of one. However, this is all a red herring. JCA is part of the EJB specification and, as with most of the EJB specification, Spring doesn’t support it. Then there is a mention of Jencks, a “lightweight JCA container for Spring”, which was spun off of ActiveMQ’s JCA container. At first this seems like the ideal solution, but let me stop you there. Jencks was last updated on January 3rd, 2007. At that time ActiveMQ was at version 4.1.x and Spring was at version 2.0.x, and things have come a long way since then, a very long way. Even trying to get Jencks from the Maven repository fails due to dependencies on ActiveMQ 4.1.x jars that no longer exist. The simple fact is there are better and simpler ways to ensure resource caching.

Sending Messages

The core of Spring’s message sending architecture is the JmsTemplate. In typical Spring template fashion, the JmsTemplate abstracts away all the cruft of opening and closing sessions and producers so all the application developer needs to worry about is the actual business logic. However, ActiveMQ is quick to point out the JmsTemplate gotchas, chiefly that JmsTemplate is designed to open and close the session and producer on each call. To prevent this from absolutely destroying messaging performance, the documentation recommends using ActiveMQ’s PooledConnectionFactory, which caches the sessions and message producers. However, this too is outdated. Starting with version 2.5.3, Spring ships its own CachingConnectionFactory, which I believe to be the preferred caching method. (UPDATE: In my more recent post, I talk about when you might want to use PooledConnectionFactory.) However, there is one catch to point out. By default the CachingConnectionFactory only caches one session, which the JavaDoc claims is sufficient for low concurrency situations; by contrast, the PooledConnectionFactory defaults to 500. As with most settings of this type, some amount of experimentation is probably in order. I’ve started with 100, which seems like a good compromise.

Receiving Messages

As you may have noticed, the JmsTemplate gotchas strongly discourage using the receive() call on the JmsTemplate, again, since there is no pooling of sessions and consumers. Moreover, all calls on the JmsTemplate are synchronous, which means the calling thread will block until the method returns. This is fine when using JmsTemplate to send messages, since the method returns almost instantly. However, when using the receive() call, the thread will block until a message is received, which has a huge impact on performance. Unfortunately, neither the JmsTemplate gotchas nor the Spring support documentation mentions the simple Spring solution to these problems. In fact, they both recommend using Jencks, which we already debunked. The actual solution, using the DefaultMessageListenerContainer, is buried in the how do I use JMS efficiently documentation. The DefaultMessageListenerContainer allows for the asynchronous receipt of messages as well as caching sessions and message consumers. Even more interesting, the DefaultMessageListenerContainer can dynamically grow and shrink the number of listeners based on message volume. In short, this is why we can completely ignore JCA.
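
The jms namespace configuration shown in the next section is the easiest way to wire this up, but as a rough illustration, the equivalent plain-Java setup looks something like the sketch below; the queue name, the concurrency range, and the reuse of the QueueListener class are placeholders:

import javax.jms.ConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerContainerSketch {
    public static DefaultMessageListenerContainer start( final ConnectionFactory connectionFactory ) {
        final DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory( connectionFactory ); // e.g. the CachingConnectionFactory defined below
        container.setDestinationName( "Queue.Name" );
        container.setMessageListener( new QueueListener() );
        container.setConcurrency( "1-10" ); // dynamically grow and shrink between 1 and 10 consumers
        container.afterPropertiesSet();
        container.start();
        return container;
    }
}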

Putting It All Together

Spring Context XML

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:jms="http://www.springframework.org/schema/jms"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.2.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-2.5.xsd">

<!-- enables annotation based configuration -->
<context:annotation-config />
<!-- scans for annotated classes in the com.company package -->
<context:component-scan base-package="com.company"/>
<!-- allows for ${} replacement in the spring xml configuration from the system.properties file on the classpath -->
<context:property-placeholder location="classpath:system.properties"/>
<!-- creates an activemq connection factory using the amq namespace -->
<amq:connectionFactory id="amqConnectionFactory" brokerURL="${jms.url}" userName="${jms.username}" password="${jms.password}" />
<!-- CachingConnectionFactory Definition, sessionCacheSize property is the number of sessions to cache -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
    <constructor-arg ref="amqConnectionFactory" />
    <property name="exceptionListener" ref="jmsExceptionListener" />
    <property name="sessionCacheSize" value="100" />
</bean>
<!-- JmsTemplate Definition -->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
   <constructor-arg ref="connectionFactory"/>
</bean>
<!-- listener container definition using the jms namespace, concurrency is the max number of concurrent listeners that can be started -->
<jms:listener-container concurrency="10" >
    <jms:listener id="QueueListener" destination="Queue.Name" ref="queueListener" />
</jms:listener-container>
</beans>

There are two things to notice here. First, I’ve added the amq and jms namespaces to the opening beans tag. Second, I’m using the Spring 2.5 annotation-based configuration. By using annotation-based configuration I can simply add the @Component annotation to my Java classes instead of having to specify them explicitly in the Spring context XML. Additionally, I can add @Autowired to my constructors to have objects such as the JmsTemplate automatically wired into my objects.

QueueSender

package com.company;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class QueueSender
{
    private final JmsTemplate jmsTemplate;

    @Autowired
    public QueueSender( final JmsTemplate jmsTemplate )
    {
        this.jmsTemplate = jmsTemplate;
    }

    public void send( final String message )
    {
        jmsTemplate.convertAndSend( "Queue.Name", message );
    }
}

Queue Listener


package com.company;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.springframework.stereotype.Component;

@Component
public class QueueListener implements MessageListener
{
    public void onMessage( final Message message )
    {
        if ( message instanceof TextMessage )
        {
            final TextMessage textMessage = (TextMessage) message;
            try
            {
                System.out.println( textMessage.getText() );
            }
            catch (final JMSException e)
            {
                e.printStackTrace();
            }
        }
    }
}

JmsExceptionListener

package com.company;

import javax.jms.ExceptionListener;
import javax.jms.JMSException;

import org.springframework.stereotype.Component;

@Component
public class JmsExceptionListener implements ExceptionListener
{
    public void onException( final JMSException e )
    {
        e.printStackTrace();
    }
}

Update

I have finally updated the wiki at activemq.apache.org. The following pages now recommend using MessageListenerContainers and JmsTemplate with a Pooling ConnectionFactory instead of JCA and Jencks.

Unit Testing Spring Jersey Integration

Recently, I’ve been working on a side project which has given me the opportunity to play with an interesting Java technology stack. After a few weekends of working on the build and moving my way up the stack from database to service layer, I was at the point of writing a full integration test. In the past I’ve done integration testing with an iBatis/Spring stack which, with the help of DBUnit, is a fairly simple exercise. However, this project is heavily REST based, and the addition of Jersey (JAX-RS) was a serious curveball.

To really test the whole stack, an embedded web server is required. Looking through the Jersey/Spring unit tests, I saw they make use of an exploded WAR running on Glassfish. Assuming this was the optimal solution, I added Glassfish to my ivy.xml config and ran an Ivy update. However, once I realized the overwhelming volume of jars being downloaded, I decided it was much too heavyweight a solution.

My next choice was Jetty. I have had experience with Jetty in the past, but I quickly ran into issues trying to get it working with Jersey, Spring, and an exploded WAR configuration. It was then that I saw the GrizzlyServerFactory referenced in another Jersey unit test. After some prodding and reading into the source code, I ended up with the following:

final URI baseUri = UriBuilder.fromUri( "http://localhost/" ).port( 9998 ).build();
final ServletAdapter adapter = new ServletAdapter();
adapter.addInitParameter( "com.sun.jersey.config.property.packages", <your-package-name> );
adapter.addContextParameter( "contextConfigLocation","classpath:applicationContext.xml" );
adapter.addServletListener( "org.springframework.web.context.ContextLoaderListener" );
adapter.setServletInstance( new SpringServlet() );
adapter.setContextPath( baseUri.getPath() );
SelectorThread threadSelector = GrizzlyServerFactory.create( baseUri, adapter );

We start by making a base URI for localhost on a high-numbered port. Then we build the Grizzly ServletAdapter. The first init parameter is the package name where your Jersey-enabled resource classes are located. The next two lines are taken directly from the web.xml file used for the actual WAR: the first is the config location for the Spring context and the second is the ContextLoaderListener. Finally, we add the Jersey SpringServlet and the context path. With these two objects created we can build a SelectorThread and start using the server with the Jersey Client API that I talked about in my previous post. When you are done, don’t forget to call stopEndpoint() on the SelectorThread.
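
Pulled together into a JUnit test, the whole lifecycle looks roughly like the sketch below; the package name, resource path, and assertion are placeholders for your own:

import static org.junit.Assert.assertNotNull;

import java.net.URI;

import javax.ws.rs.core.UriBuilder;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.sun.grizzly.http.SelectorThread;
import com.sun.grizzly.http.servlet.ServletAdapter;
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.WebResource;
import com.sun.jersey.api.container.grizzly.GrizzlyServerFactory;
import com.sun.jersey.spi.spring.container.servlet.SpringServlet;

public class RestIntegrationTest {

    private SelectorThread threadSelector;
    private WebResource webResource;

    @Before
    public void startServer() throws Exception {
        final URI baseUri = UriBuilder.fromUri( "http://localhost/" ).port( 9998 ).build();
        final ServletAdapter adapter = new ServletAdapter();
        adapter.addInitParameter( "com.sun.jersey.config.property.packages", "com.company.resources" ); // your package here
        adapter.addContextParameter( "contextConfigLocation", "classpath:applicationContext.xml" );
        adapter.addServletListener( "org.springframework.web.context.ContextLoaderListener" );
        adapter.setServletInstance( new SpringServlet() );
        adapter.setContextPath( baseUri.getPath() );
        threadSelector = GrizzlyServerFactory.create( baseUri, adapter );
        webResource = Client.create().resource( baseUri );
    }

    @Test
    public void serverRespondsToAGet() {
        // "ping" is a placeholder path for one of your Jersey resources
        assertNotNull( webResource.path( "ping" ).get( String.class ) );
    }

    @After
    public void stopServer() {
        threadSelector.stopEndpoint();
    }
}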

EDIT:

I’ve recently come across one additional helpful tip regarding this setup. If you need access to your ApplicationContext, for instance to get at the Spring-managed DataSource for database testing, there is a simple but non-obvious way of doing this. The simple part is using Spring’s built-in utilities for getting the application context from a ServletContext.


WebApplicationContextUtils.getWebApplicationContext( adapter.getServletInstance().getServletConfig().getServletContext() );

However, when I first tried this I got an unexplained NullPointerException. After a long while trying to Google for the answer, I gave up and dug into the source code. It turns out you need to add one important property to the ServletAdapter for this to all work.


adapter.setProperty( "load-on-startup", 1 );

This will force your adapter to initialize the servlet immediately, populating the ServletConfig and ServletContext so you can get the ApplicationContext from them.
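
Putting the two pieces together, pulling a Spring-managed bean out of the embedded server ends up looking something like this; the "dataSource" bean name is just an example:

import javax.sql.DataSource;
import org.springframework.context.ApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

// ... after setting load-on-startup and creating the SelectorThread as above ...
final ApplicationContext springContext = WebApplicationContextUtils.getWebApplicationContext(
    adapter.getServletInstance().getServletConfig().getServletContext() );
final DataSource dataSource = (DataSource) springContext.getBean( "dataSource" ); // bean name is an example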
