Sharing Spring JdbcTemplate instance

By Abhijat Upadhyay – Sr. Software Engineer – ADP Cobalt Inventory Platform

Before sharing a JdbcTemplate instance in our application, we injected a DataSource object directly into each DAO and then created an instance of the JdbcTemplate class in the setter method.

For example:

@Autowired
public void setDataSource(DataSource dataSource) {
    this.jdbcTemplate = new JdbcTemplate(dataSource);
}

Then we made the following change to our context xml:

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource" ref="dataSource" />
    <property name="fetchSize" value="200" />
</bean>

And this is how we started injecting the shared instance:

@Resource(name="jdbcTemplate")
private JdbcTemplate jdbcTemplate;

As per Spring's documentation, JdbcTemplate is thread-safe once constructed. However, the docs also note that the object is stateful, though that state is not conversational. This is evident from our declaration in the context XML above.

Motivation to share JdbcTemplate instance:

  1. To improve performance by reducing the roundtrips the result set has to make to fetch the next set of records. We achieve this by setting fetchSize on the JdbcTemplate; without it, the Oracle JDBC driver uses a default of 10. For example, fetching 250 records from the database with the default fetchSize takes 25 roundtrips between the Java layer and the database, which hurts performance. With this change we reduced our cache priming time by about 40%. We work with large caches (30 GB+).
  2. We can set config params like fetchSize globally versus having to do inside each DAO.
  3. Object reuse: why create new ones when it is thread safe… (or maybe not, as we found out).
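
The roundtrip arithmetic in point 1 can be checked with a couple of lines of plain Java (numbers taken from the example above):

```java
public class FetchSizeRoundTrips {
    // Number of driver roundtrips = ceiling of rowCount / fetchSize.
    static int roundTrips(int rowCount, int fetchSize) {
        return (rowCount + fetchSize - 1) / fetchSize; // integer ceiling
    }

    public static void main(String[] args) {
        // 250 rows with Oracle's default fetch size of 10:
        System.out.println(roundTrips(250, 10));  // 25 roundtrips
        // The same 250 rows with the fetchSize of 200 set on our shared template:
        System.out.println(roundTrips(250, 200)); // 2 roundtrips
    }
}
```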

So, what’s the issue we ran into?

The issue is that JdbcTemplate is a “thread-safe” stateful object. In addition to fetchSize, it also lets us set maxRows.

From JdbcTemplate.java:

/**
* If this variable is set to a non-zero value, it will be used for setting the
* maxRows property on statements used for query processing.
*/
private int maxRows = 0;

A couple of our DAO classes were setting this value to 30K for business reasons. Since our JdbcTemplate instance is shared, setting maxRows on it affects all DAOs: any DAO that tries to fetch more records than that gets back a truncated result set. For example, if a query returns 45,000 records and maxRows is set to 30,000, the JdbcTemplate will return only 30,000 records. Previously, because each DAO had its own JdbcTemplate instance, this issue never bit us.
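
The hazard can be simulated without a database. The sketch below (all names invented for illustration; this is not JdbcTemplate itself) shares one mutable config object between two callers, the way our DAOs shared the template:

```java
import java.util.ArrayList;
import java.util.List;

public class SharedTemplateHazard {
    // Stand-in for the shared JdbcTemplate: stateful, and the state is global.
    static class FakeTemplate {
        private int maxRows = 0; // 0 means "no limit", mirroring JdbcTemplate

        void setMaxRows(int maxRows) { this.maxRows = maxRows; }

        // Pretend query: returns rowCount rows, truncated at maxRows if set.
        List<Integer> query(int rowCount) {
            int limit = (maxRows > 0) ? Math.min(maxRows, rowCount) : rowCount;
            List<Integer> rows = new ArrayList<Integer>();
            for (int i = 0; i < limit; i++) rows.add(i);
            return rows;
        }
    }

    public static void main(String[] args) {
        FakeTemplate shared = new FakeTemplate();

        // DAO A caps its own queries for a business reason...
        shared.setMaxRows(30_000);

        // ...and DAO B, which expects 45,000 rows, silently gets 30,000.
        System.out.println(shared.query(45_000).size()); // 30000, not 45000
    }
}
```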

So, what’s the solution?

A few options we have on the table:

  1. Do what we were doing previously… meaning each DAO should create its own instance of JdbcTemplate.
  2. Make jdbcTemplate instance a prototype bean. This will ensure that each injected jdbcTemplate instance will be different. Similar to #1 but Spring does creation and injection and we can also set global variables in one place.
  3. Do not set maxRows on jdbcTemplate and “fix” the code that is doing that.
  4. Write a wrapper around JdbcTemplate to disallow modifying the state once it has been created.
  5. Request Spring to make JdbcTemplate safe for use as a shared instance, meaning state is assigned only at creation time and any later modification throws an exception.
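
Option 4's freeze-after-construction idea (which is also what option 5 asks of Spring) can be sketched in plain Java. The class and method names here are illustrative, not a real Spring API:

```java
public class FrozenJdbcSettings {
    private final int fetchSize;
    private int maxRows;
    private boolean frozen;

    public FrozenJdbcSettings(int fetchSize, int maxRows) {
        this.fetchSize = fetchSize;
        this.maxRows = maxRows;
        this.frozen = true; // state is fixed once construction completes
    }

    public void setMaxRows(int maxRows) {
        if (frozen) {
            throw new UnsupportedOperationException(
                "maxRows may only be set at construction time");
        }
        this.maxRows = maxRows;
    }

    public int getFetchSize() { return fetchSize; }
    public int getMaxRows()   { return maxRows; }
}
```

A wrapper built this way would turn the silent truncation described above into a loud failure at the exact point where a DAO tries to mutate shared state.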

Using Asynchronous Gateways in Spring Integration and Spring’s Async Annotation

By Lynn Walton – Sr. Software Engineer – ADP Cobalt SEO Team

Introduction

Recently the SEO team configured a new job using Spring Integration. The job includes several phases where steps need to be done for each item in a collection. While I could have used the same single-integration approach we have used before, there were disadvantages with the previous approach that I was hoping to eliminate by using new techniques.

First I’ll describe two features used in the new approach.

Asynchronous Gateways

Spring Integration’s @Gateway annotation lets you designate a POJO interface which, when wired up with <int:gateway> configuration, sends the argument passed to an interface call as the message payload on the configured channel. It’s convenient for “kicking off” an integration flow to operate on a given set of data. With configuration like the following, Spring Integration creates a GatewayProxyFactoryBean that implements your interface.

<int:gateway id="myJobGateway"
    service-interface="com.cobalt.services.seo.integration.support.MyJobGateway"
    default-request-channel="startJob" error-channel="jobErrorChannel" />

public interface MyJobGateway {
    @Gateway
    void startJob(List<Account> accounts);
}

In the single-integration approach we typically have our interface return void and do not configure a default-reply-channel. By not having a reply channel, and by having the integration use an ExecutorChannel early in the flow to break the single-thread context between sender and receiver, we end up with behavior similar to making an asynchronous call to the gateway. (An ExecutorChannel is created by adding <int:dispatcher task-executor="…"/> to a channel.) In the new approach we use a true asynchronous gateway.

Since Spring Integration 2.0 there has been support for making the gateway asynchronous: you automatically get an asynchronous gateway just by defining your interface to return a Future<T>.

public interface MyAsyncJobGateway {
    @Gateway
    Future<MyResultClass> startJob(List<Account> accounts);
}

Spring’s @Async annotation

Since Spring 3.0, annotating a method with @Async causes Spring to wrap your service in a proxy.  When your method is called, the caller will get an immediate return, while the actual execution occurs in a task submitted to a Spring TaskExecutor.

@Service
public class MyServiceImpl implements MyService {
    @Async
    public void myMethod() {
        // do work
    }
}

For a method where you wish to return something so callers have the option of choosing to wait for a result, you return your result by passing it to the constructor of the AsyncResult<T> class.

@Service
public class MyServiceImpl implements MyService {
    @Async
    public Future<MyResultClass> myMethod() {
        // do work
        return new AsyncResult<MyResultClass>(instanceOfMyResultClass);
    }
}

In our normal use case, our service is called by a REST endpoint (which is called from cron).  We know the job will be long-running so the REST endpoint does not wait for the Future.  But having the service defined to return a Future allows us to wait for it in our Integration tests. These tests are configured to work with a small dataset so waiting is feasible, and making assertions on the returned object simplifies the integration tests.
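
The waiting pattern our integration tests rely on can be shown with plain java.util.concurrent; the submitted task below is a stand-in for the real @Async service call, and all names are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureWaitExample {
    // Tests can block on the Future with a generous timeout instead of a
    // fixed Thread.sleep(); production callers can simply ignore the Future.
    static String waitForJob(Future<String> job) throws Exception {
        return job.get(30, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Stand-in for the @Async service method / async gateway call.
        Future<String> jobResult = executor.submit(new Callable<String>() {
            @Override
            public String call() {
                return "job finished: 3 sites processed";
            }
        });

        System.out.println(waitForJob(jobResult));
        executor.shutdown();
    }
}
```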

A High-level Overview of the Two Approaches

Note: the integration flow pseudo-code shown below attempts to provide clarity by leaving out all components other than those which show the flow of the work.

Previous Approach

(Note: processing for each item in a collection is handled by splitters and aggregators.)

1 integration flow: GetDataForAllSitesInAllAccounts – a synchronous gateway (with a null return and no reply-channel) which sends a list of accounts into the flow:

splitter
  each account
    get list of sites
    splitter
      each site
        convert to list of urls to call
        splitter
          each url
            make call and store
        aggregator
    aggregator
aggregator

1 service method:

  • obtain the list of accounts
  • call the gateway
  • returns null almost immediately after the gateway call, because the gateway has a null return and no reply channel, and the integration has an ExecutorChannel early in the flow

New Approach

2 integration flows:

GetSitesForAccount – an asynchronous gateway which sends a single account into the flow:

get list of sites

GetSiteData – an asynchronous gateway which sends each site into a second flow:

convert to list of urls to call
splitter
  each url
    make call and store
aggregator

1 service method marked with Spring’s @Async:

  • obtain the list of accounts
  • logic to control flow and concurrency for calling both gateways (see “Logic in the Service Method” below)
  • returns a Future so callers have the option of waiting for the result (great for testing)

Comparisons of Advantages and Disadvantages (PRO/CON)

Previous Approach vs. New Approach (multiple asynchronous gateways and an @Async service method)

Previous – CON: The integration flow is large and complex enough to make understanding more difficult. (Note: the pseudo-code above doesn’t show the real difference in complexity.)
New – PRO: The separate integrations are much simpler to understand. CON: The looping that would be done by splitters/aggregators is now done with custom logic in the service method.

Previous – CON: Getting a summary report is critical but quite difficult with this approach, as you must code separate integration components that can handle 1) failures that might have occurred at different stages and 2) summarization of both error and non-error results. The error-handling code is complicated because the available payload and header information is different in each stage. Additionally, for the summary (in our logs) to be easily understood, you need a good estimate of the time each aggregator should wait for the typical case to finish. (This estimated time is set as the MessageGroupStoreReaper’s timeout value.) Estimating gets more difficult with nested splitters and aggregators, as each estimate has to take into account the timeouts of the preceding aggregators. If you set these too short, or if unusual circumstances make the job take longer than estimated, the summary logging becomes difficult to interpret, as it relies on counts of items released to the aggregator – which might have happened more than once. Finally, the “best estimate” for production is not easy to guess in advance when pre-production environments differ significantly from production.
New – PRO: Summary reporting is easier and more accurate because you catch errors in the sections of code where you know the stage in which the error occurred. There is therefore no need to complicate the integration by storing metadata in headers for retrieval from an error-handling component. Also, only one aggregator needs a time estimate.

Previous – CON: Integration tests need to use Thread.sleep() with a value long enough to give the integration time to run before verifying results. This time varies across computer environments and loads, so to prevent build failures you’re forced to choose a pessimistically high value, and the tests take longer than they need to.
New – PRO: Integration tests can wait with Future.get(estimatedTime, TimeUnit.SECONDS). You can set estimatedTime on the high side to avoid build failures without suffering the penalty of waiting longer than necessary.

Previous – CON: Integration tests have to perform before-and-after state querying to make meaningful assertions, since the gateway returns null to simulate asynchronous behavior.
New – PRO: Integration tests can easily make meaningful assertions on the returned Future rather than querying before-and-after state.

Previous – PRO: Doing all of the work in one integration allows easy configuration of the number of concurrently executing tasks in each phase, by setting <int:dispatcher task-executor="myExecutor"/> and configuring the desired pool-size on the executor.
New – CON: We have to write Java logic to control a “quasi” level of concurrency for a particular stage. I say “quasi” because it behaves differently from a dispatcher with a task executor on a channel: the coded logic allows concurrency only in batches of a set size, where the next batch is delayed until every task in the current batch has finished. So there are pauses in throughput that wouldn’t happen with a normal task executor on a channel, which can always start the next task as soon as a pool thread is free. PRO: Despite the CON, coding the concurrency logic can also be seen as an advantage, because we can dynamically pass the desired batch size to our service method. This allows the level of “quasi-concurrency” to be changed without deploying a new build and makes experimenting to find the best value easier.

Logic in the Service Method

The relevant portions of the service method are listed below to show the extra logic we implement to get the benefits described above. We do our own looping in place of the nested splitters/aggregators. We code the logic for performing some work concurrently in batches, but in exchange we get the flexibility of dynamically changing the batch size. Finally, the code below for creating and updating a JobStats (summary) object is not really extra: in our previous approach it would still need to be coded in a separate summarization component used by the integration.

@Override
@Async
public Future<JobStats> startJobForAccounts(final int numConcurrentSites, final String... acctLogins) {
    try {
        final List<Account> accounts = accountsUtil.createAccountsListForAcctEmail(acctLogins);
        final JobStats stats = new JobStats();
        // [1] LOOP REPLACING FIRST SPLITTER/AGGREGATOR
        for (Account account : accounts) {
            startJobForSingleAccount(numConcurrentSites, account, stats);
        }
        LOGGER.info(stats.createStatsLogStr());
        return new AsyncResult<JobStats>(stats);
    } catch (Exception exc) {
        LOGGER.error("Unexpected error: " + exc.getMessage(), exc);
        final JobStats stats = new JobStats();
        // set properties on JobStats appropriate for indicating the error
        return new AsyncResult<JobStats>(stats);
    }
}

private void startJobForSingleAccount(final int numConcurrentSites, final Account account, final JobStats stats) {
    Assert.isTrue(numConcurrentSites > 0, NUM_CONCURRENT_SITES_MUST_BE_GT_ZERO_MSG);
    final List<Site> sites = getSitesForAccount(account, stats);

    final Map<String, Future<SiteDataSummary>> futuresBatchMap = new LinkedHashMap<String, Future<SiteDataSummary>>();
    final Iterator<Site> siteIter = sites.iterator();
    int idx = 0;

    // [2] LOOP REPLACING SECOND SPLITTER/AGGREGATOR
    while (siteIter.hasNext()) {
        final Site site = siteIter.next();
        idx++;
        // [3] 1st gateway call
        futuresBatchMap.put(site.toString(), getSiteDataGateway.startGetSiteData(site));

        /*
         * sites.size() - idx < numConcurrentSites makes sure that any final partial batch gets processed.
         * It also means some of the last entries are processed in smaller batches or even one at a time.
         */
        if (idx % numConcurrentSites == 0 || sites.size() - idx < numConcurrentSites) {
            blockToProcessBatch(futuresBatchMap, stats);
        }
    }
}

private List<Site> getSitesForAccount(final Account account, final JobStats stats) {
    // [4] 2nd gateway call
    final Future<List<Site>> future = getSitesForAccountGateway.getSitesForAccount(account);
    try {
        final List<Site> sites = future.get(secondsToWaitForSitesListFuture, TimeUnit.SECONDS);
        stats.getAccountsSucceeded().add(account.getLogin());
        return sites;
    } catch (Exception exc) {
        stats.getAccountsFailed().add(account.getLogin());
        LOGGER.error("Failed to process Account {}", account.getLogin(), exc);
        return new ArrayList<Site>();
    }
}

private void blockToProcessBatch(final Map<String, Future<SiteDataSummary>> futuresBatchMap, final JobStats stats) {
    for (Map.Entry<String, Future<SiteDataSummary>> futureEntry : futuresBatchMap.entrySet()) {
        try {
            final SiteDataSummary summary = futureEntry.getValue()
                    .get(secondsToWaitForSiteDataFuture, TimeUnit.SECONDS);
            summary.setSuccessful(true);
            stats.incrementSitesSucceeded();
        } catch (Exception exc) {
            stats.getSitesFailed().add(futureEntry.getKey());
            LOGGER.error("Failed to process {}", futureEntry.getKey(), exc);
        }
    }
    futuresBatchMap.clear();
}
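
Stripped of the Spring pieces, the batch-and-block pattern above can be sketched with plain java.util.concurrent. The task body and sizes are invented for illustration; the point is the pause between batches described in the PRO/CON comparison:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BatchedConcurrency {
    // Process items in batches of batchSize; each batch runs concurrently,
    // but the next batch does not start until every future in the current
    // batch has completed -- the "quasi-concurrency" pauses described above.
    static int processInBatches(List<Integer> items, int batchSize,
                                ExecutorService executor) throws Exception {
        int processed = 0;
        List<Future<Integer>> batch = new ArrayList<Future<Integer>>();
        for (int i = 0; i < items.size(); i++) {
            final int item = items.get(i);
            batch.add(executor.submit(new Callable<Integer>() {
                @Override
                public Integer call() { return item * 2; } // stand-in work
            }));
            boolean lastItem = (i == items.size() - 1);
            if (batch.size() == batchSize || lastItem) {
                for (Future<Integer> f : batch) {
                    f.get(30, TimeUnit.SECONDS); // block for the whole batch
                    processed++;
                }
                batch.clear();
            }
        }
        return processed;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        List<Integer> items = new ArrayList<Integer>();
        for (int i = 0; i < 10; i++) items.add(i);
        System.out.println(processInBatches(items, 3, executor)); // 10
        executor.shutdown();
    }
}
```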

Conclusion

With a complex framework such as Spring Integration, there are often many ways to implement a desired task.  We generally lean toward solutions that require less of our own business logic code and many times this approach serves us well. But it is worthwhile to think about alternative approaches. Sometimes writing a little more code to increase control and improve testability can be better than taking advantage of the features within a complex framework.

I’m glad I tried the approach: I learned a lot, and I believe the benefits listed above outweigh the only disadvantage, which is having slightly more service code to implement.

Reference Links:
http://docs.spring.io/spring/docs/3.0.x/reference/scheduling.html
http://docs.spring.io/spring-integration/docs/2.0.0.RC1/reference/html/gateway.html

Why Doesn’t My Spring @Transactional Annotation Work?

By Ting Lin – Software Engineer – Cobalt Digital Advertising Platform

With the Spring Framework, wrapping a call in a transaction is easy: adding the @Transactional annotation does the work. But sometimes @Transactional does not work, for mysterious reasons. It seems as if the annotation is ignored. What causes that malfunction?

Recently, I came across this @Transactional issue again, and there was no exception or error at all. What was going on? After playing with the problem for a while, I found the root cause and a solution. I think it is worthwhile to share this with others who might not have encountered this situation before.

To demonstrate, I’ve created a sample application that inserts a record into a MySQL database. The source code can be found in the Resources section. To set up the MySQL database, follow these instructions and then run create-table.sql.

First, I just wanted to persist data in an old-fashioned way:

final EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("example");
final EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
em.persist(entity);
em.getTransaction().commit();

Running this, I was confident that the O/R layer was working and the Hibernate configuration was correct.

Next I wanted to do the same thing, but with a proper DAO, using Spring and JPA to manage transactions.

So let’s start with a simple piece of code.

@Component
public class ExampleDaoImpl {
  @PersistenceContext private EntityManager entityManager;

  @Transactional(propagation = Propagation.REQUIRED)
  public final void persist(final Example entity) {
    entityManager.persist(entity);
  }
}

And of course the test that goes with it.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration (locations = {"classpath:context.xml"})
public class ExampleDaoTest {
  @Autowired private ExampleDaoImpl dao;

  @Test
  public void shouldPersist() {
    final Example entity = new Example("example 2");
    dao.persist(entity);
  }
}

Confident in my small sample, I went ahead and ran the test… works right? Sadly, I got an exception.

Apparently, entityManager in ExampleDaoImpl was null – but why? The console logging for Spring didn’t show any exceptions! Now I was curious how such a simple piece of code could be this tricky…

First things first: let’s see whether Spring successfully initialized ExampleDaoImpl. A simple trick is to just print something out, so I added the following constructor:


public ExampleDaoImpl() {
  System.out.println("ExampleDaoImpl hashcode = [" + System.identityHashCode(this) + "]");
}

Running the test code again, I was even more surprised: not only did the constructor print out the hashcode as expected, it did it twice!

The constructor was called twice, with different hash codes. The seemingly simple task was becoming messier with every step. Not to derail the task at hand, we’ll set aside the double construction for now and focus on why the EntityManager didn’t get wired in. If entityManager in ExampleDaoImpl was null at the time it was called, how about we pass in the entityManager ourselves?

Let’s try changing ExampleDaoImpl.persist() to be:

@Transactional(propagation = Propagation.REQUIRED)
public final void persist(final EntityManager em, final Example entity) {
  em.persist(entity);
}

Now, the test code is changed to:

@Test
public void shouldPersist() {
  final Example entity = new Example("example 2");
  final EntityManagerFactory emf =
          Persistence.createEntityManagerFactory("example");
  final EntityManager em = emf.createEntityManager();
  dao.persist(em, entity);
}

SUCCESS!!  I can even see the hibernate SQL generated on the standard out inserting the record.

Now let’s take a look in the database: “0 rows selected”!! Where is the record I just inserted? Why was my record not committed? And how come there were no exceptions or errors thrown?

At this point, I am guessing that either the persist call somehow failed, or the transaction never ran.

Let’s take a look at the call stack that invokes ExampleDaoImpl.persist(), and BINGO!!

What I expected to see isn’t there:
org.springframework.transaction.interceptor.TransactionInterceptor.invoke

Now I’m sure the persist call was not in a transaction but why?

Not sure how to proceed from here, I decided to see if the double initialization might hold a clue – remember the two instances being created, printing two separate hashcodes.

Let’s see who’s creating those. To do that, I need to examine the call stack for each constructor, so let’s print it out too.

public ExampleDaoImpl() {
  System.out.println("create instance with hashCode [" + hashCode() + "]");
  new RuntimeException().printStackTrace();
}

Interestingly, I see two different call stacks, one involving CGLIB, and the other not.

So I did some Google searching and read the article Proxying mechanisms. In short, if a Spring bean does not implement an interface, CGLIB is used for the proxy; and for each proxied bean, two objects are created: the actual target object and an instance of the generated subclass that implements the advice.

Okay, well that solves that riddle, but why isn’t the transaction intercepting the call?

Let’s try a quick test and change the code to use interfaces instead, to see if the JDK proxy can shed any light…

We’ll change ExampleDaoImpl to implement ExampleDao

And the test to autowire the Interface…


public class ExampleDaoTest {
  @Autowired private ExampleDao dao;
  ...
}

That works great… Now it seems something is different about CGLIB. Carefully examining the console output again, I found an interesting warning log:

WARNING: Unable to proxy method [public final void com.example.dao.ExampleDaoImpl.persist(com.example.entity.Example)] because it is final: All calls to this method via a proxy will be routed directly to the proxy.

This warning means that CGLIB is not able to implement the aspect around the method (i.e. the transaction) because the method is final. So if we want to know why ‘final’ causes problems for us, we need to understand how proxies are implemented. In fact, both JDK dynamic proxies and CGLIB can work. Going back to the older code, see:

CGLIB:

@Component
public class ExampleDaoImpl {
  @PersistenceContext private EntityManager entityManager;

  @Transactional(propagation = Propagation.REQUIRED)
  public void persist(final Example entity) {
    entityManager.persist(entity);
  }
}

Notice that the persist() method is no longer final?

JDK dynamic proxies:

public interface ExampleDao {
  void persist(Example entity);
}

@Component
public class ExampleDaoImpl implements ExampleDao {
  @PersistenceContext private EntityManager entityManager;

  @Override
  @Transactional(propagation = Propagation.REQUIRED)
  public final void persist(final Example entity) {
    entityManager.persist(entity);
  }
}

Great! We now know how to work with CGLIB and JDK dynamic proxies. But why doesn’t @Transactional work sometimes?

The CGLIB proxy extends ExampleDaoImpl, and the proxy holds a plain instance of ExampleDaoImpl as its target. When AOP is required, the advice logic runs in the generated subclass and the actual method call is then dispatched to the plain target instance. If a method is final, the subclass cannot override it to surround the call, so invoking it on the proxy simply executes the inherited method on the proxy object itself, without any interception.

Now remember how we got here? We were getting a NullPointerException on this call:

@Transactional(propagation = Propagation.REQUIRED)
public final void persist(final Example entity) {
  entityManager.persist(entity);
}

The reason is that, since the persist method is final, calls to it are handled inside the proxy itself through inheritance and never dispatched to the plain target instance held by the proxy – and the proxy’s entityManager field was never autowired by Spring.
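
That dispatch failure can be reproduced in miniature without CGLIB. In the sketch below (all names invented), the hand-rolled proxy is a subclass whose own fields are never wired; only methods it can override get forwarded to the real, fully wired target:

```java
public class FinalMethodProxyDemo {
    static class Dao {
        String entityManager; // wired on the target, never on the proxy

        void persist(String entity) {     // overridable: the proxy can forward it
            entityManager.concat(entity); // would NPE if entityManager were null
        }

        final String whoAmI() {           // final: the proxy CANNOT override it
            return "entityManager=" + entityManager;
        }
    }

    // Stand-in for the CGLIB-generated subclass proxy.
    static class DaoProxy extends Dao {
        private final Dao target;
        DaoProxy(Dao target) { this.target = target; }

        @Override
        void persist(String entity) {
            // "advice" (e.g. begin/commit a transaction) would go here
            target.persist(entity); // dispatched to the wired target: works
        }
        // whoAmI() is final, so there is no override: calling it on the
        // proxy runs on the proxy's own (unwired) fields.
    }

    public static void main(String[] args) {
        Dao target = new Dao();
        target.entityManager = "em"; // simulate Spring's injection

        Dao proxy = new DaoProxy(target);
        proxy.persist("x");                 // fine: forwarded to the target
        System.out.println(proxy.whoAmI()); // "entityManager=null"
    }
}
```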

After more research and further testing, I summarized the findings in this table:

Component doesn’t implement an interface – CGLIB is used:

  Method needs AOP?   Method-level modifier   Result
  yes                 final                   AOP fails at runtime with "WARNING: Unable to proxy method"
  yes                 not final               works as proxy
  no                  final                   works without proxy
  no                  not final               works without proxy

Component implements an interface – a JDK dynamic proxy is used:

  Method needs AOP?   Component used as interface?   Result
  yes                 yes                            works as proxy
  yes                 no                             runtime error: Spring fails to autowire
  no                  yes                            works without proxy
  no                  no                             runtime error: Spring fails to autowire

As you can see, the JDK proxy behaves better than CGLIB: a misconfiguration fails fast at wiring time, whereas CGLIB merely logs a warning and keeps the system running without the aspect, so you only find out later that something is wrong.

Armed with all this knowledge, I realized that months down the line I will have forgotten the exact values in each cell of the table above. To make sure I don’t fall into the same trap again, I implemented a test which verifies that every component requiring AOP implements at least one interface. In other words, we will not use CGLIB.

public class ComponentInterfaceEnforcerTest {
  // The package to test
  private static final String POJO_PACKAGE = "com.example";

  private List<PojoClass> pojoClasses;
  private PojoValidator pojoValidator;

  @Before
  public void setup() {

    pojoClasses = PojoClassFactory.getPojoClassesRecursively(
      POJO_PACKAGE, new ComponentFilter());

    pojoValidator = new PojoValidator();

    // Create Rules to validate structure for POJO_PACKAGE
    pojoValidator.addRule(new MustImplementInterface());
  }

  @Test
  public void evaluateEveryComponent() {
    for (PojoClass pojoClass : pojoClasses) {
      pojoValidator.runValidation(pojoClass);
    }
  }
}

public class ComponentFilter implements PojoClassFilter {
  private final List<Class<? extends Annotation>> beanTypes =
    Lists.newArrayList();
  private final List<Class<?>> methodAnnotations = Lists.newArrayList();

  public ComponentFilter() {
    beanTypes.add(Component.class);
    beanTypes.add(Repository.class);
    beanTypes.add(Service.class);

    methodAnnotations.add(Transactional.class);
  }

  @Override
  public boolean include(final PojoClass clazz) {
    for (final Annotation annotation : clazz.getAnnotations()) {
      if (isTargetClass(clazz, annotation.annotationType())) {
        return true;
      }
    }
    return false;
  }

  private boolean isTargetClass(final PojoClass clazz,
      final Class<? extends Annotation> classAnnotation) {
    for (final Class<? extends Annotation> beanType : beanTypes) {
      if (beanType == classAnnotation && hasAOP(clazz)) {
        return true;
      }
    }
    return false;
  }

  private boolean hasAOP(final PojoClass clazz) {
    final List<PojoMethod> methods = clazz.getPojoMethods();
    for (final PojoMethod method : methods) {
      final List<? extends Annotation> annotations = method.getAnnotations();
      for (final Annotation annotation : annotations) {
        if (isTargetMethod(clazz, annotation.annotationType())) {
          return true;
        }
      }
    }
    return false;
  }

  private boolean isTargetMethod(final PojoClass clazz,
      final Class<? extends Annotation> methodAnnotation) {
    for (final Class<?> annotation : methodAnnotations) {
      if (annotation == methodAnnotation) {
        return true;
      }
    }
    return false;
  }
}

public class MustImplementInterface implements Rule {
  @Override
  public void evaluate(final PojoClass clazz) {
    assertTrue("component " + clazz + " should implement at least one interface",
      clazz.getInterfaces().size() > 0);
  }
}

OpenPojo is a nice utility that makes this kind of testing easier. The test uses OpenPojo to scan every component in the application; it then uses reflection to make sure every component with a method requiring @Transactional implements an interface. With this test in place, forgetting to implement an interface produces a test failure at build time instead of a malfunction at runtime. In other words, I will never be trapped by this CGLIB problem again.

From this investigation, I found that it is easy to accidentally fall into situations where Spring is not able to proxy methods in a bean. As a rule, it is simple to adopt a policy of always creating interfaces for Spring-managed beans and interacting with beans only through their interfaces. Additionally, we can create unit tests that seek out cases where this policy is violated, so if we forget to follow this rule in the future we will be notified as soon as the tests are run.

Resources:
Example application source: example.zip
Proxying mechanisms: http://static.springsource.org/spring/docs/3.0.0.M3/spring-framework-reference/html/ch08s06.html
Transaction management: http://static.springsource.org/spring/docs/2.0.x/reference/transaction.html
Openpojo: http://code.google.com/p/openpojo/
ASM: http://asm.ow2.org/doc/tutorial.html#resources
Mysql: http://dev.mysql.com/doc/refman/5.1/en/installing.html
