The Little Integration Test that Didn’t, part 2

In part 1, we walked through how we decided to add the rollback-for="Exception" attribute to our transactional advice.  What we didn’t discuss is how these changes dovetailed with changes to the existing integration test.

The Integration Test, in its Natural Habitat

The integration test consists of three modules, simulating the layers in our system.  There is a persistence project, a domain project, and a service project.  The service-level project has classes named such that the transactional AOP advice applies to their methods.

So far, so good, right?

The Existing Rollback Test

The service-level project contained an integration test, a junit class with testSuccessfulCommit() and testRollback() methods.  The testRollback() method passed a special string such that, when the call got down to the persistence layer, the persistence layer would recognize it and throw a RuntimeException.  Then the integration test would catch that and verify that the data could not be found in the database — the idea being that if the thing we just wrote to the database couldn’t be found, the rollback must have succeeded.

Testing Rollback on Exceptions and Errors

When we changed the transaction advice to also roll back on Exception, we weren’t sure whether that would remove the default of rolling back on RuntimeException and Error.  So we modified the DAO at the persistence layer, which could already throw a RuntimeException, to recognize requests for Exception and Error as well.  Then we modified the service-layer integration test to test the transactional behavior for those kinds of exceptions (with a bunch of Maven installs and (possibly unnecessary) mvn eclipse:eclipse runs between all these changes).

Two Problems

When we changed the transactional advice to include Exception, we were getting really odd failures in the service-level integration test.  These errors were caused by two problems in the integration test that banded together against us:

1. An Error of Commission: we never tried to write!

Here was what we’d updated the create() method to look like:

    public Long create(SimplePerson persistentObject) throws MyException {
        // Simulate something going wrong at the persistence layer
        if (persistentObject.getSsn().equals("Exception")) {
            throw new MyException();
        } else if (persistentObject.getSsn().equals("RuntimeException")) {
            throw new MyRuntimeException();
        } else if (persistentObject.getSsn().equals("Error")) {
            throw new MyError();
        }

        Session session = sessionFactory.getCurrentSession();
        return (Long) session.save(persistentObject);
    }

Do you see the problem?

If we passed the special SSN to have create() throw an exception, it would never even try to save the persistent object to disk.  So when (back at the service layer) we told it to throw an Exception, a rollback appeared to have occurred — leastways, the Person Id couldn’t be found in the database.

The persistence layer needed to really save the object:

    public Long create(SimplePerson persistentObject) throws MyException {
        Session session = sessionFactory.getCurrentSession();
        Long id = (Long) session.save(persistentObject);

        // Simulate something going wrong at the persistence layer
        if (persistentObject.getSsn().equals("Exception")) {
            throw new MyException();
        } else if (persistentObject.getSsn().equals("RuntimeException")) {
            throw new MyRuntimeException();
        } else if (persistentObject.getSsn().equals("Error")) {
            throw new MyError();
        }

        return id;
    }

Now (after Maven installing this, I mean) our test would show the true state of things — once we cleaned up our dirty data!

2. Dirty Data

Yes, we’d unwittingly gotten into a state where data left over from a failed test caused future runs of the test to fail.  What would happen is that if we expected a test to throw an exception but it didn’t (due in our case to my accidentally not passing one of the words the persistence layer was watching for in the SSN field), a record would be written and never cleaned up.

Complication 1: Not Realizing Which Assert Failed

I didn’t realize the dirty data problem for a while (resulting in a few extra rounds of changing the AOP advice beans in that project, Maven Installing that project, and re-running the tests at the integration test service layer), because when the test resulted in an AssertionError involving Assert.assertNull, I assumed it was the final assertNull at the end of the test, the one that tests whether the record finally ended up on disk or not.

Here’s what one of the test methods looked like:

    public void testRuntimeExceptionRollback() throws MyException {
        final String ssn = "RuntimeException";
        SimplePerson person = new SimplePerson();
        Assert.assertNull(personSvc.findBySsn(ssn));

        person.setSsn(ssn);
        try {
            personSvc.create(person);
            Assert.fail("Should have gone boom");
        } catch (MyRuntimeException success) {
            SimplePerson person2 = personSvc.findBySsn(ssn);
            Assert.assertNull(person2);
        }
    }

But actually the assert that was failing was the one on line 4, where it checks for dirty data by asserting that the record should not already exist.  (Should we be using the Assume class, which New Ben just told me about a day or few ago, for this type of pre-checking?)

Solution to Complication 1

The solution to this confusion was to put some text on these asserts so they could be easily differentiated:

        Assert.assertNull("Should start out null", personSvc.findBySsn(ssn));
            Assert.assertNull("Should still be null due to rollback", person2);

Complication 2: No Clean Way to Clean Up

We wanted to clean up all the records in the @Before and @After methods, but you had to know the object Id in order to delete it.  With our existing DAO interface, we would have needed to loop calling findBySsn()… it was easier and cleaner to just add a deleteAll() method to the DAO and (after exposing it through the domain layer project to the service layer project) call that from our service layer test’s @Before and @After methods.
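The cleanup idea can be sketched with an in-memory stand-in for the DAO (hypothetical names; the real DAO is Hibernate-backed, so this is a sketch of the concept, not our code):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for the DAO (hypothetical names), sketching why a
// deleteAll() called from the test's @Before/@After methods keeps one
// run's leftovers from failing the next run.
public class DirtyDataDemo {
    public static class SimplePersonDao {
        private final Map<String, Long> bySsn = new HashMap<String, Long>();
        private long nextId = 1;

        public Long create(String ssn) { bySsn.put(ssn, nextId); return nextId++; }
        public Long findBySsn(String ssn) { return bySsn.get(ssn); }
        public void deleteAll() { bySsn.clear(); }   // the new cleanup method
    }

    public static void main(String[] args) {
        SimplePersonDao dao = new SimplePersonDao();

        // A previous run that failed to throw left a record behind...
        dao.create("RuntimeException");

        // ...so without cleanup, the "Should start out null" pre-check trips.
        // Calling deleteAll() from @Before/@After restores a clean slate:
        dao.deleteAll();
        if (dao.findBySsn("RuntimeException") != null) {
            throw new AssertionError("Should start out null");
        }
    }
}
```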

We were finally in a position to test out our changes to the transactional advice.

The Little Integration Test that Didn’t: part 1

Our system is set up to automagically wrap each call coming into the service layer in a transaction (using AOP transaction advice), automatically committing the transaction if the call completes normally and automatically rolling back the transaction if an exception is thrown… a RuntimeException or Error, that is.

Rolling back on RuntimeException or Error but not on a (checked) Exception is Spring’s default behavior:

Note however that the Spring Framework’s transaction infrastructure code will, by default, only mark a transaction for rollback in the case of runtime, unchecked exceptions; that is, when the thrown exception is an instance or subclass of RuntimeException. (Errors will also – by default – result in a rollback.) Checked exceptions that are thrown from a transactional method will not result in the transaction being rolled back.

Exactly which Exception types mark a transaction for rollback can be configured.

(from the Rolling Back section of the Transaction Management chapter of the Spring 2.5 Manual)
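That default boils down to an instanceof check.  Here's a sketch of the rule (my paraphrase in code, not Spring's actual implementation):

```java
// Sketch of the default rule quoted above (not Spring's actual code):
// roll back for RuntimeException and Error; not for checked Exceptions.
public class DefaultRollbackRule {
    public static boolean rollsBackByDefault(Throwable thrown) {
        return thrown instanceof RuntimeException || thrown instanceof Error;
    }

    public static void main(String[] args) {
        System.out.println(rollsBackByDefault(new IllegalStateException())); // true  (unchecked)
        System.out.println(rollsBackByDefault(new AssertionError()));        // true  (an Error)
        System.out.println(rollsBackByDefault(new java.io.IOException()));   // false (checked)
    }
}
```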

Battling Assumptions

While we framework guys assumed that any exceptions thrown would extend RuntimeException, the application guys didn’t share our assumption.  Their assumption was, “If an exception is thrown, the transaction rolls back.”  So the question was, should the transaction advice be changed to also apply to checked exceptions, or should the application-level exceptions be changed to extend RuntimeException?

Seems Ok to Change from Spring’s Default

I thought perhaps it was for some reason a bad idea to change this default rollback behavior — that perhaps there was an important reason Spring did it this way.  But a quick look at the Spring manual told me otherwise:

While the EJB default behavior is for the EJB container to automatically roll back the transaction on a system exception (usually a runtime exception), EJB CMT does not roll back the transaction automatically on an application exception (that is, a checked exception other than java.rmi.RemoteException). While the Spring default behavior for declarative transaction management follows EJB convention (roll back is automatic only on unchecked exceptions), it is often useful to customize this.

(from the Declarative Transaction Management section of the Spring 2.5 Manual)

Changing our Advice

There being no blockage from that quarter and not being able to think of a good reason why we’d want to commit our transactions when a checked exception was thrown, we changed our transactional advice bean from this:

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" />
        </tx:attributes>
    </tx:advice>

to this:

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" rollback-for="Exception"/>
        </tx:attributes>
    </tx:advice>

And, after some fits and starts, the integration test showed that transactions were rolling back for RuntimeException, Error, and Exception.  (Why the fits and starts?  That explanation will have to wait till next time.)

The convenience of going back in time

Eclipse has a really nice feature called “Local History”.  Allow me to demonstrate how it helped me just now:

We have a TransactionManager class that lets integration-test writers (those writing tests below the level where AOP transaction advice is automatically applied) do basic getTransaction(), commit() and rollback() operations without worrying about the details of the underlying Spring PlatformTransactionManager.

Once you instantiate our TransactionManager, it holds a reference to the PlatformTransactionManager bean.

It’s possible for a test writer to have a situation where they reinitialize the Spring application context but are still holding onto a PlatformTransactionManager bean reference from the prior instantiation.  These developers don’t tend to focus their energies on such low-level issues as we Framework guys do, so it could be quite a time-waster if it resulted in an obscure error they had to debug to get to the root of — we’d rather detect the situation and throw a more descriptive error.
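One shape the detect-and-throw could take (a sketch with hypothetical names, not our actual classes): record a context "generation" when the wrapper is created, and refuse to work once the context has been reinitialized.

```java
// Sketch (hypothetical names) of detecting a wrapper created against a
// prior application context: capture a context "generation" at
// construction time and compare it on each use.
public class StaleContextDemo {
    public static int contextGeneration = 1;   // bumped on every context refresh

    public static class TransactionManager {
        private final int boundGeneration = contextGeneration;

        public void getTransaction() {
            if (boundGeneration != contextGeneration) {
                throw new IllegalStateException(
                        "This TransactionManager was created against a prior application "
                        + "context; re-create it after reinitializing the context.");
            }
            // ...would delegate to the underlying PlatformTransactionManager here...
        }
    }

    public static void main(String[] args) {
        TransactionManager tm = new TransactionManager();
        tm.getTransaction();     // fine: same context generation
        contextGeneration++;     // simulate reinitializing the application context
        try {
            tm.getTransaction(); // now fails fast with a descriptive message
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```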

To that end, I had started yesterday morning writing a test that would show what currently happens in this scenario, in preparation for changing the code to do the detect-n-throw.

After this start though, I spent most of the day doing training.   It was fun!  But an interesting thing about doing interactive training with code examples: by the time it was all done, my Eclipse workspace was in quite an odd state!  Tests failing, things not compiling — because in the process of explaining I would just go try things here and there.  Afterward, the best thing to do was just revert everything back to what was in the repository.

Today I looked at my test class for the TransactionManager and noticed that my work on the scenario I’ve described was gone.  “Ah yes, I reverted the source…” I mused.  I started to type it in again, but it required enough thinking about what calls the mocks should expect that I hesitated, and remembered Local History.

Local history records a snapshot each time you save, and I save changes so often that a lot of times it’s hard to find just the local revision I need…

I decided to try it anyway.  I right-clicked on my TransactionManagerTest and chose Compare With -> Local History…

This added the History view, and I expanded Yesterday…

Notice the several saves leading up to 8:49 AM, and then the lone change at 3:16 PM.  3:16 PM looks like the revert, so let’s double-click on the 8:49 AM one to compare…

(The whitish bars along the right-hand side show where differences are.  Bigger changes show up as bigger whitish bars.  I clicked on the largish bar near the bottom.)

This is the method I coded yesterday!  Let’s put it back in the source: while it’s selected, press the “Copy Current Change from Right to Left” button, and…

There we are…beautiful!

Simple JMS transaction rollbacks work…

Last time, I thought I had AOP transaction advice in place, but I still was not seeing the rollbacks I thought I should. This time, we try simple JMS local transactions to see if they work, to have another data point…

If I change my DefaultMessageListenerContainer bean to remove the external JTA transaction manager reference and replace it with native JMS transactions like so:

    <jms:listener-container acknowledge="transacted" connection-factory="queueConnectionFactory">
        <jms:listener destination="gloriousTest.Queue" ref="messageListener" />
    </jms:listener-container>

…then I can throw an unchecked exception and it will roll back the receipt of the message.  The message is then redelivered several times, until it goes to the dead letter queue.  However, it doesn’t seem to pay attention to my setting that is supposed to roll back only my SpecialRuntimeExceptions, not all RuntimeExceptions (unless I need a no-rollback-for attribute in my tx:advice):

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" rollback-for="SpecialRuntimeException" />
        </tx:attributes>
    </tx:advice>
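If the missing piece really is a no-rollback-for attribute, my guess (untested) at what the advice would look like is this, relying on the more specific rollback-for rule winning for SpecialRuntimeException:

```xml
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="*"
                   rollback-for="SpecialRuntimeException"
                   no-rollback-for="RuntimeException" />
    </tx:attributes>
</tx:advice>
```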


Back to a JTA transaction manager, adding the propagation="REQUIRES_NEW" attribute didn’t help either (still no rollback when my message listener throws a SpecialRuntimeException):

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" rollback-for="SpecialRuntimeException" propagation="REQUIRES_NEW"/>
        </tx:attributes>
    </tx:advice>

Guess we’ll keep trying next time.

Trying out XA, part 2: transaction manager and JNDIView

Last Thursday I tried out the configuration from last time, and… fireworks!  (A big stack trace — appropriate for the day, no?)

So, I commented out the three bean IDs in the Spring bean file and tried again.  No big stack trace in JBoss (mind you, I’m not actually trying to do anything with the beans yet, just deploy the .war file and not see a big failure stack trace on the JBoss console).  That deployed ok.  And I’m able to uncomment the transactionManager bean — when I do, I get this new output on the JBoss console:

7d56b: defining beans [transactionManager]; root of factory hierarchy
08:09:27,205 INFO  [JtaTransactionManager] Using JTA UserTransaction:
08:09:27,221 INFO  [JtaTransactionManager] Using JTA TransactionManager: com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate@1d008ff

So that looks promising.  The DefaultMessageListenerContainer depends on both the transactionManager and the JNDI-looked-up queueConnectionFactory, and when I uncomment the queueConnectionFactory-JNDI-looked-up bean I get this big stack trace on the JBoss console.

The first error is:

08:26:12,253 ERROR [ContextLoader] Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'queueConnectionFactory': Invocation of init method failed; nested exception is javax.naming.NameNotFoundException: QueueConnectionFactory not bound

This NameNotFoundException on QueueConnectionFactory keeps coming up in the “Caused by” lines farther down in the stack trace.

Spring is not finding something JNDI-registered as QueueConnectionFactory.  This seemed intractable Thursday, but today I’m wondering:

Did I mix up the jndi-name with the ID?

Perhaps I got the jndi-name and the id mixed up in the Spring bean file.  In the bean file I have:

<jee:jndi-lookup id="queueConnectionFactory" jndi-name="activemq/QueueConnectionFactory"/>

and in the deployed activemq-jms-ds.xml there is

    <jndi-name>activemq/QueueConnectionFactory</jndi-name>

Those seem to match.

Let’s see if I can examine JBoss’s JNDI directory somehow to see how it has the queue registered.

Searching for JBoss’s JNDI Directory

I pointed my browser to http://localhost:8080 , where my local JBoss instance is listening.  After poking around in the Administration Console and looking a bit at Sun’s JNDI Tutorial, I thought I’d look in the JMX Console to see if I could find how activemq/QueueConnectionFactory was listed… or any other JNDI thing, for that matter.

Under the jboss heading I see a JNDIView link that looks promising.

Following that link, I get to a page about the JNDIView service and its operations.

I press Invoke on the list operation and find myself on an Operation Results page that has interesting JNDI stuff listed out.

Examining JBoss’s JNDI directory

On this page, I search for queue and find this under a heading titled java: Namespace:

  +- activemq (class: org.jnp.interfaces.NamingContext)
  |   +- QueueConnectionFactory (class: org.apache.activemq.ra.ActiveMQConnectionFactory)
  |   +- TopicConnectionFactory (class: org.apache.activemq.ra.ActiveMQConnectionFactory)

Ok, continuing to search for queue, I find this under the heading Global JNDI Namespace:

  +- queue (class: org.jnp.interfaces.NamingContext)
  |   +- A (class:
  |   +- testQueue (class:
  |   +- ex (class:
  |   +- DLQ (class:
  |   +- D (class:
  |   +- C (class:
  |   +- B (class:

And, just as interesting, farther down under the same heading:

+- QueueConnectionFactory (class: org.jboss.naming.LinkRefPair)

To my untrained eye, it looks like in the global JNDI namespace, we have queue/A, queue/B, etc… but then just plain QueueConnectionFactory without the activemq/ before it. I even remember seeing entries like queue/A, queue/B, etc. in the JBoss console when it started up:

08:09:07,645 INFO  [A] Bound to JNDI name: queue/A
08:09:07,645 INFO  [B] Bound to JNDI name: queue/B
08:09:07,645 INFO  [C] Bound to JNDI name: queue/C
08:09:07,661 INFO  [D] Bound to JNDI name: queue/D
08:09:07,661 INFO  [ex] Bound to JNDI name: queue/ex
08:09:07,692 INFO  [testTopic] Bound to JNDI name: topic/testTopic
08:09:07,692 INFO  [securedTopic] Bound to JNDI name: topic/securedTopic
08:09:07,707 INFO  [testDurableTopic] Bound to JNDI name: topic/testDurableTopic
08:09:07,707 INFO  [testQueue] Bound to JNDI name: queue/testQueue
08:09:07,754 INFO  [UILServerILService] JBossMQ UIL service available at : /
08:09:07,785 INFO  [DLQ] Bound to JNDI name: queue/DLQ

I bet if one of our Spring beans referred to a queue/A jndi-name, it would be found.

So if the global JNDI namespace holds the JNDI names to which a Spring bean file might refer, perhaps in my Spring bean file I should put plain QueueConnectionFactory as the jndi-name, instead of activemq/QueueConnectionFactory.
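If so, the bean definition would presumably become (untested):

```xml
<jee:jndi-lookup id="queueConnectionFactory" jndi-name="QueueConnectionFactory"/>
```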

Reluctance to Use the Short JNDI-name

Let’s sit on that thought for a minute, because I’d rather use the more descriptive (?) activemq/QueueConnectionFactory as the jndi-name in the Spring bean file if possible. Looking at the java: Namespace, it would appear that the QueueConnectionFactory would be specified as activemq/QueueConnectionFactory there, if the tree is built the same as the queue/A under the Global JNDI Namespace heading.  To access the java: JNDI namespace, do we just prepend java: to the JNDI-name?

Accessing the java: JNDI Namespace

That would make our bean definition look like this:

    <jee:jndi-lookup id="queueConnectionFactory" jndi-name="java:activemq/QueueConnectionFactory"/>

I rebuilt and redeployed, and I don’t get a stack trace this time.  No positive log messages about Spring finding the queueConnectionFactory, but at least no stack trace.  I guess I was hoping for a little more positive feedback before I went on…

The Droids We Are Looking For

But while searching through the JBoss console messages for the queue/A, queue/B… messages, I found this!

08:09:06,367 INFO  [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=activemq/QueueConnectionFactory' to JNDI name 'java:activemq/QueueConnectionFactory'

Aha!  It looks like that is the right name syntax.

Now that we have defined the JtaTransactionManager and QueueConnectionFactory beans, we can go on to define the DefaultMessageListenerContainer bean, which depends on both of them.  But we’ll save that for Part 3.

Trying out XA: untested theory

Ok, let’s give XA a try.

0. What do we need?

The Spring AbstractMessageListenerContainer API doc says:

[W]rap the entire processing with an XA transaction, covering the reception of the message as well as the execution of the message listener. This is only supported by DefaultMessageListenerContainer, through specifying a “transactionManager” (typically a JtaTransactionManager, with a corresponding XA-aware JMS ConnectionFactory passed in as “connectionFactory”).

Keith adds that we’ll also need a Hibernate SessionFactory bean… but we’ll leave that out for this first round.

1. The ConnectionFactory Part

For the ConnectionFactory part, we’ll use ActiveMQ’s ConnectionFactory implementation, and it will be XA-aware because we are configuring it as a JNDI datasource (do I have my terms right?) in JBoss.  So, how do we do that?

1.1. Configuring the Datasource

We dropped a file named activemq-jms-ds.xml, which Keith got from somewhere (maybe from an article on ActiveMQ’s site titled Integrating Apache ActiveMQ with JBoss), into the JBoss deploy folder*.  The file starts out something like this:

    <!DOCTYPE connection-factories
        PUBLIC "-//JBoss//DTD JBOSS JCA Config 1.5//EN"
        "http://www.jboss.org/j2ee/dtd/jboss-ds_1_5.dtd">
    <!-- More stuff here... -->

Notice that this configuration file specifies XA transactions and points to an activemq-ra.rar file.

*Interestingly to me, not to the config folder.

1.1.1. The thing that matters about this

This is a bunch of configuration, but what matters from the Spring bean file’s point of view is the JNDI name, because that’s how we’ll refer to it from there.

1.2. The RAR file

We had also deployed the activemq-ra.rar file to JBoss.  Inside that .rar file, under a META-INF folder, is an ra.xml file.  From looking at the xsd file, other Sun documentation, and the <display-name> tag within the file, it seems that this file is a Connector Deployment Descriptor for the ActiveMQ JMS Resource Adapter.  Apparently it’s not only a connector, it’s also a resource adapter.  (I bet that’s why there’s an “ra” in the filename.)

At any rate, under connector -> resourceadapter -> outbound-resourceadapter, one of the <connection-definition> tags under there has these entries:

    <connectionfactory-interface>javax.jms.QueueConnectionFactory</connectionfactory-interface>
    <connectionfactory-impl-class>org.apache.activemq.ra.ActiveMQConnectionFactory</connectionfactory-impl-class>

So it looks like this is how the app server knows what implementation of the javax.jms.QueueConnectionFactory interface ActiveMQ provides.  (I’ve been doing a lot of guessing around — I was missing some background on JCA.  ActiveMQ’s article Integrating Apache ActiveMQ with JBoss, which I didn’t see until this post was half written, helps fill in some of these gaps.)

2. Tying It All Together

There are some other things we’ll need to tie this all together:

  • A bean definition for the queue connection factory to reference
  • A bean definition for a JtaTransactionManager to reference
  • A DefaultMessageListenerContainer bean in our Spring beans file

For the first one, we refer to the Spring manual appendix A.2.3. The jee schema; the latter two are based on the examples in section 19.4.5. Processing messages within transactions.  The concept of referencing a jndi-lookup bean as if it were just a normal bean is from the JavaWorld article, the example where it defines an appJmsDestination bean pointing to the JNDI name and then sets the JmsTemplate’s defaultDestination property to reference this bean.

So here’s the proposed Spring bean file:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:jee="http://www.springframework.org/schema/jee"
           xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                               http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.5.xsd">

        <jee:jndi-lookup id="queueConnectionFactory" jndi-name="activemq/QueueConnectionFactory"/>
        <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>

        <bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
            <property name="connectionFactory" ref="queueConnectionFactory"/>
            <property name="destination" ref="destination"/>
            <property name="messageListener" ref="messageListener"/>
            <property name="transactionManager" ref="transactionManager"/>
        </bean>
    </beans>

3. It’s All A Bit Backwards…

Now the crazy thing is, none of this may work — I haven’t actually tried any of this.  There are so many new (to me) concepts flying around that I needed to write down what I was going to try to do just to keep track of it all.  (It’s like trying to keep seven concepts in your head when you can only do five at once — you keep losing focus on one of the important concepts and have to work to bring it back while hopefully not losing focus on another one.)

Next time, we’ll see if it works!

Spring and local JMS transactions

Here’s some stuff about how you apparently* use Spring with local JMS transactions enabled.

Sending messages

For sending messages, it looks like you’d just call setSessionTransacted(true) on the jmsTemplate instance.  Seems simple enough.
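In bean-file terms, that would presumably look like this (untested; the bean id and connection-factory name are whatever your project already uses):

```xml
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="jmsConnectionFactory" />
    <property name="sessionTransacted" value="true" />
</bean>
```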

Receiving messages

To enable local JMS transactions on the receiving end, looks like you just call setSessionTransacted(true) on the DefaultMessageListenerContainer.  So I guess you’d have something like this in your Spring beans file:

    <bean id="myListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="concurrentConsumers" value="${consumerThreadCount}" />
        <property name="connectionFactory" ref="jmsConnectionFactory" />
        <property name="destination" ref="myQueue" />
        <property name="messageListener" ref="myQueueListener" />
        <property name="sessionTransacted" value="true" />
    </bean>

Hey, look, that’s just what the example in the Spring manual JMS chapter’s section on Transactions shows!

Managed vs. Unmanaged Transactions

There’s another concept here: managed vs. unmanaged transactions.  The above example is how you’d enable transactions for an unmanaged session (which I think in Spring means you have not set the transactionManager property on your message listener container).  The Spring manual, Section 19.4.5. Processing messages within transactions, speaks to this as well.


So… how do you roll back

  • a message in a transacted but un-transaction-managed JMS session in Spring?
  • a message in a transaction-managed JMS session in Spring?

Guess we’ll need a third installment.  :)

*I haven’t actually tried any of this, I’m just looking through the Spring manual.

XA transactions or not? Part 2

This is a continuation of an earlier post, as I continue to learn about XA transactions and try to guess if we’ll use ’em or not.  (I’d just try it out, but I’m not quite there yet, so I’m trying to keep track of what I’m reading as I go along.)

Good discussion:

Setting this flag to “true” will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. The latter has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.

Hmm, a “synchronized local JMS transaction”.  Keith points out, though, that there is a point in time when the database transaction has just been committed but the receiving of the message has not been committed, and if you blow up then, you’re out of sync.  Rats.

But wait, that’s where the custom duplicate message detection comes in.  If you blow up trying to commit the receiving of your JMS message, that means that when you come up again you’ll receive the message again even though you already processed the message before.  This is the scenario where you’d need to be able to detect that you’d already processed this message.
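At its core, the duplicate detection is just remembering which message IDs you've already processed.  A stdlib-only sketch of the idea (hypothetical names; a real version would persist the processed IDs in the same database transaction as the work, since the whole point is surviving a crash between the two commits):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of duplicate message detection (hypothetical names). A real
// version would persist processed IDs alongside the business data, in
// the same database transaction, so the record of "already processed"
// survives a crash between the DB commit and the JMS commit.
public class DuplicateDetector {
    private final Set<String> processedMessageIds = new HashSet<String>();

    /** True if the message should be processed; false if it's a redelivered duplicate. */
    public boolean firstTimeSeen(String messageId) {
        return processedMessageIds.add(messageId);   // add() returns false if already present
    }

    public static void main(String[] args) {
        DuplicateDetector detector = new DuplicateDetector();
        System.out.println(detector.firstTimeSeen("msg-1"));  // true: process it
        System.out.println(detector.firstTimeSeen("msg-1"));  // false: redelivery, skip it
    }
}
```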

XA resolves the problem a different way, by making sure the duplication never occurs.

Update: Quotes

It’s funny, in the reading I’ve done, what people say about whether to use XA or not.  Here are a few examples:

“Note that XA transaction coordination adds significant runtime overhead, so it might be feasible to avoid it unless absolutely necessary.” —AbstractMessageListenerContainer javadoc

“If you are using more than one resource; e.g. reading a JMS message and writing to a database, you really should use XA…The problem with XA is it can be a bit slow…This adds significant cost (in terms of latency, performance, resources and complexity)…So a good optimisation is to use regular JMS transactions – with no XA…” —ActiveMQ JMS FAQ: Should I use XA?

Increasingly, people want to hit multiple resources simultaneously – their ERP system, their RDBMS, and maybe publish a few messages or throw something on a JMS queue while they’re at it – and they want to do this in a transactionally sound manner. And the way to do this is with XA. And along the way, many developers are finding out about the pitfalls of XA along with the positives, and are also discovering an old adage coming alive once again: “Yep, XA sucks, but for us it sucks less than the alternatives”…And even when all the vendors get it right, I think this article has shown that error recovery in XA can be an arduous task – and harrowing for the operations people to boot. But don’t dismiss XA outright – for all the faults, the benefits are quite real. But it pays to know what you’re really buying into, and hopefully this article has brought a few people a small step closer to what’s really going on under the hood, and how it can affect them. —Mike Spille

Transactions: more to learn

Reading through the transaction management chapter of the Spring 2.5 manual, here’s what it says about transaction propagation:

[N]ormally all code executed within a transaction scope will run in that transaction. However, there are several options specifying behavior if a transactional method is executed when a transaction context already exists: for example, simply continue running in the existing transaction (the common case); or suspending the existing transaction and creating a new transaction…

The manual then goes on to say

These settings reflect standard transactional concepts. If necessary, please refer to a resource discussing transaction isolation levels and other core transaction concepts because understanding such core concepts is essential to using the Spring Framework or indeed any other transaction management solution.

(Emphasis mine.) Hmm. Looks like I have some more learning to do!


Another nugget, this one from Section 9.5.3 Rolling back:

The recommended way to indicate to the Spring Framework’s transaction infrastructure that a transaction’s work is to be rolled back is to throw an Exception from code that is currently executing in the context of a transaction. The Spring Framework’s transaction infrastructure code will catch any unhandled Exception as it bubbles up the call stack, and will mark the transaction for rollback.

Offhand, this seems really good – if something bad happens I would expect there would be an exception involved, and it will be great if we don’t have to explicitly translate all those exceptions to whatever the transaction mechanism wanted instead.
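The pattern the manual describes amounts to a try/catch wrapped around the transactional work; here's a stdlib-only sketch of the idea (not Spring's actual infrastructure code):

```java
// Sketch (not Spring's code) of "an unhandled exception marks the
// transaction for rollback": run the work in a try block, commit on
// normal completion, roll back when anything escapes.
public class RollbackOnException {
    public interface Work { void run() throws Exception; }

    public static String runInTransaction(Work work) {
        // a real transaction manager would begin() the transaction here
        try {
            work.run();
            return "committed";                  // normal completion
        } catch (Throwable t) {
            return "rolled back: " + t.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(runInTransaction(() -> { }));
        System.out.println(runInTransaction(() -> { throw new RuntimeException("boom"); }));
    }
}
```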

Transactions in the Spring Framework

This JavaWorld article pointed me to Spring’s PlatformTransactionManager interface. The API docs for that interface say this:

This is the central interface in Spring’s transaction infrastructure. Applications can use this directly, but it is not primarily meant as API: Typically, applications will work with either TransactionTemplate or declarative transaction demarcation through AOP.

Next: Figure out what this TransactionTemplate is.