Posts Tagged AOP

It helps to have a message listener container

I couldn’t figure out why the AOP advice wasn’t running around the MessageListener’s onMessage() method in my integration test.  I had declared the MessageListener as a bean (the advice only gets applied to beans) and relaxed the pointcut expression so the class wouldn’t have to reside in a particular subpackage; the setup looked good, but still no advice.  I ratcheted the logging level up and saw the message being sent, so why wasn’t the advice running?  I began to have that piteous, put-upon feeling you get when you think you’re doing everything right but the program is unreasonably refusing to cooperate.

Then I realized that I hadn’t declared a DefaultMessageListenerContainer in my Spring bean file.  And then I realized that I had only been seeing messages being sent, never received.  (The advice only applies when the message is received).  Oops!

I declared the message listener container bean, and messages began to be received with the advice applying as it should.  : )
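For reference, a minimal listener-container declaration might look something like this (a sketch only; the connection-factory, queue, and listener bean names here are assumptions, not the actual project’s):

```xml
<bean id="messageListenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <!-- All three referenced bean names are hypothetical, for illustration -->
    <property name="connectionFactory" ref="connectionFactory" />
    <property name="destination" ref="myQueue" />
    <property name="messageListener" ref="myMessageListener" />
</bean>
```

Without a container bean like this, nothing polls the queue, so onMessage() — and any advice around it — never runs.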


Applying advice based on type

When we add JMS queue support to our architecture, we’ll want to apply some AOP advice each time a listener receives a message.  We want to have a convention for how the listeners will be set up so that our advice will find them.

In a recent communication with the design leads, I proposed that one way we recognize a MessageListener would be a naming convention: its class name must match *MessageListener.

One of the design leads asked a great question: Could we go based on the class implementing the MessageListener interface instead of being named a certain way?  That would be slick, wouldn’t it?  I didn’t know the answer, though.  I only knew about matching based on name of the class/package/method/signature.
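The difference between the two matching strategies can be sketched in plain Java reflection terms.  (This is an illustration only — the Listener interface below is a stand-in for javax.jms.MessageListener, and the class names are made up.)  Name-based matching only catches conventionally named classes, while type-based matching catches every implementor:

```java
// Stand-in for javax.jms.MessageListener, for illustration only.
interface Listener {
    void onMessage(String message);
}

class OrderMessageListener implements Listener {   // matches by name AND by type
    public void onMessage(String message) { }
}

class OrderIntakeHandler implements Listener {     // matches by type only
    public void onMessage(String message) { }
}

public class MatchingDemo {
    static boolean matchesByName(Class<?> c) {
        // The naming-convention rule: class name must end in "MessageListener"
        return c.getSimpleName().endsWith("MessageListener");
    }

    static boolean matchesByType(Class<?> c) {
        // The interface rule: any implementor of Listener qualifies
        return Listener.class.isAssignableFrom(c);
    }

    public static void main(String[] args) {
        System.out.println(matchesByName(OrderIntakeHandler.class)); // the naming convention misses it
        System.out.println(matchesByType(OrderIntakeHandler.class)); // the interface check catches it
    }
}
```

A type-based pointcut would give us the second behavior: no class ever slips through just because someone named it unconventionally.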

Starting

Too much advice

This pointcut expression works, but applies to too much:

execution(void com.ontsys..svc..*.onMessage(..))

We want to modify it to only apply to classes that implement the javax.jms.MessageListener interface.

Trying

An ineffectual plus

At first it looked like the plus sign was what we needed.  The AspectJ™ Programming Guide’s Language Semantics appendix, in the Subtype patterns subsection of Type patterns, says:

It is possible to pick out all subtypes of a type (or a collection of types) with the “+” wildcard. The “+” wildcard follows immediately a type name pattern.

And the Spring manual contains an example of the + wildcard on an execution pointcut.

My first attempt was a misuse of the plus sign, though I didn’t know it right away:

execution(void com.ontsys..svc..MessageListener+.onMessage(..))

That didn’t work: when I deployed the project, the advice wasn’t applied to my listener.  (I could tell because the listener relies on something the AOP advice sets up, and it blew up when that wasn’t there.)

I see now that I was misusing the plus operator — I wanted the advice to apply to any implementor/subclass of javax.jms.MessageListener, but the way I had written it, it was as if I were trying to have the advice apply to any implementor/subclass of a type matching com.ontsys..svc..MessageListener.
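In hindsight, a declaring-type pattern that says what I actually meant would hang the + off the fully qualified interface name, something like:

```
execution(void javax.jms.MessageListener+.onMessage(..))
```

(I never went back and deployed this version — I ended up solving it with target() instead — so treat it as an untested sketch.)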

Escaping from the Ampersands

The Spring manual’s AOP pointcut examples section shows an example using “target”:

target(com.xyz.service.AccountService)

and says it matches “any join point (method execution only in Spring AOP) where the target object implements the AccountService interface”

I tried this:

execution(void com.ontsys..svc..*.onMessage(..)) && target(MessageListener)

On deployment, though, I got an XML parsing error about one of the ampersands: & is special in XML, so in a Spring bean file && must either be escaped as &amp;&amp; or replaced with the and keyword…

And…

I modified the pointcut expression to use the word and instead:

execution(void com.ontsys..svc..*.onMessage(..)) and target(MessageListener)

Now I got this error:

warning no match for this type name: MessageListener [Xlint:invalidAbsoluteTypeName]

Success

Noticing that the Spring manual’s AOP pointcut examples all seemed to show fully-qualified types, I thought I’d try fully qualifying the name of the MessageListener interface:

execution(void com.ontsys..svc..*.onMessage(..)) and target(javax.jms.MessageListener)

This one worked. Woohoo!
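Wired into an aop:config, the working expression might look like this (a sketch; the pointcut and advice ids are assumptions, and “securityAdvice” is a hypothetical advice bean):

```xml
<aop:config>
    <aop:pointcut id="onMessageOperation"
        expression="execution(void com.ontsys..svc..*.onMessage(..)) and target(javax.jms.MessageListener)" />
    <!-- "securityAdvice" is a hypothetical advice bean id -->
    <aop:advisor advice-ref="securityAdvice" pointcut-ref="onMessageOperation" />
</aop:config>
```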

Maybe I’m on my way to being a fully-qualified pointcut-expression writer.  :D


Customizing log4j logging on JBoss

I want to turn up the logging level on JBoss so that I can verify that the transactional AOP advice runs before the Spring-security-context-populatin’ advice.  The reason: I want us to already be in a transaction if something goes wrong in the security-context advice, so that the message will get rolled back onto the queue.

To be more precise, I want to turn up the logging level on the right categories of classes — my server/default/log/server.log is already 190MB today, and Notepad and Eclipse are having trouble opening it.

The Packages I want to log about

For the first round, I want to see DEBUG-level messages for my Spring-security-context-populatin’ advice (the com.ontsys.advice package in my prototype) and the org.springframework.transaction.jta package.

Adding Logging to my Class

First, I need to add logging to my class — it currently doesn’t log any messages.

Adding logging to a class is simple:

import org.apache.log4j.Logger;
public class MyClass {
    private static final Logger LOGGER = Logger.getLogger(MyClass.class); 

    public void decombobulate() {
        LOGGER.debug("Decombobulating...");
        //...
    }
}

Configuring jboss-log4j.xml

The Logging Conventions chapter of the JBoss Development Process Guide (see also the Logging page on the JBoss Wiki) explains how to log certain categories to a different file.

Only instead of ${jboss.server.home.dir}/log/ I’ll use the more elegant ${jboss.server.log.dir}/ .

Everything in its Proper Place

Note that according to the log4j 1.2.15 DTD, the elements have to be ordered with all appenders first, followed by categories, and then the root element:

<!ELEMENT log4j:configuration (renderer*, appender*,plugin*, (category|logger)*,root?,
                               (categoryFactory|loggerFactory)?)>

Additivity

Note also that there is an additivity attribute you can use on the categories.  The section called Redirecting Category Output has this to say about the additivity attribute:

The additivity attribute controls whether output continues to go to the root category appender. If false, output only goes to the appenders referred to by the category.

That would explain why my main server/default/log/server.log was so small the other time I tried adding a category appender — I think I had additivity=”false” from an example configuration I found somewhere, without understanding what it meant — so output that was going to my special log file was no longer going to my server.log file.

The DTD says that the additivity attribute defaults to “true” though, which is what I want, so I’m leaving it out.
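For contrast, here is what opting out of additivity would look like; a category like this sends its output only to the appenders it names and contributes nothing to server.log:

```xml
<category name="com.ontsys.advice" additivity="false">
    <priority value="DEBUG" />
    <appender-ref ref="AdviceLog" />
</category>
```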

My Changes to jboss-log4j.xml

Here’s what I eventually came up with:

<appender name="AdviceLog" class="org.apache.log4j.FileAppender">
    <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler" />
    <param name="Append" value="false" />
    <param name="File" value="${jboss.server.log.dir}/advice.log" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c] %m%n" />
    </layout>
</appender>
...
<category name="com.ontsys.advice">
    <priority value="DEBUG" />
    <appender-ref ref="AdviceLog" />
</category>
<category name="org.springframework.transaction.jta">
    <priority value="DEBUG" />
    <appender-ref ref="AdviceLog" />
</category>

Success

This worked great!  Only stuff from the com.ontsys.advice and org.springframework.transaction.jta packages in the advice.log, at the DEBUG level.

For Next Time

At first the advice.log file was getting way too much information… but we’ll stop here for the moment!


Who did you say you were?

Sometimes you want authentication — you don’t want to let a connection in to your system unless it supplies the right credentials.  Other times, you just want identification — for instance, you might want to remember who it was that initiated a request.

One example of this distinction is when you’re dealing with JMS queues.  The process that puts a message in a queue may have been involved in authenticating the user… but the process that takes that message out of the queue to process may not have been.  That receiving process needs a way to identify the user that placed the message in the queue.

The first part of the solution is to pass the userId along with the JMS message, to the queue.  You can do this by setting a String property on the JMS Message.
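On the sending side that can be as small as this (a sketch: the Session and MessageProducer wiring is assumed, currentUserId is a hypothetical variable, and the property name just has to match whatever the receiver reads):

```java
// Sketch: assumes an open javax.jms.Session and MessageProducer.
// The property name "userId" must match what the receiving side reads.
TextMessage message = session.createTextMessage("hello");
message.setStringProperty("userId", currentUserId);
producer.send(message);
```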

Before we can use LDAP to look up information about this user, though, we need to populate the Spring security context with this userId.  (Spring does support setting the SecurityContextHolder contents directly.)  So that the domain-level developers don’t have to worry about this, we can use AOP around-advice, like this:


    public Object doPopulateSecurityContext(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        Message message = (Message) proceedingJoinPoint.getArgs()[0];
        String userId = message.getStringProperty("userId");
        UsernamePasswordAuthenticationToken authenticationToken = new UsernamePasswordAuthenticationToken(userId, "");
        SecurityContext context = new SecurityContextImpl();
        context.setAuthentication(authenticationToken);
        SecurityContextHolder.setContext(context);

        try {
            return proceedingJoinPoint.proceed();
        } finally {
            // Clear the context even if onMessage() throws, so a pooled
            // listener thread can't carry this user's identity into the next message.
            SecurityContextHolder.clearContext();
        }
    }

This populates the information about the current user so that the MessageListener can look up things in LDAP based on that.


The Little Integration Test that Didn’t, part 2

In part 1, we walked through how we decided to add the rollback-for=”Exception” attribute to our transactional advice.  What we didn’t discuss is how these changes dovetailed with changes to the existing integration test.

The Integration Test, in its Natural Habitat

The integration test consists of three modules, simulating the layers in our system.  There is a persistence project, a domain project, and a service project.  The service-level project has classes named such that the transactional AOP advice applies to their methods.

So far, so good, right?

The Existing Rollback Test

The service-level project contained an integration test: a JUnit class with testSuccessfulCommit() and testRollback() methods.  The testRollback() method passed a special string so that, when the call got down to the persistence layer, the persistence layer would recognize it and throw a RuntimeException.  The integration test would then catch that and verify that the data could not be found in the database — the idea being that if the thing we just wrote to the database couldn’t be found, the rollback must have succeeded.

Testing Rollback on Exceptions and Errors

When we changed the transaction advice to also roll back on Exception, we weren’t sure if that would remove the default of rolling back on RuntimeException and Error.  So we modified the DAO at the persistence layer, which could already throw a RuntimeException, to recognize requests for Exception and Error as well.  Then we modified the service-layer integration test to test the transactional behavior for those kinds of exceptions (with a bunch of Maven installs and (possibly unnecessary) Maven Eclipses being run between all these changes).

Two Problems

When we changed the transactional advice to include Exception, we were getting really odd failures in the service-level integration test.  These errors were caused by two problems in the integration test that banded together against us:

1. An Error of Omission: we never tried to write!

Here was what we’d updated the create() method to look like:


    public Long create(SimplePerson persistentObject) throws MyException {
        // Simulate something going wrong at the persistence layer
        if (persistentObject.getSsn().equals("Exception")) {
            throw new MyException();
        } else if (persistentObject.getSsn().equals("RuntimeException")) {
            throw new MyRuntimeException();
        } else if (persistentObject.getSsn().equals("Error")) {
            throw new MyError();
        }

        Session session = sessionFactory.getCurrentSession();
        return (Long) session.save(persistentObject);
    }

Do you see the problem?

If we passed the special SSN to have create() throw an exception, it would never even try to save the persistent object to disk.  So when (back at the service layer) we told it to throw an Exception, a rollback appeared to have occurred — leastways, the Person Id couldn’t be found in the database.

The persistence layer needed to really save the object:


    public Long create(SimplePerson persistentObject) throws MyException {
        Session session = sessionFactory.getCurrentSession();
        Long id = (Long) session.save(persistentObject);
        session.flush();

        // Simulate something going wrong at the persistence layer
        if (persistentObject.getSsn().equals("Exception")) {
            throw new MyException();
        } else if (persistentObject.getSsn().equals("RuntimeException")) {
            throw new MyRuntimeException();
        } else if (persistentObject.getSsn().equals("Error")) {
            throw new MyError();
        }

        return id;
    }

Now (after Maven installing this, I mean) our test would show the true state of things — once we cleaned up our dirty data!

2. Dirty Data

Yes, we’d unwittingly gotten into a state where data left over from a failed test caused future runs of the test to fail.  If we expected a test to throw an exception but it didn’t (due in our case to my accidentally not passing one of the words the persistence layer was watching for in the SSN field), a record would be written and never cleaned up.

Complication 1: Not Realizing Which Assert Failed

I didn’t realize the dirty-data problem for a while (resulting in a few extra rounds of changing the AOP advice beans in that project, Maven Installing that project, and re-running the tests at the integration-test service layer), because when the test died with an AssertionError from Assert.assertNull, I assumed it was the final assertNull at the end of the test, the one that tests whether the record finally ended up on disk or not.

Here’s what one of the test methods looked like:


    @Test
    public void testRuntimeExceptionRollback() throws MyException {
        final String ssn = "RuntimeException";
        Assert.assertNull(personSvc.findBySsn(ssn));
        SimplePerson person = new SimplePerson();

        //...
        try {
            personSvc.savePerson(person);
            Assert.fail("Should have gone boom");
        } catch (MyRuntimeException success) {
            SimplePerson person2 = personSvc.findBySsn(ssn);
            Assert.assertNull(person2);
        }
    }

But actually the assert that was failing was the one on line 4, where the test checks for dirty data by asserting that the record should not already exist.  (Perhaps I should be using the Assume class, which New Ben told me about a few days ago, for this kind of pre-checking?)
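That Assume-based pre-check might look like this (JUnit 4.4+); an unmet assumption marks the test as skipped rather than failed, so dirty data stops masquerading as a rollback failure:

```java
// With org.junit.Assume (JUnit 4.4+): dirty data now skips the test
// instead of failing it, which points straight at the real cause.
Assume.assumeTrue(personSvc.findBySsn(ssn) == null);
```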

Solution to Complication 1

The solution to this confusion was to put some text on these asserts so they could be easily differentiated:


        Assert.assertNull("Should start out null", personSvc.findBySsn(ssn));
        //...
            Assert.assertNull("Should still be null due to rollback", person2);

Complication 2: No Clean Way to Clean Up

We wanted to clean up all the records in the @Before and @After methods, but you had to know an object’s Id in order to delete it.  With our existing DAO interface, we would have had to loop over findBySsn() calls… it was easier and cleaner to just add a deleteAll() method to the DAO and then (after exposing it through the domain-layer project to the service-layer project) call that from our service-layer test’s @Before and @After methods.
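The cleanup hook itself then becomes trivial (a sketch; personSvc here stands for the service-layer wrapper that now exposes deleteAll()):

```java
@Before
@After
public void cleanDatabase() throws Exception {
    // Wipe any leftovers from earlier failed runs, before and after each test.
    personSvc.deleteAll();
}
```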

We were finally in a position to test out our changes to the transactional advice.


The Little Integration Test that Didn’t: part 1

Our system is set up to automagically wrap each call coming into the service layer in a transaction (using AOP transaction advice), automatically committing the transaction if the call completes normally and automatically rolling back the transaction if an exception is thrown… a RuntimeException or Error, that is.

Rolling back on RuntimeException or Error but not on a (checked) Exception is Spring’s default behavior:

Note however that the Spring Framework’s transaction infrastructure code will, by default, only mark a transaction for rollback in the case of runtime, unchecked exceptions; that is, when the thrown exception is an instance or subclass of RuntimeException. (Errors will also – by default – result in a rollback.) Checked exceptions that are thrown from a transactional method will not result in the transaction being rolled back.

Exactly which Exception types mark a transaction for rollback can be configured.

(from the Rolling Back section of the Transaction Management chapter of the Spring 2.5 Manual)

Battling Assumptions

While we framework guys assumed that any exceptions thrown would extend RuntimeException, the application guys didn’t share our assumption.  Their assumption was, “If an exception is thrown, the transaction rolls back.”  So the question was, should the transaction advice be changed to also apply to checked exceptions, or should the application-level exceptions be changed to extend RuntimeException?

Seems Ok to Change from Spring’s Default

I thought perhaps it was for some reason a bad idea to change this default rollback behavior — that perhaps there was an important reason Spring did it this way.  But a quick look at the Spring manual told me otherwise:

While the EJB default behavior is for the EJB container to automatically roll back the transaction on a system exception (usually a runtime exception), EJB CMT does not roll back the transaction automatically on an application exception (that is, a checked exception other than java.rmi.RemoteException). While the Spring default behavior for declarative transaction management follows EJB convention (roll back is automatic only on unchecked exceptions), it is often useful to customize this.

(from the Declarative Transaction Management section of the Spring 2.5 Manual)

Changing our Advice

There being no blockage from that quarter and not being able to think of a good reason why we’d want to commit our transactions when a checked exception was thrown, we changed our transactional advice bean from this:


    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" />
        </tx:attributes>
    </tx:advice>

to this:


    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" rollback-for="Exception"/>
        </tx:attributes>
    </tx:advice>

And, after some fits and starts, the integration test showed that transactions were rolling back for RuntimeException, Error, and Exception.  (Why the fits and starts?  That explanation will have to wait till next time.)


XA, JNDI and Bitronix, part 4: dancing on the happy path

In part 3, we tried our first JMS message send; but even after we wrapped the message send in a transaction using AOP, BTM still reported that “resource ‘activemq/QueueConnectionFactory’ cannot be used outside XA transaction scope.”  Today we’ll work to resolve that issue.

Both Sends and Receives Need Transactions

Ludovic’s helpful comment on yesterday’s post confirms that we need to make sure both sends and receives are in transactions.  As of yesterday, we were only wrapping the message sends, but not receives, in a transaction — at least that was what we were trying to do!  As it turns out, the transaction-wrappage of the sends was unsuccessful, while on the other hand we were accidentally successfully wrapping the receives!  Here’s how it happened:

Wrapping Sends: If It Ain’t A Bean…

As you may recall from yesterday, the pointcut I had defined for applying transaction advice to message sends was as follows:

        <aop:pointcut id="messageReceiveOperation" expression="execution(void com.example.dmn.JndiBitronixTest.transactedSendMessage(..))" />

But as Keith points out, since my JndiBitronixTest class is not defined as a bean in my Spring beans files, the transaction advice will not be applied to the transactedSendMessage method execution.  Oops!  I didn’t realize that Spring AOP advice applied only to beans!  Fixing that is easy enough: the transactedSendMessage method calls one layer down into the MessageSender class, which is defined as a bean.  We can just start the transaction there instead:

        <aop:pointcut id="messageSendOperation" expression="execution(void com.example.dmn.MessageSender.sendMessage(..))" />

Wrapping Receives: Forgotten Advice

I was looking in the main beans file to verify that MessageSender was defined as a bean when I saw a surprising thing:  I already had transactional advice on the message receives, including the following pointcut:

        <aop:pointcut id="messageReceiveOperation" expression="execution(void com.example.dmn.MessageProcessor.processIt(..))" />

This advice came along when I brought this code over from my original XA example manual test that runs only on an app server.  I consolidated this advice with the send advice into the spring-beans-bitronix.xml file to yield this advice:

    <tx:advice id="txAdvice" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="*" />
        </tx:attributes>
    </tx:advice>

    <aop:config>
        <aop:pointcut id="messageSendOperation" expression="execution(void com.example.dmn.MessageSender.sendMessage(..))" />
        <aop:advisor advice-ref="txAdvice" pointcut-ref="messageSendOperation" />
    </aop:config>

    <aop:config>
        <aop:pointcut id="messageReceiveOperation" expression="execution(void com.example.dmn.MessageProcessor.processIt(..))" />
        <aop:advisor advice-ref="txAdvice" pointcut-ref="messageReceiveOperation" />
    </aop:config>

And this time both pointcuts point to classes that are Spring beans.  :)

Happy Path Seems to Work

Now our integration test can send a JMS message and the listener picks it up from the queue and writes it to the database, without exceptions or stack traces.  Furthermore, when I bump the log level to DEBUG for Bitronix classes, in my log4j.properties file:

log4j.logger.bitronix.tm=DEBUG

Then I see encouraging looking XA-type messages like the following:

DEBUG bitronix.tm.BitronixTransaction  - committing, 2 enlisted resource(s)
DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to PREPARING
DEBUG bitronix.tm.journal.TransactionLogAppender  - between 19089 and 19193, writing a Bitronix TransactionLogRecord with status=PREPARING, recordLength=96, headerLength=28, time=1219154728262, sequenceNumber=11, crc32=-1825120209, gtrid=737072696E672D62746D0000011BDB48D3FF00000000, uniqueNames=XAOracleDS,activemq/QueueConnectionFactory
DEBUG bitronix.tm.journal.TransactionLogAppender  - disk journal appender now at position 19193
DEBUG bitronix.tm.twopc.AbstractPhaseEngine  - executing phase on 2 resource(s) enlisted in 1 position(s) in natural position order

...

DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to PREPARED
...

DEBUG bitronix.tm.twopc.Preparer  - successfully prepared 2 resource(s)
DEBUG bitronix.tm.BitronixTransaction  - 2 interested resource(s)
DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to COMMITTING (forced)
...

DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to COMMITTED

There’s Also a One-Phase Commit In There (and that’s good)

Before I found that two-phase commit sequence in the log, I found another sequence farther up:


DEBUG bitronix.tm.BitronixTransaction  - committing, 1 enlisted resource(s)
DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to PREPARING
DEBUG bitronix.tm.journal.TransactionLogAppender  - between 18721 and 18813, writing a Bitronix TransactionLogRecord with status=PREPARING, recordLength=84, headerLength=28, time=1219154728200, sequenceNumber=7, crc32=-667556454, gtrid=737072696E672D62746D0000011BDB48D4AA00000003, uniqueNames=activemq/QueueConnectionFactory
DEBUG bitronix.tm.journal.TransactionLogAppender  - disk journal appender now at position 18813
DEBUG bitronix.tm.twopc.Preparer  - 1 resource enlisted, no prepare needed (1PC)
DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to PREPARED
DEBUG bitronix.tm.journal.TransactionLogAppender  - between 18813 and 18905, writing a Bitronix TransactionLogRecord with status=PREPARED, recordLength=84, headerLength=28, time=1219154728200, sequenceNumber=8, crc32=-2001767057, gtrid=737072696E672D62746D0000011BDB48D4AA00000003, uniqueNames=activemq/QueueConnectionFactory
DEBUG bitronix.tm.journal.TransactionLogAppender  - disk journal appender now at position 18905
DEBUG bitronix.tm.BitronixTransaction  - 1 interested resource(s)
DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to COMMITTING
DEBUG bitronix.tm.journal.TransactionLogAppender  - between 18905 and 18997, writing a Bitronix TransactionLogRecord with status=COMMITTING, recordLength=84, headerLength=28, time=1219154728200, sequenceNumber=9, crc32=236131807, gtrid=737072696E672D62746D0000011BDB48D4AA00000003, uniqueNames=activemq/QueueConnectionFactory
...

DEBUG bitronix.tm.BitronixTransaction  - changing transaction status to COMMITTED

I worried about the line that said only one resource was enlisted and so only a one-phase commit (1PC) was needed — “What happened to the database resource?” I wondered.  But then I remembered that this must be the message send (I tend to forget that that’s being transacted too), and the send does not involve the database resource.

Next Steps

I’d still like to have my integration test query the database to verify that the record really made it in there.
