How to read maven-enforcer-plugin’s RequireUpperBoundDeps rule failure report

Scenario: To make use of a new feature in a certain dependency, or to get a fix for a bug that’s causing your project pain, you bump the version of a dependency in your pom file — maybe the parent pom version. Then you type

mvn compile

…put your hands over your eyes, and gingerly press Enter.

And?

You may see something like this:

[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireUpperBoundDeps failed with message:
Failed while enforcing RequireUpperBoundDeps. The error(s) are [
Require upper bound dependencies error for org.slf4j:slf4j-api:1.7.11 paths to dependency are:
+-com.example.blah:blah-service:2.0.1-SNAPSHOT
  +-org.slf4j:slf4j-api:1.7.11
and
+-com.example.blah:blah-service:2.0.1-SNAPSHOT
  +-com.netflix.hystrix:hystrix-core:1.5.4
    +-org.slf4j:slf4j-api:1.7.11 (managed) <-- org.slf4j:slf4j-api:1.7.10
and
+-com.example.blah:blah-service:2.0.1-SNAPSHOT
  +-com.example.framework:example-core:0.5.9-SNAPSHOT
    +-org.slf4j:slf4j-api:1.7.11 (managed) <-- org.slf4j:slf4j-api:1.7.7
and
+-com.example.blah:blah-service:2.0.1-SNAPSHOT
  +-com.example.framework:example-core-data:0.5.9-SNAPSHOT
    +-org.slf4j:slf4j-api:1.7.11 (managed) <-- org.slf4j:slf4j-api:1.7.5
...

…only it goes on for screens and screens. What happened? Each stanza between the “and”s is one path through your dependency tree to the artifact in question; the version to the left of the <-- arrow is the one Maven resolved (here, the managed 1.7.11), and the version to the right is the one that path originally declared.
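The usual way to resolve an upper-bound conflict (a generic sketch, not something this post prescribes) is to pin the offending artifact, via dependencyManagement, to the highest version required anywhere in the tree:

```xml
<!-- Sketch: force every path to agree on the highest slf4j-api version
     mentioned anywhere in the enforcer's report. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.11</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```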


Maven artifact scopes revisited

In a prior post about when to use which scope for a Maven artifact, I left out the provided scope, because, as I said at the time, “I don’t get as confused about that one so I left it out.” I spoke too soon.  : )

When I said that, I was only thinking of which classpath(s) the artifact would be available in within its immediate project.  But there are two additional concerns besides classpath availability: whether the artifact will be built into the resulting (.war) target, and whether artifacts depending on this one will inherit the dependency.  The latter issue is also affected by the <optional> tag.

The following table reflects my current understanding of how the scope and optionalness of a dependency affect:

  • The classpath(s) in which the artifact is available;
  • Whether the artifact is included in the war; and
  • When artifact A declares a dependency on artifact D: whether artifacts declaring a dependency on artifact A will also transitively get a dependency on artifact D.
Scope    | Optional? | (main) compile | (main) runtime | test* | Included in war** | Propagates transitively
---------|-----------|----------------|----------------|-------|-------------------|------------------------
compile  | No        | X              | X              | X     | X                 | X
compile  | Yes       | X              | X              | X     | X                 |
runtime  | No        |                | X              | X*    | X                 | X
runtime  | Yes       |                | X              | X*    | X                 |
test     | No        |                |                | X     | ?                 | X
test     | Yes       |                |                | X     | ?                 |
provided | No        | X              |                | X     |                   |
provided | Yes       | X              |                | X     |                   |
system   | No        | X              |                | X     |                   |
system   | Yes       | X              |                | X     |                   |

*Note that the “test” classpath is used both at test compile time and test runtime; it doesn’t seem to be possible to specify that an artifact should be available at test runtime but not test compile time.

**I don’t think it necessarily has to be packaged as a war… I’m still trying to come up with a concrete example.
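For reference, here’s what declaring a dependency’s scope and optional flag looks like in a pom (a generic sketch; the artifact chosen is just an illustration):

```xml
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
    <version>2.5</version>
    <scope>provided</scope>      <!-- on the compile and test classpaths, not packaged -->
    <optional>true</optional>    <!-- artifacts depending on us don't inherit this -->
</dependency>
```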

A problem that didn’t show up until later

Earlier this fall, a problem showed up when the database guys first contributed Hibernate mappings.  It was unclear how the Hibernate mappings could be causing the error we were getting — and it turned out that they weren’t; rather, they were exposing a different error that had previously been there, undetected.

I find these types of things really annoying before they’re figured out, and fascinating afterward.  Here’s an email I sent out afterward:


From: Daniel Meyer
Sent: Wednesday, September 24, 2008 3:42 PM
Subject: RE: Build Failures: resolved

The “deployment fails the second time” issue has been resolved, and the CI servers are back in business.

For those who may be interested, the problem turned out to be a Maven config issue in [the functional test project]’s pom file.

(Note: you may need your techno-gibberish decoder goggles for the next details: )

In making ojdbc available to Liquibase for when we deliver schema changes, the ojdbc jar file got included in the war file.  The config has been this way for several weeks, but we didn’t notice it until we just in the last few days got to the point of contributing Hibernate mappings, which starts the Persistence service, which needs ojdbc to actually work.  At that time, the ojdbc jar in the war file fought with the ojdbc jar in JBoss’s server/default/lib directory, and just like when you were fighting with your sister over that cookie, nobody won and we got the NoClassDefFoundError.

(Or something vaguely similar to that.)

Thanks to [those who helped] for their help with this!

Now back to our regular programming…

-Daniel-

When to use which scope

I keep getting confused about which artifact scope to use: compile, test, or runtime?  So here’s a little algorithm to help me (it’s a conservative algorithm, trying to depend on libraries only to the extent necessary):

  • If the artifact is not needed by the production code but only by tests, use test scope.
  • Otherwise, if production code needs the artifact at runtime but does not need to code against its classes and interfaces (for instance, if all references to the artifact in production code are in Spring bean files), use runtime scope.
  • Otherwise, if the production code needs to code against the artifact’s classes and interfaces, use compile scope.

So in my algorithm you consider test scope first, and open it up to runtime or compile scope only if necessary.
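Applied to some typical artifacts, the algorithm gives declarations like these (my illustrations, not from the original post):

```xml
<!-- Needed only by tests -> test scope -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.4</version>
    <scope>test</scope>
</dependency>

<!-- Referenced only from Spring bean files at runtime -> runtime scope -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.5.2</version>
    <scope>runtime</scope>
</dependency>

<!-- Production code imports its classes -> compile scope (the default) -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>2.5.5</version>
</dependency>
```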

Ultimate power!

I use Eclipse’s External Tools feature to run a lot of Maven goals: clean install, eclipse:clean eclipse:eclipse, etc.  I have an external tool configuration set up for each of these common commands.

Sometimes, though, I want to try out a goal I don’t use as often.  dependency:tree and dependency:list I use almost often enough to deserve their own external tool configurations… but from time to time I find out about other Maven plugins that I want to try out without having to clutter up the external tools list with a special entry for each one.

As an alternative, sometimes I try out these goals from the Windows command prompt, but it’s a pain to navigate to my project’s working directory each time before executing the command.  I probably experiment less with such commands than I would otherwise because of the effort to prepare to run them — I just get by without their output, without really thinking about it.

Yesterday, though, Keith was helping me get set up to do JBoss remote debugging in Eclipse (a whole ‘nother topic), and in the process of setting it up, one of us wondered if you could set up an external tool that just dropped you to a Windows command prompt so you could type whatever command you want.

You can!  Here’s my configuration, which I call Ultimate Power:

[Screenshot: the “Ultimate Power” Eclipse external tool configuration]
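Since the screenshot hasn’t survived, here’s roughly what such a configuration contains (my reconstruction; ${project_loc} is Eclipse’s variable for the selected project’s directory, the rest is assumed):

```
Name:              Ultimate Power
Location:          C:\Windows\System32\cmd.exe
Working Directory: ${project_loc}
Arguments:         (none)
```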

When I run this, it puts a Windows command prompt sitting in my project directory in a Console window, and now I can type my command:

[Screenshot: the command prompt running inside the Eclipse Console view]

Coo-hoo-hool!

How to locally add a repository in Maven

I’m experiencing an issue where if I have my producer send 100 JMS messages to my ActiveMQ queue while a consumer is listening, the listener will pick up maybe 90 of them and then no more.  This is not good!  I need to find out why this is.

I’m running ActiveMQ 5.1.0 at the moment, but one of the suggestions I found was to upgrade to the 5.2.0 Release Candidate.  I updated my pom to depend on that version, but since this artifact is not in the main Maven repository yet, Maven couldn’t find it and I couldn’t build my project.

I had run into this type of thing before when I was trying out a Bitronix Transaction Manager 1.3 release candidate.  What I needed was to add this to my pom:

    <repositories>
        <repository>
            <id>activemq-5.2.0-release-candidates</id>
            <url>http://people.apache.org/~gtully/staging-repos/activemq-5.2.0</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>

That worked — after running what we call “Maven Eclipse” (mvn eclipse:clean eclipse:eclipse), activemq-core-5.2.0.jar now shows up in my Referenced Libraries!

I thought they were out to get me

First, some background:

Keith just recently changed our can’t-mock-dis Startup service from a plain ol’ Singleton (in the Gang-of-Four sense, not in the Spring sense) to an Abstract Factory so that — woohoo! — we can now mock out the Startup service and its application-context-creatin’ slowness when we want to.

This gives us such great flexibility in the testing department, and the cost is a slightly wordier syntax when you’re getting an instance of the real StartupSvc.  This syntax changed from:

StartupSvc startupSvc = StartupSvc.getStartupSvc();

to

StartupSvc startupSvc = new StartupSvcFactoryImpl().getStartupSvc();

But these calls aren’t littered throughout the application code — it’s kicked off by ServletContextListeners and such — so there are really only a few places that even have to change.  It’s great!
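In code, the change looks roughly like this (a hypothetical sketch; only the type names come from the post, the bodies are assumed):

```java
// Hypothetical sketch of the refactoring. The factory interface is the
// piece tests can now mock in place of the slow real service.
interface StartupSvc {
    void startUp();
}

interface StartupSvcFactory {
    StartupSvc getStartupSvc();
}

class StartupSvcFactoryImpl implements StartupSvcFactory {
    public StartupSvc getStartupSvc() {
        // The real implementation is where the slow
        // application-context creation would happen.
        return new StartupSvc() {
            public void startUp() { /* no-op in this sketch */ }
        };
    }
}
```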

A Happy Move

So anyway, earlier this week when Keith committed his changes, I had eventually Maven Eclipsed a couple of prototype .war projects I was working on and noticed that I needed to update to the new syntax for getting an instance of the StartupSvc.  I was glad to see that the abstract factory stuff had been committed.

But then today, I Maven Eclipsed again, and now Eclipse was saying it didn’t know what a StartupSvcFactoryImpl was and trying to get me to return to the old syntax.

Reversion Revulsion

It appeared that Keith had reverted his changes and removed the abstract factory.  Had he encountered a nasty problem and reverted it until we could figure out a solution?  I didn’t want to go back to the old way!  Why didn’t he tell me?!  (This last one was a bit hypocritical on my part, since I’m not very good yet myself at noticing when my changes are breaking changes that will affect others on the project and notifying them accordingly.)

Silly Me

Then I realized that, oops, hadn’t I just Maven Install’d the Startup module (with a minor change I needed to test with on my other project) to my local Maven repository from my local working copy?  And how long had it been since I had updated the source code of that project?

Oopsy oops, I had just made the StartupSvc revert, by installing an older version.  I just updated the Startup project from Subversion, then Maven Installed it again, then Maven Eclipsed my .war prototype… and the abstract factory is back.

Silly me!  :)

The Little Integration Test that Didn’t, part 2

In part 1, we walked through how we decided to add the rollback-for="Exception" attribute to our transactional advice.  What we didn’t discuss is how these changes dovetailed with changes to the existing integration test.
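For context, the advice in question looks something like this (a sketch of typical Spring XML; the bean and method names are assumed):

```xml
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <!-- roll back on checked exceptions too,
             not just RuntimeException and Error -->
        <tx:method name="*" rollback-for="Exception"/>
    </tx:attributes>
</tx:advice>
```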

The Integration Test, in its Natural Habitat

The integration test consists of three modules, simulating the layers in our system.  There is a persistence project, a domain project, and a service project.  The service-level project has classes named such that the transactional AOP advice applies to their methods.

So far, so good, right?

The Existing Rollback Test

The service-level project contained an integration test, a junit class with testSuccessfulCommit() and testRollback() methods.  The testRollback() method passed a special string such that, when the call got down to the persistence layer, the persistence layer would recognize it and throw a RuntimeException.  Then the integration test would catch that and verify that the data could not be found in the database — the idea being that if the thing we just wrote to the database couldn’t be found, the rollback must have succeeded.

Testing Rollback on Exceptions and Errors

When we changed the transaction advice to also roll back on Exception, we weren’t sure if that would remove the default of rolling back on RuntimeException and Error.  So we modified the DAO at the persistence layer, which could already throw a RuntimeException, to recognize requests for Exception and Error as well.  Then we modified the service-layer integration test to test the transactional behavior for those kinds of exceptions (with a bunch of Maven installs and (possibly unnecessary) Maven Eclipses being run between all these changes).

Two Problems

When we changed the transactional advice to include Exception, we were getting really odd failures in the service-level integration test.  These errors were caused by two problems in the integration test that banded together against us:

1. An Error of Omission: we never tried to write!

Here was what we’d updated the create() method to look like:


    public Long create(SimplePerson persistentObject) throws MyException {
        // Simulate something going wrong at the persistence layer
        if (persistentObject.getSsn().equals("Exception")) {
            throw new MyException();
        } else if (persistentObject.getSsn().equals("RuntimeException")) {
            throw new MyRuntimeException();
        } else if (persistentObject.getSsn().equals("Error")) {
            throw new MyError();
        }

        Session session = sessionFactory.getCurrentSession();
        return (Long) session.save(persistentObject);
    }

Do you see the problem?

If we passed the special SSN to have create() throw an exception, it would never even try to save the persistent object to disk.  So when (back at the service layer) we told it to throw an Exception, a rollback appeared to have occurred — leastways, the Person Id couldn’t be found in the database.

The persistence layer needed to really save the object:


    public Long create(SimplePerson persistentObject) throws MyException {
        Session session = sessionFactory.getCurrentSession();
        Long id = (Long) session.save(persistentObject);
        session.flush();

        // Simulate something going wrong at the persistence layer
        if (persistentObject.getSsn().equals("Exception")) {
            throw new MyException();
        } else if (persistentObject.getSsn().equals("RuntimeException")) {
            throw new MyRuntimeException();
        } else if (persistentObject.getSsn().equals("Error")) {
            throw new MyError();
        }

        return id;
    }

Now (after Maven installing this, I mean) our test would show the true state of things — once we cleaned up our dirty data!

2. Dirty Data

Yes, we’d unwittingly gotten into a state where data left over from a failed test caused future runs of the test to fail.  What would happen is that if we expected a test to throw an exception but it didn’t (in our case because I accidentally didn’t pass one of the words the persistence layer was watching for in the SSN field), a record would be written and never cleaned up.

Complication 1: Not Realizing Which Assert Failed

I didn’t realize the dirty data problem for a while (resulting in a few extra rounds of changing the AOP advice beans in that project, Maven Installing that project, and re-running the tests at the integration test service layer), because when the test resulted in an AssertionError involving Assert.assertNull, I assumed it was the final assertNull at the end of the test, the one that tests whether the record finally ended up on disk or not.

Here’s what one of the test methods looked like:


    @Test
    public void testRuntimeExceptionRollback() throws MyException {
        final String ssn = "RuntimeException";
        Assert.assertNull(personSvc.findBySsn(ssn));
        SimplePerson person = new SimplePerson();

        //...
        try {
            personSvc.savePerson(person);
            Assert.fail("Should have gone boom");
        } catch (MyRuntimeException success) {
            SimplePerson person2 = personSvc.findBySsn(ssn);
            Assert.assertNull(person2);
        }
    }

But actually the assert that was failing was the one on line 4, where it checks for dirty data by asserting that the record should not already exist.  (Perhaps we should be using the Assume class, which New Ben told me about a few days ago, for this type of pre-checking.)

Solution to Complication 1

The solution to this confusion was to put some text on these asserts so they could be easily differentiated:


        Assert.assertNull("Should start out null", personSvc.findBySsn(ssn));
        //...
            Assert.assertNull("Should still be null due to rollback", person2);

Complication 2: No Clean Way to Clean Up

We wanted to clean up all the records in the @Before and @After method, but you had to know the object Id in order to delete it.  With our existing DAO interface, we would have needed to loop calling findBySsn()… it was easier and cleaner to just add a deleteAll() method to the DAO, and (after exposing it through the domain layer project to the service layer project) then call that from our service layer test’s @Before and @After method.
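The shape of that change, sketched with an in-memory stand-in for the real Hibernate DAO (everything here is illustrative; only deleteAll() and findBySsn() come from the post):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: the DAO grows a deleteAll() so the test's
// @Before/@After methods can wipe leftover records without first
// having to look up each record's id.
interface PersonDao {
    Long create(String ssn);
    String findBySsn(String ssn);   // null when no such record exists
    void deleteAll();
}

class InMemoryPersonDao implements PersonDao {
    private final Map<Long, String> rows = new HashMap<Long, String>();
    private long nextId = 1;

    public Long create(String ssn) {
        rows.put(nextId, ssn);
        return nextId++;
    }

    public String findBySsn(String ssn) {
        return rows.containsValue(ssn) ? ssn : null;
    }

    public void deleteAll() {
        rows.clear();   // this is what @Before and @After would call
    }
}
```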

We were finally in a position to test out our changes to the transactional advice.

The tests only run if…

When I did a Maven Install on my project, it didn’t run the tests.  (It had been doing this lack-of-thing for several days, and I just thought it was some weird side effect of it being a Maven multimodule project.)

Well, it turns out that in our setup, Maven’s Surefire plugin is configured to run only tests whose class names end in …Test.  My class names usually do, but for this multimodule example they weren’t really unit tests, so I had called them BlahBlahExample.  (That naming worked fine for running them in Eclipse.)

So, I just renamed the classes to end in …Test, and now my tests run when I Maven Install ’em.
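An alternative we didn’t take would have been to tell Surefire to pick up the …Example classes too (a sketch; whether you want this depends on your setup):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <includes>
            <include>**/*Test.java</include>
            <include>**/*Example.java</include>
        </includes>
    </configuration>
</plugin>
```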

(Thanks for the tip, Keith!)

If you forgot to set your svn:keywords…

Somehow the Subversion auto-props didn’t cause my svn:keywords to automatically be set in my classes, and now I had 32 classes that, if I set the properties through Eclipse, I’d have to fix manually.

Me no like 32 repetitions of same thing.

So I looked for a way to do it from the command line.

Here’s a way that worked for me:

(I ran the following once from the project’s src/main/java, and once from src/test/java)

svn propset -R svn:keywords "Author Id Revision Date" .

Part of why this works so easily is that, guided by the Maven defaults, we’re set up with our Java source code split out from the other files into src/{main,test}/java.  If the XML configuration files were in the same directory with Java source code, it would take more work to set the property on just the Java files, I think.

Update 10/10/2008: Ben (“New Ben”, I mean) helped me find two good things!

Why auto-props weren’t working

Even though I had the line

*.java = svn:keywords=Author Id Revision Date

in the [auto-props] section of my Subversion config (C:\Documents and Settings\myUserName\Application Data\Subversion\config), it wasn’t adding those keywords on commit.

It turns out there was one additional thing to do: in the [miscellany] section of the same config file, I needed to uncomment the line that reads:

enable-auto-props = yes

Ha!
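Putting the two pieces together, the relevant parts of the Subversion config file end up looking like this:

```
[miscellany]
enable-auto-props = yes

[auto-props]
*.java = svn:keywords=Author Id Revision Date
```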

Mass Keyword-Adding Using Eclipse

The other thing Ben showed me was how to recursively set the keywords from within Eclipse.

  1. Right-click on the source folder (src/main/java, for instance) and choose Team -> Set Property…
  2. For Property name, enter svn:keywords (a warning appears at this point, but we will satisfy it soon)
  3. In the “Enter a text property” box, enter (in my case) Author Id Revision Date
  4. Check the Set property recursively box
  5. Click OK.

I then repeated this procedure on the src/test/java folder.

Thanks, Ben!