Why I finally ditched Hibernate native APIs for JPA

If you're looking for the short answer, go check out the Spring Data JPA project.  This is an incredible product that offers a tremendous productivity boost for projects using JPA.  If you're interested in the more gory details, read on below :)

I've been a Hibernate user since 2005 and have used JPA + Hibernate Annotations since 2006 when 1.0 of the JPA spec was released.  Unlike many others I did not immediately jump to the JPA APIs (EntityManager, PersistenceContext, etc) and continued using the native Hibernate APIs (Session, SessionFactory, etc).  JPA was still missing quite a few useful features such as a Criteria API and I wasn't ready to give that up just to use a "standard API."  When JPA 2.0 was released in late 2009 the feature sets of the two products were generally comparable making the decision a little tougher.  But since I've never really bought into the "vendor portability" promise of JPA, I continued happily on with native Hibernate APIs to much success.

Over the past month or two, I've come to the decision it's time to fully embrace JPA.

JPA has grown beyond its original purpose as an object-relational mapping framework into a more generic persistence API.  NoSQL / data grid solutions have become incredibly important and popular over the past few years.  Several JPA-based implementations for these solutions have already been developed, including Google App Engine / Big Table and Hibernate's own Object/Grid Mapper (OGM).  I started to experiment with GAE about a month ago and was surprised how quickly I could be productive with its JPA implementation.  While JPA likely isn't the best fit for the diverse range of NoSQL implementations out there, the ease of use for JPA developers is undeniable.

While industry trends are important, I've finally found the killer app for JPA: the Spring Data JPA project.  At its core, SDJ is about generating JPQL at runtime so you don't have to write tedious queries.  Some of the awesome features include:

  • Out-of-the-box support for data pagination and sorting.
  • Query creation from method names.  A method signature like findByEmailAddressAndLastName(String emailAddress, String lastName) creates a backing query that does exactly what you'd expect.
  • Specification API to define and combine predicates in a manner similar to the Criteria API.
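To give a flavor of the features above, here's a sketch of a repository interface using method-name query derivation and built-in pagination.  The User entity and its properties are hypothetical; Spring Data JPA generates the implementation of the interface at runtime:

```java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical User entity assumed.  No implementation class is written by
// hand -- Spring Data JPA derives the JPQL from the method names at runtime.
public interface UserRepository extends JpaRepository<User, Long> {

    // Roughly equivalent to: select u from User u
    //   where u.emailAddress = ?1 and u.lastName = ?2
    User findByEmailAddressAndLastName(String emailAddress, String lastName);

    // Pagination and sorting come for free via the Pageable parameter
    Page<User> findByLastName(String lastName, Pageable pageable);
}
```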

Check out the reference documentation for Spring Data JPA.  The project just dropped its first 1.0 release candidate.  I'm excited to see what they'll come up with in future releases.

Starting with Git and github on Windows

My attempt at a very brief tutorial for starting with Git on Windows:

  1. Head over to http://github.com and create an account.
  2. Create a new repository next.  Please remember that unless you pay for a github account, any repository you create will be PUBLIC.  After creating the project, keep the page open so you can refer to "Next Steps" later.
  3. Download and install msysgit from http://code.google.com/p/msysgit/downloads/list; look for a file in the form of Git-version-date.exe with the "Featured" tag.  Use the installer's defaults.
  4. Launch the "Git Bash" shell and set up your identity:
    • git config --global user.name "Your Name"
    • git config --global user.email "username@email.com"
  5. Next create an SSH key pair for yourself using the following command:
    • ssh-keygen -C "username@email.com" -t rsa
  6. The file will be created at c:\Users\username\.ssh\id_rsa.pub; copy the contents of this file and return to github.  Click on "Account Settings" and then "SSH Public Keys."  Add your key here by pasting the content into the provided text box.  Your local Git installation and GitHub account are now properly linked on this machine.
  7. Refer back to the "Next Steps" section mentioned earlier.  Open "Git Bash" and follow the steps to create a new project and push it to GitHub.  Your project is now ready.  You can write some useful code!

If you prefer to avoid the command line for Git operations, launch the "Git GUI" and follow these steps when you want to make a commit and push:

  1. Press the "Rescan" button to see your changes
  2. Select the files you wish to commit from the list in the upper-left and choose "Commit" -> "Stage To Commit"
  3. Enter a commit message at the bottom of the screen and press "Commit."  The changes are now saved in the local repository.
  4. Click the "Push" button and examine the popup window.  Note that the "Destination Repository" is remote and named "origin"; this refers to the repo at github.  Click "Push" again to send your commits.
  5. Your changes are now pushed to GitHub. You can verify this by browsing to the project home page.

I hope others find this tutorial useful.  A more detailed tutorial can be found here: http://kylecordes.com/2008/git-windows-go

HFCD for Flash Builder: Build Your Flex App 2-3x Faster

HFCD is an extension for Flash/Flex Builder that delegates compilation of your Flex application to a special "compiler daemon" which can run locally or on a remote machine.  The goal of the project is simple: faster builds!  HFCD is the brainchild of Clement Wong, the former compiler engineering lead on the Flex SDK team.  Here are a few useful things to understand about HFCD:

  • HFCD installs as a Flex Builder plugin which delegates compilation to a separate OS-level process running either locally or on a remote machine.
  • The compiler daemon process is persistent, meaning it continues to run across multiple builds.  This allows the Java virtual machine to progressively optimize the compiler's code paths with each build that is run.  The JVM is *very* good at this.
  • The Flex Builder plugin watches for file modifications and immediately pushes these changes to the compiler daemon process.  The daemon has an internal representation of the project file system and will launch internal incremental builds automatically when files change.

So, how fast is it really?  I benchmarked HFCD on two different machines.  I used the Flex 3.4.1 SDK for compilation and ran clean builds of my application each time.  My test project was a real-world Flex app currently in development consisting of about 15 modules and 350 MXML files and ActionScript classes.

2006 Intel Macbook Pro, Core Duo 2.16 GHz, 2 GB RAM, 7200 RPM HD, Leopard 10.5, 32-bit Java 5

  • Stock: 135 seconds average
  • HFCD (1st run): 155 seconds
  • HFCD (successive): 75 seconds average

Intel Core i7 920 @ 3.2 GHz, HT off, 12 GB RAM, dual 7200 RPM HD's in RAID 0, Vista 64-bit, 32-bit Java 6

  • Stock: 65 seconds average
  • HFCD (1st run): 52 seconds
  • HFCD (successive): 21 seconds average (!!!)

As indicated in the documentation, the performance of HFCD increases dramatically after the first build due to the numerous optimizations in HellFire and the JVM itself.  The Macbook was nearly 2x faster while the Windows box was just over 3x faster.  Very impressive and a real time saver!

I hope to post some new benchmarks soon.  I need to do some more research to get HFCD running on a 64-bit JVM as that isn't supported out of the box.  Also I'd like to configure my Macbook to delegate the compilation to my Windows box, especially to ascertain what kind of impact the network topology has on build performance.

Subversive install for Eclipse has finally improved!

About a year ago the Subversive plugin moved into the Eclipse foundation and became the "official" SVN provider plugin for the platform. Unfortunately, due to a licensing issue, popular SVN connectors such as JavaHL and SVNKit cannot be shipped with the Eclipse IDE, so a fully-functional SVN Team Provider can't currently (and may never?) ship with Eclipse. The steps previously required to install and get Subversive working were pretty tedious, so I'm happy to report that recently this experience has been greatly improved! There are now only two very straightforward steps to get Subversive running:

Launch the Eclipse plugin installer dialog and select "Subversive SVN Team Provider."

[Screenshot: installing the "Subversive SVN Team Provider" plugin]

After installation, restart the workbench.  Then open the "SVN Repository Explorer" perspective.  The following dialog will be displayed automatically:

[Screenshot: the SVN connector selection dialog]

Choose an SVN connector (I typically use the latest SVNKit), click Finish, restart the workbench one more time and you're ready to go.  Hopefully the licensing issues will be resolved eventually so that Eclipse can ship with the Subversive SVN Team Provider by default, but at least in the meantime the installation's been made pretty painless.

Mixing and Matching Spring JdbcTemplate and HibernateTemplate

The JdbcTemplate and HibernateTemplate convenience classes from Spring really make working with the respective APIs a breeze. Unfortunately getting both of these classes to work together within a single Transaction is not straightforward. This comes up very frequently in JUnit tests where you want to verify Hibernate is working with the database in the way you expect, either by inserting data and letting Hibernate load it or by checking to see that Hibernate creates the data you expect. The same will hold true in application code where you need to add JDBC code alongside Hibernate code to meet various requirements. The testing scenarios are simple and illustrative so let's explore those.

One common use case is to persist an object with HibernateTemplate and then verify the data was inserted correctly using JdbcTemplate. Usually Hibernate will not flush the data out to the DB until the transaction commits, meaning that the query done by JdbcTemplate won't be able to see the new data. This one isn't hard to work around: just call HibernateTemplate.flush() to execute the SQL on demand so that subsequent calls to JdbcTemplate will see the new data.
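A sketch of this first scenario, assuming Spring-injected templates inside a Spring-managed transaction (the Person entity and PERSON table are hypothetical):

```java
// Runs inside a Spring-managed transaction, e.g. a transactional JUnit test.
// Person entity and PERSON table are hypothetical examples.
Person person = new Person("Alice");
hibernateTemplate.save(person);

// Force Hibernate to execute its pending INSERT now rather than at commit,
// so plain JDBC queries in the same transaction can see the new row.
hibernateTemplate.flush();

int count = jdbcTemplate.queryForInt(
        "select count(*) from PERSON where NAME = ?", "Alice");
// count is now 1; without the flush() it would still be 0
```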

The second use case is a lot trickier: let's say you want to create some data with JdbcTemplate and then make sure that calls to HibernateTemplate will see that data. By default this will not work. You can actually insert with JdbcTemplate, make a call to load the data with HibernateTemplate (it won't find it) and then make another call to JdbcTemplate which will show that the data is there. The problem is that since JdbcTemplate is injected with a DataSource it doesn't really have any knowledge of the transactions from HibernateTransactionManager; thus operations from the two templates are isolated from one another.

Fortunately Spring offers a solution in the TransactionAwareDataSourceProxy class. Just like the name implies, this class acts as a wrapper for an existing DataSource so that all collaborators will participate in Spring-managed transactions. Configuration of this class is trivial:

<bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
  <property name="targetDataSource">
    <bean class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
      ...
    </bean>
  </property>
</bean>

Note: you may or may not want to define the "real" DataSource as an inner bean that doesn't get registered in the ApplicationContext itself. If you are autowiring your DataSource purely by type, having two different implementations of DataSource will be a problem for you. Workarounds include autowiring using @Qualifier or using @Resource to inject the bean by name.
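With the proxy in place, the second use case from above works as expected.  Here's a sketch (again with a hypothetical Person entity and PERSON table):

```java
// Because jdbcTemplate now sees the TransactionAwareDataSourceProxy, this
// INSERT joins the transaction opened by HibernateTransactionManager.
// Person entity and PERSON table are hypothetical examples.
jdbcTemplate.update(
        "insert into PERSON (ID, NAME) values (?, ?)", 42L, "Bob");

// Hibernate issues its SELECT on the same transactional connection,
// so it can see the uncommitted row inserted above.
Person person = hibernateTemplate.get(Person.class, 42L);
// person is non-null; without the proxy, Hibernate would not find the row
```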