ANSI Isolation Levels - DBMS Vendors don't 100% agree
Before we get into this we need to note there is some debate over relating ANSI isolation levels with databases that support multiversion concurrency control (MVCC).
Quoting from wikipedia: http://en.wikipedia.org/wiki/Snapshot_isolation
"Unfortunately, the ANSI SQL-92 standard was written with a lock-based database in mind, and hence is rather vague when applied to MCC systems. Berenson et al. wrote a paper in 1995  critiquing the SQL standard, and cited snapshot isolation as an example of an isolation level that did not exhibit the standard anomalies described in the ANSI SQL-92 standard, yet still had anomalous behaviour when compared with serializable transactions."
Basically, ANSI isolation levels are not the be-all and end-all for some DBMS vendors. ANSI SQL:1999 doesn't mention Lost Updates, Read Consistency or Snapshot Isolation, for example.
Effects on Lazy Loading
You should be aware of the effects of the transaction isolation levels on all statements, but it is worth looking specifically at lazy loading here.
I think this issue has been clouded by some other vendor documentation, so it is worth being a bit more explicit about how lazy loading is affected by the transaction isolation level.
When do you want a higher Isolation Level
So if the default is READ_COMMITTED, when would you want a higher isolation level such as SERIALIZABLE?
I think there are two cases and I'm only going to answer one of them. Specifically, those cases are where you want a "read only/query only" transaction and where you want an "updatable" transaction.

Read Only/Query Only SERIALIZABLE
There is an Oracle term called "Transaction level Read Consistency" and that is essentially what you get when you run a transaction at SERIALIZABLE. It is probably worth researching this term, but the guts of it is that every statement in the transaction "sees the database as at the transaction start time". For me this is a good explanation of why we would want a "Read Only SERIALIZABLE Transaction".
For an example of when you want "Transaction level Read Consistency", let's take an accounting trial balance report and say that two queries are used to build the report. The first queries for the Debits and the second for the Credits. What we explicitly do not want is for a transaction to occur during our report and have the new data included in the second query but not the first. If this occurred then our Total Debits would not match our Total Credits.
In summary we want both queries to see the database as at the same time (transaction start time).
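To make the anomaly concrete, here is a toy simulation in plain Java (no real database involved; the "ledger" list and the DR/CR entries are invented for illustration). It shows the report going out of balance when each query sees the latest committed data, and staying balanced when both queries read one snapshot fixed at transaction start:

```java
import java.util.ArrayList;
import java.util.List;

public class TrialBalanceDemo {

    record Entry(String type, int amount) {}

    // sum all entries of a given type ("DR" or "CR")
    static int sum(List<Entry> ledger, String type) {
        return ledger.stream()
                .filter(e -> e.type().equals(type))
                .mapToInt(Entry::amount)
                .sum();
    }

    public static void main(String[] args) {
        List<Entry> ledger = new ArrayList<>(List.of(
                new Entry("DR", 100), new Entry("CR", 100)));

        // READ_COMMITTED style: each query sees the latest committed data
        int debits = sum(ledger, "DR");          // query 1 sees 100
        ledger.add(new Entry("DR", 50));         // a balanced journal entry
        ledger.add(new Entry("CR", 50));         // commits mid-report...
        int credits = sum(ledger, "CR");         // query 2 sees 150
        System.out.println(debits == credits);   // false - report doesn't balance

        // SERIALIZABLE style: both queries read a snapshot fixed at txn start
        List<Entry> snapshot = List.copyOf(ledger);
        ledger.add(new Entry("DR", 25));         // later commits are invisible
        System.out.println(sum(snapshot, "DR") == sum(snapshot, "CR")); // true
    }
}
```

The second half is exactly the "sees the database as at the transaction start time" behaviour: the concurrent commit lands in the ledger, but not in the snapshot the report is reading from.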
Another approach to achieve the same goal, but without using a SERIALIZABLE transaction, is to use predicates on the queries to make sure they both see "consistent" data and exclude newly committed data. What this means is that, with some knowledge of the application, developers can use Date or Timestamp type predicates to make sure the queries are consistent with respect to each other.
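A minimal sketch of that predicate approach, assuming a hypothetical ledger table with a created_at timestamp column (the table, column names and SQL here are invented for illustration). The point is simply that one cutoff value is captured before either query runs, and both queries share it:

```java
import java.time.Instant;

public class ConsistentReportQueries {

    // Both report queries are pinned to the same cutoff, so rows committed
    // after the report started are excluded from each query equally.
    static String balanceQuery(String type, Instant cutoff) {
        return "select sum(amount) from ledger"
                + " where type = '" + type + "'"
                + " and created_at <= '" + cutoff + "'";
    }

    public static void main(String[] args) {
        Instant cutoff = Instant.now(); // captured once, before either query
        System.out.println(balanceQuery("DR", cutoff));
        System.out.println(balanceQuery("CR", cutoff));
    }
}
```

In real code you would pass the cutoff as a bind parameter rather than concatenating it into the SQL string; the concatenation above just keeps the sketch self-contained.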
Another approach available in some environments is to ensure that updates do not occur during the reporting. This can happen if you have a job system and serialise the updating and reporting so that they don't occur concurrently.

Updatable SERIALIZABLE
Unfortunately I am not going to explain why you would want to do this. What I can say is that you should be aware that you will now have to use pessimistic locking, and that you may be leaving the "sweet spot" of your database implementation, with higher chances of facing deadlocks etc. Good luck :)
Using Transaction Isolation Level
// use the default Isolation level.
// Likely to be READ_COMMITTED
Transaction txn = Ebean.beginTransaction();

// start a transaction at SERIALIZABLE
Transaction txn = Ebean.beginTransaction(TxIsolation.SERIALIZABLE);

To start a transaction at SERIALIZABLE you just need to specify it in beginTransaction() via TxIsolation. Otherwise the transaction will default to your default setting, which is likely to be READ_COMMITTED.
You do not need to set the isolation level back if you are using Ebean's connection pool, which will reset the isolation level automatically.
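For reference, the same setting exists underneath any Java ORM at the JDBC level, where isolation levels are integer constants on java.sql.Connection (this is standard JDBC API; no live database is needed just to inspect the constants):

```java
import java.sql.Connection;

public class JdbcIsolation {
    public static void main(String[] args) {
        // Standard JDBC isolation constants. A pool or ORM applies them with
        // connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE)
        // and this is the setting a pool must reset when the connection is returned.
        System.out.println(Connection.TRANSACTION_READ_COMMITTED); // 2
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);   // 8
    }
}
```

This is why the connection pool matters here: the isolation level is a property of the underlying connection, so whichever layer hands connections back to the pool is responsible for restoring the default.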