1) Have them weigh the implications of the virtual machine swapping data between memory and disk versus their code doing it explicitly. Sometimes it's just as good, with no extra work, to let the VM do it. Sometimes it is much faster when they know something about the data's characteristics and can target their optimizations accordingly.
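For example, plain java.nio memory-mapped files hand the paging work to the OS: you touch a buffer, the OS faults pages in and out as it sees fit, and the Java heap stays small no matter how big the data is. A minimal sketch, where the file name and the 1 GB region size are just placeholders:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedDemo {
    public static void main(String[] args) throws Exception {
        // Map a large file region; the OS pages it in and out on demand,
        // so the JVM heap stays small regardless of the data size.
        try (RandomAccessFile file = new RandomAccessFile("data.bin", "rw");
             FileChannel channel = file.getChannel()) {
            long size = 1L << 30; // 1 GB region (placeholder size)
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
            buf.putLong(0, 42L);            // touching a page faults it in
            System.out.println(buf.getLong(0));
        }
    }
}
```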
2) Consider using multiple JVMs, either cooperating or running in parallel, with the scope of work divided up somehow. Sometimes even running 4 JVMs of the same application on the same machine, each using 1/4 of the memory, can be better. (It depends on your application's characteristics.)
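A bare-bones way to try this is a driver that launches N worker JVMs, each capped with its own -Xmx and told which shard of the data it owns. The `Worker` class and the 8g-per-JVM figure here are hypothetical, just to show the shape of it:

```java
import java.util.ArrayList;
import java.util.List;

public class LaunchWorkers {
    public static void main(String[] args) throws Exception {
        int jvmCount = 4;
        String heapEach = "-Xmx8g"; // e.g. 1/4 of a 32 GB budget (hypothetical)
        List<Process> workers = new ArrayList<>();
        for (int shard = 0; shard < jvmCount; shard++) {
            // Each worker gets its own heap cap and a shard id telling it
            // which quarter of the data to operate on.
            ProcessBuilder pb = new ProcessBuilder(
                "java", heapEach, "Worker",
                String.valueOf(shard), String.valueOf(jvmCount));
            pb.inheritIO();
            workers.add(pb.start());
        }
        for (Process p : workers) {
            p.waitFor();
        }
    }
}
```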
3) Consider a memory management framework like Terracotta's BigMemory. There are competitors in this space; VMware has Elastic Memory, for example. These tools store data in memory outside the JVM's heap to avoid the GC issue, or they allocate very large heap objects and manage that space independently of Java.
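The raw JDK mechanism underneath the off-heap approach is a direct ByteBuffer, which lives outside the Java heap and is therefore never scanned or moved by the collector. A tiny sketch (the 256 MB slab size is arbitrary):

```java
import java.nio.ByteBuffer;

public class OffHeapStore {
    // Direct buffers are allocated outside the Java heap, so the data
    // they hold adds nothing to GC work.
    private final ByteBuffer slab = ByteBuffer.allocateDirect(256 * 1024 * 1024); // 256 MB

    void putLongAt(int offset, long value) {
        slab.putLong(offset, value);
    }

    long getLongAt(int offset) {
        return slab.getLong(offset);
    }
}
```

Products like BigMemory essentially do this at scale, adding serialization, eviction, and a map-style API on top.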
4) JVMs do OK(TM) when the memory is taken up by live objects that migrate to the older generations for garbage collection. GC can still be a problem there, and tuning it is a black art. Be sure to kill a chicken and spread the feathers over the virtual machine before attempting to optimize GC. :) Oh ... and there are some really interesting comments on this related question: Java very large heap sizes
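One cheap way to take some of the guesswork out of it is to ask the JVM itself what the collectors are doing, using the standard management beans. Something along these lines:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcReport {
    public static void main(String[] args) {
        // Print cumulative collection counts and pause time per collector,
        // to see which generation is doing the work on a big heap.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Run your workload, print this periodically, and you can see whether old-generation collections are the ones eating your time.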
5) Sometimes you set the JVM heap size and other factors beyond your control keep it from growing that large. Be sure to do some testing to confirm it can really grow to the size you set.
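A quick sanity check along these lines: ask Runtime what the JVM thinks its ceiling is, then force the heap to expand with real allocations and see how far it actually gets (the 64 MB chunk size is arbitrary):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("max heap reported: %d MB%n", rt.maxMemory() / (1024 * 1024));

        // Force the heap to actually expand; if the OS or a container
        // limit caps you below -Xmx, it shows up here.
        List<byte[]> hold = new ArrayList<>();
        long grownTo = rt.totalMemory();
        try {
            while (true) {
                hold.add(new byte[64 * 1024 * 1024]); // 64 MB chunks
                grownTo = rt.totalMemory();
            }
        } catch (OutOfMemoryError expected) {
            hold.clear(); // release so we can report safely
            System.out.printf("heap actually grew to: %d MB%n", grownTo / (1024 * 1024));
        }
    }
}
```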
And number 5 is the real key item. Test. Test. And test some more. You are out on the edges of explored territory.