I have a typical chunk step that reads from a DB, processes each record, and writes the output to a file, over a dataset with many millions (>10 million) of records.
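For concreteness, here is roughly what I have in mind for the reader side. This is a simplified sketch, not my actual code; the table, column, and JNDI names are placeholders:

```java
import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.annotation.Resource;
import javax.batch.api.chunk.AbstractItemReader;
import javax.inject.Named;
import javax.sql.DataSource;

@Named
public class RecordReader extends AbstractItemReader {

    @Resource(lookup = "jdbc/sourceDS") // placeholder JNDI name
    private DataSource dataSource;

    private Connection connection;
    private ResultSet resultSet;
    private long lastProcessedId; // checkpoint data: highest key read so far

    @Override
    public void open(Serializable checkpoint) throws Exception {
        // On restart the container passes back the last committed checkpoint,
        // so we resume after the last record that made it into the file.
        lastProcessedId = (checkpoint != null) ? (Long) checkpoint : 0L;

        connection = dataSource.getConnection();
        PreparedStatement statement = connection.prepareStatement(
                "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id",
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.HOLD_CURSORS_OVER_COMMIT); // keep cursor open across chunk commits
        statement.setFetchSize(1000); // stream rows instead of buffering millions
        statement.setLong(1, lastProcessedId);
        resultSet = statement.executeQuery();
    }

    @Override
    public Object readItem() throws Exception {
        if (!resultSet.next()) {
            return null; // null tells the container the step input is exhausted
        }
        lastProcessedId = resultSet.getLong("id");
        return resultSet.getString("payload");
    }

    @Override
    public Serializable checkpointInfo() {
        return lastProcessedId; // persisted by the container at each chunk boundary
    }

    @Override
    public void close() throws Exception {
        connection.close();
    }
}
```

The idea behind reading by key (`WHERE id > ?`) plus the checkpoint is that a restart can re-query from the last committed key instead of scanning from the beginning, which seemed important at this record count. The processor and file writer follow the same chunk pattern.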
Is there anything from a design or architecture point of view that should be kept in mind?
Also, are there any Java Batch-specific coding practices that need to be kept in mind, apart from general Java best practices?
I am using IBM's implementation of JSR-352 on WebSphere Liberty.