While jdbcTemplate.batchUpdate(...) is running, I can see the row count in the DB increase gradually (by running count(*)
on the table): initially 2k, then 3k, and so on up to 10k. 2k and 3k are not exact numbers; sometimes I get 235 and then 4567.
I was expecting all 10k rows (the batch size) to be committed in one shot. In my understanding, if the initial row count is 0, then the next count I observe should be 10k. I don't want one-by-one inserts for performance reasons, which is why I used the batch update feature, but it seems it also doesn't commit everything in one shot.
I want to send the data (10k rows) to the DB server only once per batch. Is there anything I should specify in the configuration to achieve this?
Below is how I am calling jdbcTemplate.batchUpdate; the batch size is 10k.
public void insertRows(...) {
    ...
    jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            ...
        }

        @Override
        public int getBatchSize() {
            if (data == null) {
                return 0;
            }
            return data.size();
        }
    });
}
Edit: I added @Transactional to the insertRows method and still see the same behavior. With @Transactional it commits after 10k rows, but when I check the count with UR (select count(*) from mytable with ur) it shows the data arriving gradually (2k, 4k, and so on up to 10k). That means the data goes to the server in chunks (probably one by one). How can I send everything in one shot? This question suggests it is achieved with rewriteBatchedStatements in MySQL; is there anything similar in DB2?
I am using the DataSource implementation com.ibm.db2.jcc.DB2BaseDataSource.
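For comparison, MySQL's rewriteBatchedStatements is enabled via the JDBC URL (e.g. jdbc:mysql://host:3306/db?rewriteBatchedStatements=true). Driver properties for the IBM JCC driver are instead set programmatically on the data source object. A minimal sketch, assuming the IBM Data Server Driver (db2jcc) jar is on the classpath; the host, port, and database names are placeholders:

```java
import javax.sql.DataSource;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class Db2DataSourceConfig {
    static DataSource db2DataSource() {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setServerName("dbhost");   // placeholder host name
        ds.setPortNumber(50000);      // common default DB2 port
        ds.setDatabaseName("MYDB");   // placeholder database name
        ds.setDriverType(4);          // type 4 = pure-Java network driver
        return ds;
    }
}
```

Any batching-related driver property would be set on this object in the same setter style; whether the JCC driver actually offers a batch-rewrite property comparable to MySQL's is exactly what this question is asking, so the IBM driver property reference should be consulted rather than assumed.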