
I am using PreparedStatement batch update/insert to store data into the database, but it is taking 3 minutes to persist 1000 entries into a SQL Server database. I am wondering if there is a way to improve the performance. Please check my code below:

SQL UPDATE STATEMENT

String sqlUpdate = "UPDATE details set a=? , b=? , c=? , d=?, e=? where f=?";
updatePstmt = conn.prepareStatement(sqlUpdate);

public static void updateCustomerDetailByBatch(HashMap<String, String []> updateCustDetails ) {

  final int BATCH_SIZE=1000;
  int batchCtr = 1;

  try {
     conn.setAutoCommit(false);
     MAFLogger.info("Number of Customer details to be updated: "+ updateCustDetails.size());

     for (Map.Entry<String, String []> custEntry: updateCustDetails.entrySet()) {

        String x = custEntry.getValue()[0];
        String y = custEntry.getValue()[1];
        String z = custEntry.getKey();
        String a = custEntry.getValue()[2];
        String b = custEntry.getValue()[3];
        String c = custEntry.getValue()[4];

        updatePstmt.setString(1, x);
        updatePstmt.setString(2, y);
        updatePstmt.setString(3, z);
        updatePstmt.setString(4, a);
        updatePstmt.setString(5, b);
        updatePstmt.setString(6, c);

        updatePstmt.addBatch();

        if (batchCtr % BATCH_SIZE == 0) {
           MAFLogger.debug("Batch Ctr is :  " + batchCtr+ " Updated Batch ");
           updatePstmt.executeBatch();
        }

        batchCtr++;
     }

     if (batchCtr % BATCH_SIZE != 0 ) {

        MAFLogger.debug("Execute remaining batch update statement contents:  "+ batchCtr);
        updatePstmt.executeBatch();

     }

     conn.setAutoCommit(true);

  } catch (SQLException sqlE) {

     MAFLogger.error("Batch update statement problem : " + sqlE);

  }

}
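Two side effects of starting `batchCtr` at 1 and incrementing it after the modulo check are worth noting: when the map holds an exact multiple of BATCH_SIZE entries, the trailing executeBatch() runs on an empty batch; and when it holds one fewer (e.g. 999), `batchCtr` ends at 1000, the trailing flush is skipped, and the final partial batch is never executed at all. Also, with auto-commit off, nothing is committed until setAutoCommit(true) at the very end. A sketch of a tightened version (class and method names are mine, not from the original; the pure helper at the bottom just counts how many executeBatch calls the loop issues):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public class BatchUpdateSketch {

    static final int BATCH_SIZE = 1000;

    // Hypothetical refactoring: count rows *after* addBatch, flush and
    // commit each full batch, and flush the tail only if a partial
    // batch actually remains.
    public static void updateByBatch(Connection conn,
                                     PreparedStatement updatePstmt,
                                     Map<String, String[]> updates) throws SQLException {
        conn.setAutoCommit(false);
        int added = 0;
        for (Map.Entry<String, String[]> e : updates.entrySet()) {
            String[] v = e.getValue();
            updatePstmt.setString(1, v[0]);
            updatePstmt.setString(2, v[1]);
            updatePstmt.setString(3, e.getKey());
            updatePstmt.setString(4, v[2]);
            updatePstmt.setString(5, v[3]);
            updatePstmt.setString(6, v[4]);
            updatePstmt.addBatch();
            added++;                          // count rows actually batched
            if (added % BATCH_SIZE == 0) {
                updatePstmt.executeBatch();
                conn.commit();                // commit each full batch explicitly
            }
        }
        if (added % BATCH_SIZE != 0) {        // only a genuine partial batch
            updatePstmt.executeBatch();
            conn.commit();
        }
        conn.setAutoCommit(true);
    }

    // Pure helper: number of executeBatch calls the loop above issues for
    // n rows. The original code issues an extra empty executeBatch when n
    // is an exact multiple of BATCH_SIZE, and skips the final flush when
    // n % BATCH_SIZE == BATCH_SIZE - 1.
    public static int flushCount(int n, int batchSize) {
        return n / batchSize + (n % batchSize == 0 ? 0 : 1);
    }
}
```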

I have read different articles and answers here on SO, such as link1, link2 and link3, but there is no change. I would appreciate it if you could help out.

I am using the Microsoft JDBC Driver downloadable from their website ("sqljdbc_4.1.5605.100_enu.tar.gz").

Table Index

index_name          index_description                               index_keys
PK_CC_Details_TEMP  clustered, unique, primary key located on PRIMARY   f
dimas
  • As I know, executeBatch divides the execution automatically, so you don't need to divide it programmatically. At least jdbcTemplate divides automatically – Gurkan İlleez Dec 16 '16 at 18:04
  • I was following this example http://stackoverflow.com/questions/6892105/bulk-insert-in-java-using-prepared-statements-batch-update – dimas Dec 16 '16 at 18:08
  • Maybe you can change the batch size but it can be experimental only – Gurkan İlleez Dec 16 '16 at 18:15
  • Are you using Microsoft's JDBC driver? If so, it has its own bulk insert class that would probably be faster than individual statements within a JDBC "batch". – Gord Thompson Dec 16 '16 at 18:18
  • Yes I am using Microsoft's JDBC driver. I have updated my question regarding the driver I used – dimas Dec 16 '16 at 18:19
  • Have you checked the indexes on the table to ensure that you're not doing a full table scan for each UPDATE? – Gord Thompson Dec 16 '16 at 19:27
  • can you please provide pointers on how to do that? I am not a DBA but I know the basic – dimas Dec 16 '16 at 19:30
  • `sp_helpindex` will probably give you enough information. See [here](http://stackoverflow.com/a/10345469/2144390) for details. – Gord Thompson Dec 16 '16 at 19:40
  • Specifically, column `f` should be indexed because that's the predicate used for selecting the correct row. Also, if you have triggers in the table, these might slow down each update. – Mick Mnemonic Dec 16 '16 at 19:50
  • i could see the indexes but am not sure how this can help. – dimas Dec 16 '16 at 19:56
  • hi Gord/Mick, I have updated my question showing index for details table. Can you please check if this ok? – dimas Dec 16 '16 at 20:10
  • That index should avoid table scans. Questions in addition to the comment from @MickMnemonic regarding triggers: For the string values that you are updating, are any of them particularly large? Do you have a slow network connection to the SQL Server? Is the server under heavy load? You seem to be using a reasonable approach but ~5.6 updates per second is rather sluggish performance. – Gord Thompson Dec 16 '16 at 20:18
  • Nope I don't have any triggers and network "speed" is sufficient. As for the String values they are small ranging from length 15 to 50. Could it be the driver am using? I also saw this link https://msdn.microsoft.com/en-us/library/mt221490(v=sql.110).aspx. I am not sure if this is can be used for Bulk Update. – dimas Dec 16 '16 at 20:24
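One driver setting worth ruling out, given the discussion above (index on `f` present, yet only ~5.6 updates per second): the Microsoft JDBC driver sends setString parameters as NVARCHAR by default (`sendStringParametersAsUnicode` defaults to `true`). If `f` is a VARCHAR column, the implicit type conversion can prevent an index seek and force a scan on every UPDATE. A sketch of disabling it via the connection string (host, port, database name and credentials are placeholders, not from the original question):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class UnicodeParamHint {

    // With sendStringParametersAsUnicode=false, setString sends VARCHAR
    // instead of NVARCHAR, so a VARCHAR index on column f can be used
    // for a seek rather than a scan.
    public static final String URL =
        "jdbc:sqlserver://localhost:1433;databaseName=mydb;"
        + "sendStringParametersAsUnicode=false";

    public static Connection open(String user, String password) throws SQLException {
        return DriverManager.getConnection(URL, user, password);
    }
}
```

If `f` is in fact NVARCHAR, this property should stay at its default; the symptom can be confirmed either way by checking the actual execution plan for an index seek versus a scan.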

0 Answers