
I'm using Sybase ASE 15.5 with the jConnect 4 JDBC driver, and I'm experiencing slow inserts with executeBatch() using a batch size of +/-40 rows: each batch of +/-40 rows takes +/-200ms. The target is a large table of 400 million rows with columns (integer, varchar(128), varchar(255)), a primary key and clustered index on columns (1,2), and a nonclustered index on columns (2,1). Is the slowness related to the size of the table? I know that dropping the indexes can improve performance, but unfortunately that is not an option. How can I improve the insertion speed?

NOTE: This is part of the application's live run, not a one-shot migration, so I won't be using the bcp tool.

EDIT: I have checked this answer for MySQL, but I'm not sure it applies to Sybase ASE: https://stackoverflow.com/a/13504946/8315843
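
Here is a simplified sketch of the insert pattern (the connection URL, credentials, table and column names are illustrative; the real ones differ):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Illustrative URL, credentials and table name; the real ones differ.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "secret")) {
            conn.setAutoCommit(false);

            String sql = "insert into big_table (id, name, description) values (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 40; i++) {   // one batch of +/-40 rows
                    ps.setInt(1, i);
                    ps.setString(2, "name-" + i);
                    ps.setString(3, "description-" + i);
                    ps.addBatch();
                }
                ps.executeBatch();               // this call takes +/-200ms
                conn.commit();
            }
        }
    }
}
```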

Sybuser
  • I'm not sure what you mean by `+/-40 rows` and `+/-200ms` (I would normally take these to mean a 80-row range, and a 400ms range ...??); or are you saying it takes 200ms to insert 40 rows? – markp-fuso Jul 16 '17 at 18:09
  • 200ms for 40 rows if I'm not mistaken. I will check that again. – Sybuser Jul 16 '17 at 18:18
  • It takes 240ms but does not insert anything. I have opened a new question https://stackoverflow.com/q/45150506/8315843 – Sybuser Jul 17 '17 at 17:48
  • The slowness was only related to this weird issue above. I gave up on this issue, reloaded a fresh dump and the issue disappeared, and insertion time dropped to 30ms. – Sybuser Jul 18 '17 at 20:27

1 Answer


There are many reasons why the inserts could be slow, e.g.:

  • each insert statement has to be parsed/compiled; the ASE 15.x optimizer attempts to do a lot more work than the previous ASE 11/12 optimizer, with the net result that compiles (generally) take longer to perform
  • the batch is not wrapped in a single transaction, so each insert has to wait for a separate write to the log to complete
  • you've got a slow network connection between the client host and the dataserver host
  • there's some blocking going on
  • the table has FK constraints that need to be checked for each insert
  • there's an insert trigger on the table (with the obvious questions of what the trigger is doing and how long it takes to perform its operations)

Some ideas to consider re: speeding up the inserts:

  • use prepared statements; the first insert is compiled into a lightweight procedure (think 'temp procedure'); follow-on inserts (using the prepared statement) benefit from not having to be compiled
  • make sure a batch of inserts is wrapped in a begin/commit tran wrapper; this tends to defer the log write(s) until the commit tran is issued; fewer writes to the log means less time waiting for the log write to be acknowledged
  • if you have a (relatively) slow network connection between the application and dataserver hosts, look at using a larger packet size; fewer packets means less time waiting for round-trip packet processing/waiting
  • look into whether/how jdbc supports the bulk-copy libraries (basically implementing bcp-like behavior via jdbc); I don't work with jdbc, so I'm only guessing this might be available (see the rough sketch after this list)
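
A rough, untested sketch of what the prepared-statement/transaction/connection-level ideas might look like with jConnect (I don't work with jdbc, so verify the property names DYNAMIC_PREPARE, PACKETSIZE and ENABLE_BULK_LOAD, and their allowed values, against the jConnect documentation; the URL, credentials and table name are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TunedConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "user");
        props.put("password", "secret");
        // jConnect connection properties (names/values to be verified for your driver version):
        props.put("DYNAMIC_PREPARE", "true");   // compile prepared statements as lightweight procs
        props.put("PACKETSIZE", "2048");        // request a larger network packet size
        props.put("ENABLE_BULK_LOAD", "true");  // bcp-like inserts, if the driver/table allow it

        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:dbhost:5000/mydb", props)) {
            conn.setAutoCommit(false);          // wrap the whole batch in one transaction

            // ... PreparedStatement addBatch()/executeBatch() loop as in the question ...

            conn.commit();                      // one log-write wait at commit time, not per insert
        }
    }
}
```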

Some of the above is covered in these SO threads:

Getting ExecuteBatch to execute faster

JDBC Delete & Insert using batch

Efficient way to do batch INSERTS with JDBC

markp-fuso
  • Each batch of +/-40 rows is part of a transaction so it's +/-40 addBatch() + 1 commit – Sybuser Jul 16 '17 at 18:12
  • Already using prepared statements. No foreign keys, no triggers, no blocking queries. – Sybuser Jul 16 '17 at 18:22
  • Are you using the `ENABLE_BULK_LOAD` option in the connection? Look at this [example](https://stackoverflow.com/questions/2112395/sybase-jconnect-enable-bulk-load-usage) – Adam Leszczyński Jul 17 '17 at 16:55