
I am using the latest SQL Server JDBC driver (sqljdbc42) available for official download. The source SQL Server databases can vary from 2008 to 2016.

I flipped through several existing threads about setFetchSize(...) and it seems:

  1. If there are 1000 rows and the fetch size is 100, the result set holds only 100 records in memory at a time and makes 10 network trips in total, fetching the next 100 records as rs.next() is called
  2. It's unclear whether the SQL Server JDBC driver honors the fetch size

I am writing a custom library for our internal use. It selects all the data from specific tables, iterates over the results and writes them to a stream. I get an OutOfMemoryError when I run it across several tables (each with thousands to a few hundred thousand rows), but not when I iterate over a single table with several thousand rows. I suspect a large table somewhere is causing the issue.
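
To give an idea, the relevant part of the library looks roughly like this; "someTable" and writeRowToStream(...) are placeholders for the real logic, and the Connection comes from the custom pool:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Simplified sketch of the iteration; the table name and
// writeRowToStream(...) stand in for the real logic.
void dumpTable(Connection con) throws SQLException {
    try (Statement stmt = con.createStatement(
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
        // Hint to the driver: keep ~100 rows in memory at a time.
        stmt.setFetchSize(100);
        try (ResultSet rs = stmt.executeQuery("SELECT * FROM someTable")) {
            while (rs.next()) {
                writeRowToStream(rs); // placeholder for the streaming logic
            }
        }
    }
}
```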

While I continue to debug my code, I wish to know whether setFetchSize(...) really works with the latest SQL Server JDBC driver.

Note: the tables do NOT have any incremental columns/surrogate keys on which I could paginate manually; I need something out-of-the-box.

*****Edit-1*****

As @a_horse_with_no_name pointed out in a comment, I think 'responseBuffering=adaptive' should be specified explicitly, and the documentation describes several ways to do so. The challenge is:

  1. I receive a custom DB connection pool instance as an argument and have to use it to obtain a Connection object, so there is no way for me to specify 'responseBuffering=adaptive' in the JDBC URL or elsewhere. All I get is a Connection object to work with
  2. The only relevant(?) method I can see is setClientInfo(...), but I am unsure whether that would help (see the sketch after this list)
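
From the adaptive buffering documentation, it appears the setting can also be applied per statement via the driver-specific SQLServerStatement.setResponseBuffering(...). Whether the Statement handed out by the pool can actually be unwrapped to the vendor class is an assumption on my part; a minimal sketch of what I am considering:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import com.microsoft.sqlserver.jdbc.SQLServerStatement;

// Sketch: try to reach the vendor statement through the pool's
// wrapper and request adaptive buffering per statement. That the
// pooled Statement supports unwrap(...) here is an assumption.
Statement createAdaptiveStatement(Connection con) throws SQLException {
    Statement stmt = con.createStatement();
    if (stmt.isWrapperFor(SQLServerStatement.class)) {
        stmt.unwrap(SQLServerStatement.class)
            .setResponseBuffering("adaptive");
    }
    return stmt;
}
```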