
I need to repeatedly read data from a database in my Java code, so I adopted JDBC. However, it seems to me that using JDBC takes up a lot of memory. I tried to be careful about closing the objects created for JDBC (ResultSet, Statement), but it still seems to hog a lot of memory, especially compared to reading input from a text file. Does anybody know the best way to reduce memory consumption? Thanks.

flyingfromchina
    Even if JDBC were a memory hog... what choice do you have? It's the main Java database library, and all the DB abstraction layers use it. – Powerlord Nov 13 '09 at 16:06
  • How much is "a lot", how much data do you process and what do you do with the data you get back from queries ? – nos Nov 13 '09 at 17:58

8 Answers


JDBC does not eat up very much memory for itself. It caches several pieces of metadata, but the vast majority of memory is usually taken up by the query results. The idea of JDBC is to provide a standard interface for accessing your data; it is not much of an implementation in its own right. That's why you need JDBC drivers that implement the specifics for each database product.

That JDBC is not resource intensive is supported by the fact that you can run it even on mobile devices with Java ME, a very resource-limited environment.

So while you might appreciate the easier data handling of frameworks like Hibernate or JPA, you should not worry about JDBC itself consuming any amount of resources worth mentioning in the context of the overall application.

As they say: Don't try to optimize things that are not a problem :)

Daniel Schneller

JDBC is not a memory hog. The data it returns can be a huge memory hog.

One common problem with JDBC is that result sets, which can contain huge amounts of data (and are not always memory optimized), are not handled or closed correctly. To prevent memory leaks from ResultSet objects lying around, the developer must take careful steps to make sure the memory is released before moving on. (Since Java handles most memory clean-up automatically, this is a blind spot for many developers, so it isn't surprising.)

Consider using this code. It uses the "try/catch try/finally" pattern to definitively close the result set:

try {
    Statement stmt = conn.createStatement();
    try {
        ResultSet rs = stmt.executeQuery("some sql");
        try {
            // ResultSet processing goes here
        } finally {
            rs.close(); // closed first, even if processing throws
        }
    } finally {
        stmt.close(); // closed next, even if rs.close() throws
    }
} catch (SQLException ex) {
    // exception processing for any problems
}

This guarantees that the result set is closed - even if an exception is thrown.
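
For what it's worth, on Java 7 and later the same guarantee can be written more compactly with try-with-resources, which closes each resource automatically, in reverse order, even when an exception is thrown mid-processing. A minimal sketch, assuming an open Connection named conn and the same placeholder query:

try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("some sql")) {
    // ResultSet processing goes here
} catch (SQLException ex) {
    // exception processing for any problems
}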

Jonathan B
    @BalusC - I can see why you might say that. A vanilla design has the finally{} at the end, after the catch. This design solves a few problems: not checking for nulls and avoiding the "missed close()" trap for a second exception. First, all of the close methods ALSO throw SQLException, this centralizes the catch block for them. The "missed close()" is tricker - if you use a single finally that also has a try/catch in it, and that method has more than one close(), any exception skips some of the close() statements. This may not be perfect code, so better ideas are always welcome! – Jonathan B Nov 14 '09 at 00:36
  • Can anyone point to a more bullet-proof approach? With all this nested try/catch, I always get the impression that I'm missing some combination that might leave resources open... – opensas Aug 21 '11 at 11:18

It is very unlikely that 'JDBC' is a memory hog. JDBC is just a pipe. The amount of memory used will depend on what database you are talking to and, more importantly, what sort of queries you are running, and whether you are using cursors correctly.
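
For example, most drivers let you hint at incremental retrieval through Statement.setFetchSize. A sketch, assuming an open Connection named conn; whether and how the hint is honored depends on the driver:

Statement stmt = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(100); // hint: fetch ~100 rows per round trip instead of everything
ResultSet rs = stmt.executeQuery("some sql");
while (rs.next()) {
    // process one row at a time instead of materializing the whole result
}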

You might look at Hibernate for help in managing memory usage when working with large amounts of data.

bmargulies
  • Can you explain how hibernate would use less memory than raw JDBC? Level 2 cache? – Sam Barnum Nov 13 '09 at 15:57
  • @Sam Barnum: See the section Fetching Strategies on this page: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html – Powerlord Nov 13 '09 at 16:04
  • That would be the first time that I hear someone advocating Hibernate over JDBC to improve memory footprints... Since Hibernate builds upon JDBC, it can only perform as well as JDBC itself. Or am I missing something? – Lukas Eder Apr 03 '12 at 10:46
  • The OP is probably fetching some gargantuan ResultSet all at once, instead of using cursors or other incremental techniques. Hibernate is one library that embodies strategies for incremental retrieval. @LukasEder. It can use JDBC sensibly, as could the OP. – bmargulies Apr 03 '12 at 19:10

If you ever encounter this problem then it's probably your own code which is a memory hog. This can happen if you haul the complete database contents into Java's memory. That's actually not JDBC's fault.

To reduce memory consumption, it is best to fire SQL queries that are as specific as possible. Query only the data you actually need, nothing else.
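
For instance, select only the columns and rows you need instead of pulling whole tables. A sketch with hypothetical table and column names (and note the LIMIT syntax varies per database):

String sql = "SELECT name, price FROM product WHERE category = ? LIMIT 100";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setString(1, "books"); // bind only the category you're interested in
ResultSet rs = ps.executeQuery(); // a small, specific result set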

If you really need all the data of a table, then JDBC is probably the wrong tool. Decent DB servers ship with import/export tools that are better suited for that. Give those a look first.

BalusC

I have to agree with what others have said here: JDBC itself doesn't have a large memory footprint. However, if you're concerned about a particular JDBC operation using a lot of memory, you could use JConsole or a profiler to see how much heap is used while that method executes.
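
As a crude in-code alternative, you can snapshot the used heap before and after the operation. A sketch; the numbers Runtime reports are approximate and sensitive to GC timing, and runTheJdbcOperation is a placeholder for the code under suspicion:

Runtime rt = Runtime.getRuntime();
rt.gc(); // request a GC so the baseline is less noisy (only a hint to the JVM)
long before = rt.totalMemory() - rt.freeMemory();

runTheJdbcOperation(); // placeholder for the suspect JDBC call

long after = rt.totalMemory() - rt.freeMemory();
System.out.printf("Approximate heap delta: %,d bytes%n", after - before);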

Jason Gritman

I would like to add an important point here, after running into memory problems myself:

I was reading large datasets from a MySQL database (about 600,000 rows of 20 columns) and kept running out of heap space. I thought I could fix it by changing the fetch size, but setting the fetch size on the PreparedStatement did nothing.

What I discovered was that the MySQL JDBC driver I was using (version 5.1.15) doesn't implement fetch sizes. In fact, for every MySQL query it loads the entire ResultSet into memory. However, if you set the fetch size to Integer.MIN_VALUE, the driver will load one row at a time from the MySQL server... with one caveat: you cannot execute any other statements on the connection until the ResultSet is closed.

It's documented here:

http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
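
Putting that together, a sketch of the streaming setup from those notes (the forward-only, read-only statement is part of the recipe; big_table is a placeholder):

Statement stmt = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE); // tells Connector/J to stream row by row
ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
// Caveat from above: run no other statements on this connection until rs is closed.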

That being said, if you have control over your MySQL server, you can set the 'useCursorFetch' property to true and the server will return 'defaultFetchSize' rows at a time instead of all of them. This works as of MySQL 5.0.2. The developer claims it's an experimental hack though, so be warned: http://forums.mysql.com/read.php?39,137457,137457#msg-137457

Anyway, the moral of the story here is to check the JDBC driver you're using for any peculiarities.

homebrew

It's more a factor of the data you are loading than of the JDBC library. Since JDBC calls can involve a lot of data, I'd check that you don't have a lot of objects sitting around in a collection that isn't being released, surviving long enough to reach the old generation of the heap.
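
For example, process each row as it is read instead of accumulating everything into a long-lived collection first. A sketch; process is a placeholder for your per-row work:

while (rs.next()) {
    process(rs.getString("name")); // consume and release each row immediately
}

// ...as opposed to collecting everything first, which keeps every row reachable:
// List<String> names = new ArrayList<>();
// while (rs.next()) { names.add(rs.getString("name")); }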

Kelly S. French

One case where JDBC can use a lot of memory is when you run queries that return many rows from the database and your JDBC driver does not support scrolling result sets. This causes all rows to be retrieved from the server and potentially loaded into RAM at once.

The solution is to split your queries into smaller batches, or enable scrolling result sets if possible.
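
A sketch of the batching idea using keyset pagination on a numeric primary key (table and column names are placeholders, and the LIMIT syntax varies per database):

long lastId = 0;
final int batchSize = 1000;
PreparedStatement ps = conn.prepareStatement(
        "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT " + batchSize);
int rowsInBatch;
do {
    ps.setLong(1, lastId);
    rowsInBatch = 0;
    ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        lastId = rs.getLong("id"); // remember where this batch ended
        // process rs.getString("payload") here
        rowsInBatch++;
    }
    rs.close();
} while (rowsInBatch == batchSize); // a short batch means we've read everything
ps.close();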

Sam Barnum