
I need to confirm a theory. I'm learning JSP/Java.

After looking at an existing application (one I didn't write), I noticed something that I think is causing our performance problem. Or at least some of it.

It works like this:

1) user opens search page.

2) search page (by default) brings down ALL ROWS. 329,000 of them. Yes. 329K. Into an ArrayList. Each item in the ArrayList is a custom JavaBean tied to the DB table.

3) The ArrayList is then passed to a PaginateResultSet.

4) PaginateResultSet (prs) (329k rows) is then stored in a session variable: session.setAttribute("resultSet", prs);

5) Each additional "Next Page" then grabs 20 rows from the getAttribute("resultSet") list and sends them to the Ext data grid (rough sketch below).
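If I'm reading the code right, the pattern boils down to something like the sketch below. This is just my reconstruction of what the app seems to do; SearchDao, SearchResultBean, and the PaginateResultSet methods are my guesses at the real names, not the actual code:

    import java.util.List;
    import javax.servlet.http.HttpSession;

    // Rough reconstruction of the pattern described above; SearchDao,
    // SearchResultBean and the PaginateResultSet methods are guesses at
    // the real names, not the actual application code.
    public class SearchPageSketch {

        private SearchDao searchDao; // assumed DAO mapping the DB table to custom JavaBeans

        // Steps 2-4: the initial search loads EVERY row and parks it in the session.
        public void openSearchPage(HttpSession session) {
            List<SearchResultBean> allRows = searchDao.findAll();   // ~329,000 beans on the heap
            PaginateResultSet prs = new PaginateResultSet(allRows); // wraps the full list
            session.setAttribute("resultSet", prs);                 // kept alive for the whole session
        }

        // Step 5: each "Next Page" just slices 20 rows out of the in-memory copy.
        public List<SearchResultBean> nextPage(HttpSession session, int pageNumber) {
            PaginateResultSet prs = (PaginateResultSet) session.getAttribute("resultSet");
            return prs.getRows(pageNumber * 20, 20);                // hypothetical API
        }
    }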

OK, now, am I wrong in thinking that giant result set is stored in the SERVER's memory for each user's session? So if that result set takes up 20 megs of RAM, and we have 20 concurrent users, we now have 400 megs taken from the server?

Isn't it a bad idea to pass THAT much data in session attributes?

Thanks for any pointers.

cbmeeks

1 Answer


It is certainly a bad idea to duplicate the entire DB table contents into Java's memory, let alone in a user session in a multiuser environment.

You need to paginate at the DB level and store only the rows of interest in the request scope. How exactly to do this depends on the DB interface and the DB used. If you're using basic JDBC, you may find this answer useful. For core Hibernate or JPA, see this answer.
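Here's a minimal sketch of what DB-level pagination looks like with plain JDBC, assuming a MySQL-style LIMIT clause and a hypothetical Item bean with id/name columns; adapt the SQL to your own DB and table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // Hypothetical DAO: only the requested page of rows ever leaves the DB.
    public class ItemDao {

        private final DataSource dataSource;

        public ItemDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List<Item> list(int firstRow, int rowCount) throws SQLException {
            String sql = "SELECT id, name FROM item ORDER BY id LIMIT ?, ?";
            List<Item> items = new ArrayList<Item>();

            Connection connection = null;
            PreparedStatement statement = null;
            ResultSet resultSet = null;
            try {
                connection = dataSource.getConnection();
                statement = connection.prepareStatement(sql);
                statement.setInt(1, firstRow);  // offset of the first row of the page
                statement.setInt(2, rowCount);  // page size, e.g. 20
                resultSet = statement.executeQuery();
                while (resultSet.next()) {
                    Item item = new Item();
                    item.setId(resultSet.getLong("id"));
                    item.setName(resultSet.getString("name"));
                    items.add(item);
                }
            } finally {
                if (resultSet != null) try { resultSet.close(); } catch (SQLException ignore) {}
                if (statement != null) try { statement.close(); } catch (SQLException ignore) {}
                if (connection != null) try { connection.close(); } catch (SQLException ignore) {}
            }
            return items;
        }
    }

If the data layer turns out to be Hibernate, the equivalent is to let Hibernate generate the dialect-specific paging SQL for you via Query#setFirstResult() and Query#setMaxResults():

    List<Item> items = session.createQuery("from Item order by id")
            .setFirstResult(firstRow)
            .setMaxResults(rowCount)
            .list();

Either way, put the 20 returned rows in the request scope, hand them to the Ext grid, and let every page request hit the DB again instead of the session.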

BalusC
  • Thanks. That's what I think too. They are using Hibernate but I'm not sure to what extent yet. That 329k row count is growing FAST. Last week it was 270,000. So this method really starts to break down as we grow to more users. – cbmeeks Dec 07 '10 at 20:40
  • Yeah, they are definitely paginating at the SESSION level. Do you think this could cause Tomcat to stop responding as well? – cbmeeks Dec 09 '10 at 12:38
  • With that many records, certainly. Google also doesn't duplicate zillions of rows from the DB into Java memory for every unique user. It would die after only one or two visits. – BalusC Dec 09 '10 at 12:40