Thread: Java Out-of-memory errors on attempts to read tables with millions of rows

Java Out-of-memory errors on attempts to read tables with millions of rows

From: Rich Cullingford
Date:
Greetings,
We have several tables (in a PG 7.3.3 database on RH Linux 7.3) with 2M+
rows (each row 300-400 bytes in length) that we SELECT into a JDBC
ResultSet for display to the user. We expected that the driver would not
actually transmit data from the database until the application began
issuing getXXX() calls. (IIRC, this is the way the Oracle driver works,
and we had created a buffering mechanism to use it.) Instead, the driver
appears to be attempting to create the whole rowset in Java memory
before returning, and the application runs out of memory. (Java has been
configured to use up to 1.5G on the machine this occurs on.)
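(For concreteness, the access pattern boils down to something like the sketch below; the connection details and the table/column names are placeholders, not our real schema:)

import java.sql.*;

// Sketch of the access pattern only; all names are placeholders.
public class BigTableRead {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        Statement st = conn.createStatement();

        // We expected rows to be streamed lazily from here on, but the driver
        // appears to build the entire result set in Java memory before
        // executeQuery() returns, which is where the OutOfMemoryError hits.
        ResultSet rs = st.executeQuery("SELECT * FROM big_table"); // 2M+ rows

        while (rs.next()) {
            String value = rs.getString("some_col"); // hand off to the display buffer
        }

        rs.close();
        st.close();
        conn.close();
    }
}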

Now the SELECT is preceded by a COUNT of the rows that the same query
would return, so perhaps that's what's causing the problem. But the
question is, is this the way a ResultSet is supposed to work? Are there
any configuration options available that modify this behavior? Are there
commercial implementations of PG JDBC that don't have this problem?
(Shame on me, but I have to ask. :)

Any help will be greatly appreciated!

                                 Rich Cullingford
                                 rculling@sysd.com


Re: Java Out-of-memory errors on attempts to read tables with millions of rows

From: Evil Azrael
Date:
I think you want to use a Cursor for browsing the data.
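Something along these lines, perhaps (an untested sketch; the connection details and the table, column, and cursor names are placeholders):

import java.sql.*;

// Sketch: stream a large result set through an explicit server-side cursor
// instead of pulling everything into one ResultSet. All names are placeholders.
public class CursorRead {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        con.setAutoCommit(false);   // a cursor only lives inside a transaction

        Statement st = con.createStatement();
        st.execute("DECLARE big_cur CURSOR FOR SELECT * FROM big_table");

        while (true) {
            // Pull rows in modest batches; only one batch is in Java memory at a time.
            ResultSet rs = st.executeQuery("FETCH FORWARD 1000 FROM big_cur");
            if (!rs.next()) {
                rs.close();
                break;              // cursor exhausted
            }
            do {
                String value = rs.getString("some_col"); // hand the row to the display code
            } while (rs.next());
            rs.close();
        }

        st.execute("CLOSE big_cur");
        st.close();
        con.commit();
        con.close();
    }
}

I believe more recent versions of the JDBC driver can also do this for you behind the scenes: with autocommit switched off and a non-zero setFetchSize() on the Statement, rows are fetched in batches instead of the whole result set being built in memory at once.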

Christoph Nelles


On Monday, July 14, 2003 at 18:53 you wrote:

RC> Greetings,
RC> We have several tables (in a PG 7.3.3 database on RH Linux 7.3) with 2M+
RC> rows (each row 300-400 bytes in length) that we SELECT into a JDBC
RC> ResultSet for display to the user. We expected that the driver would not
RC> actually transmit data from the database until the application began
RC> issuing getXXX() calls. (IIRC, this is the way the Oracle driver works,
RC> and we had created a buffering mechanism to use it.) Instead, the driver
RC> appears to be attempting to create the whole rowset in Java memory
RC> before returning, and the application runs out of memory. (Java has been
RC> configured to use up to 1.5G on the machine this occurs on.)

RC> Now the SELECT is preceded by a COUNT of the rows that the same query
RC> would return, so perhaps that's what's causing the problem. But the
RC> question is, is this the way a ResultSet is supposed to work? Are there
RC> any configuration options available that modify this behavior? Are there
RC> commercial implementations of PG JDBC that don't have this problem?
RC> (Shame on me, but I have to ask. :)

RC> Any help will be greatly appreciated!

RC>                                  Rich Cullingford
RC>                                  rculling@sysd.com





--
Kind regards
Evil Azrael                            mailto:evilazrael@evilazrael.de