Re: Out of memory error on huge resultset

From: Barry Lind
Subject: Re: Out of memory error on huge resultset
Date:
Msg-id: 3DA7029D.5080001@xythos.com
In response to: Out of memory error on huge resultset  ("Nick Fankhauser" <nickf@ontko.com>)
Responses: Re: Out of memory error on huge resultset  (Doug Fields <dfields-postgres@pexicom.com>)
List: pgsql-jdbc

Doug Fields wrote:

>
> It reads the entire result set from the database backend and caches it
> in a horrible Vector (which should really be a List and which should at
> least make an attempt to get the # of rows ahead of time to avoid all
> the resizing problems).
>

The problem here is that we would then need two completely different
implementations for jdbc1 and jdbc2/3, since List is not part of jdk1.1.
We could build our own List implementation that works on jdk1.1, but I
am not sure the gain in performance is worth it.  If you could do some
testing and come back with some numbers on the performance difference
between ResultSets implemented with Vectors and Lists, that would give
us enough information to gauge how to proceed on this suggested
improvement.
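To get such numbers, a micro-benchmark along these lines could be used. This is a modern-Java sketch only: generics and ArrayList did not exist on jdk1.1 (which is the whole constraint above), so the class names and row shape here are purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class ListBench {
    // Append n dummy "rows" to the given collection and return elapsed millis.
    static long timeAppends(List<Object[]> target, int n) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            target.add(new Object[] { Integer.valueOf(i), "row" });
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        int rows = 1_000_000;
        // Vector grows by resizing and its methods are synchronized;
        // the ArrayList here is pre-sized to avoid resizing entirely.
        long vectorMs = timeAppends(new Vector<Object[]>(), rows);
        long listMs   = timeAppends(new ArrayList<Object[]>(rows), rows);
        System.out.println("Vector:    " + vectorMs + " ms");
        System.out.println("ArrayList: " + listMs + " ms");
    }
}
```

Comparing a plain Vector against a pre-sized list isolates the two costs mentioned above: synchronization overhead and repeated resizing.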

> Then, it doles it out from memory as you go through the ResultSet with
> the next() method.
>
> I would have hoped (but was wrong) that it streamed - WITHOUT LOADING
> THE WHOLE THING - through the result set as each row is returned from
> the backend, thus ensuring that you never use much more memory than one
> line. EVEN IF you have to keep the connection locked.
>

This had actually been tried in the past (just getting the records from
the server connection as requested), but this behavior violates the spec
and broke many people's applications.  The problem is that if you don't
use cursors, you end up tying up the connection until you finish
fetching all rows.  So code like the following no longer works:

ResultSet rs = stmt.executeQuery("SELECT ...");
while (rs.next()) {
   // get some values from the result
   // use them to update/insert some other table
   // with a PreparedStatement on the same connection
}

Since the connection is locked until all the results are fetched, you
can't use the connection to perform the update/insert you want to do for
each iteration of the loop.
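The cursor-based approach avoids this by fetching in batches, so only one batch is ever in memory and the loop body can still write somewhere else. A rough sketch (hypothetical table, column, and cursor names; a second connection for the inner writes is assumed so the cursor's connection is never borrowed mid-fetch):

```java
import java.sql.*;

public class CursorLoop {
    // Build the FETCH statement text for a named cursor.
    static String fetchSql(int batch, String cursor) {
        return "FETCH " + batch + " FROM " + cursor;
    }

    // Stream a large SELECT through an explicit cursor, writing each row
    // to another table via a PreparedStatement on a second connection.
    static void copyRows(Connection readConn, Connection writeConn)
            throws SQLException {
        readConn.setAutoCommit(false); // cursors only live inside a transaction
        Statement st = readConn.createStatement();
        st.execute("DECLARE big_cur CURSOR FOR SELECT id, val FROM big_table");
        PreparedStatement ps = writeConn.prepareStatement(
            "INSERT INTO other_table (id, val) VALUES (?, ?)");
        boolean more = true;
        while (more) {
            ResultSet rs = st.executeQuery(fetchSql(100, "big_cur"));
            more = false;
            while (rs.next()) {       // at most one 100-row batch in memory
                more = true;
                ps.setInt(1, rs.getInt(1));
                ps.setString(2, rs.getString(2));
                ps.executeUpdate();   // runs on writeConn; readConn stays free
            }
            rs.close();
        }
        st.execute("CLOSE big_cur");
        readConn.commit();
    }
}
```

This is exactly the rewriting Doug describes having to do by hand for every large SELECT.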

> The latter is what I expected it to do. The former is what it does. So,
> it necessitates you creating EVERY SELECT query which you think has more
> than a few rows (or which you think COULD have more than a few rows,
> "few" being defined by our VM memory limits) into a cursor based query.
> Really klugy. I intend to write a class to do that for every SELECT
> query for me automatically.
>
> Cheers,
>
> Doug
>

--Barry


