Re: JDBC and processing large numbers of rows
| From | Dave Cramer |
|---|---|
| Subject | Re: JDBC and processing large numbers of rows |
| Date | |
| Msg-id | 1084359414.1536.149.camel@localhost.localdomain |
| In response to | Re: JDBC and processing large numbers of rows (Guido Fiala <guido.fiala@dka-gmbh.de>) |
| List | pgsql-jdbc |
Guido,

No, that isn't the case: if you use cursors inside a transaction, you can keep an arbitrarily large cursor open (of any size, AFAIK).

--dc--

On Wed, 2004-05-12 at 02:37, Guido Fiala wrote:
> Reading all this, I'd like to know whether all this isn't just a tradeoff
> between _where_ the memory is consumed?
>
> If your JDBC client holds everything in memory, it gets an OutOfMem exception.
>
> If your backend uses cursors, it caches the whole result set, probably
> starts swapping, and gets slow (it needs the memory of all users).
>
> If you use LIMIT and OFFSET, the database has to do more work to find the
> data snippet, and in the worst case (the last few records) it may still
> need the whole result set temporarily? (not sure here)
>
> Is that just a "choose your poison"? At least in the first case the memory
> of the client gets used too, rather than putting all the load on the
> backend; on the other hand, most of the time the user does not really read
> all the data anyway, so it puts unnecessary load on all the hardware.
>
> Really like to know what the best way to go is then...
>
> Guido

--
Dave Cramer
519 939 0336
ICQ # 14675561
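[Editor's note: for readers wondering how to use the cursor-based approach Dave describes, here is a minimal sketch with the PostgreSQL JDBC driver. The driver only fetches through a backend cursor when autocommit is off and a positive fetch size is set; the connection URL, credentials, and table name below are placeholders, not from the original thread.]

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for illustration only.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");

        // Cursor-based fetching requires a transaction: with autocommit
        // off, the driver declares a cursor on the backend instead of
        // materializing the whole result set in client memory.
        conn.setAutoCommit(false);

        Statement stmt = conn.createStatement();
        // Fetch rows from the cursor in batches of 50; only one batch
        // is held in client memory at a time.
        stmt.setFetchSize(50);

        ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // Process each row here; memory use stays bounded no matter
            // how many rows the query returns.
        }
        rs.close();
        stmt.close();
        conn.commit();
        conn.close();
    }
}
```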