JDBC and processing large numbers of rows
From | David Wall |
---|---|
Subject | JDBC and processing large numbers of rows |
Date | |
Msg-id | 024101c4376d$d567d050$3201a8c0@rasta |
In reply to | Can't insert more than 80 registers!? (Carlos Barroso <miguel.barroso@mail.pt>) |
Replies | Re: JDBC and processing large numbers of rows |
List | pgsql-jdbc |
This is more towards JDBC than PG in particular, but since we support PG, Oracle, and eventually other DBMSs via JDBC, I thought I'd see if anybody has good ideas on how to do the following in a somewhat portable way via JDBC (by "large," I'm talking about thousands to millions of rows -- significant amounts of data).

1) Often people need to "page through" a large result set. Some DBs support the LIMIT/OFFSET construct in SELECT, and while not super efficient, it at least returns only the window of the result set that is of interest. How do most people handle such paging through data? I've heard of people caching the entire result set, but that's not practical because of the memory overhead and, on the web, not even knowing when to get rid of the result set. Do people use work tables or other schemes to "pre-paginate" the results and store them in a temporary table?

2) How do people process many records in a large result set? For example, scanning through a list of rows, doing some processing based on those rows, and then perhaps updating the rows to indicate that they've been processed.

3) How do people use cursors in JDBC? Being able to FETCH seems like a nice way to handle question #2 above in a batch program, since only a subset of rows needs to be retrieved from the database at a time. Cursors probably don't work for question #1, since keeping a transaction alive across page views is generally frowned upon and hard to accomplish anyway, since it means tying up a database connection for each paging user.

Any pointers would be appreciated.

Thanks,
David
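For question #1, a minimal LIMIT/OFFSET paging sketch in plain JDBC. The `items` table and its `id`/`name` columns are hypothetical; the key points are binding the page window as parameters and keeping a stable ORDER BY so pages stay consistent between requests:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class Pager {
    // Translate a page number into a row offset: page 0 starts at row 0,
    // page 1 at row pageSize, and so on.
    static int offsetFor(int pageSize, int pageNumber) {
        return pageNumber * pageSize;
    }

    // Fetch one page of rows with LIMIT/OFFSET. Only the requested window
    // crosses the wire, so no server-side state outlives the request.
    static List<String> fetchPage(Connection conn, int pageSize, int pageNumber)
            throws SQLException {
        String sql = "SELECT name FROM items ORDER BY id LIMIT ? OFFSET ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, pageSize);
            ps.setInt(2, offsetFor(pageSize, pageNumber));
            List<String> page = new ArrayList<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    page.add(rs.getString(1));
                }
            }
            return page;
        }
    }
}
```

Note that OFFSET still makes the server skip over all the preceding rows, so deep pages get progressively slower; it only avoids shipping and buffering them in the client.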
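For question #2, one common pattern is to stream the result set with `Statement.setFetchSize()` and queue the "mark as processed" updates as a JDBC batch. With the PostgreSQL driver, the fetch size is only honored inside a transaction (autocommit off); the `work_queue` table and its columns below are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchProcessor {
    // Number of driver round trips needed to stream `total` rows at
    // `fetchSize` rows per fetch (ceiling division).
    static int roundTrips(int total, int fetchSize) {
        return (total + fetchSize - 1) / fetchSize;
    }

    // Scan a large result set without materializing it all in memory,
    // then mark each row as processed via a batched UPDATE.
    static void processAll(Connection conn) throws SQLException {
        conn.setAutoCommit(false); // PG driver streams only inside a transaction
        try (Statement st = conn.createStatement();
             PreparedStatement mark = conn.prepareStatement(
                     "UPDATE work_queue SET processed = true WHERE id = ?")) {
            st.setFetchSize(500); // pull 500 rows per round trip, not the whole set
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, payload FROM work_queue WHERE NOT processed")) {
                while (rs.next()) {
                    handle(rs.getString(2));      // application-specific work
                    mark.setLong(1, rs.getLong(1));
                    mark.addBatch();              // queue the update
                }
            }
            mark.executeBatch();                  // flush all updates at once
            conn.commit();
        }
    }

    static void handle(String payload) { /* hypothetical processing */ }
}
```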
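And for question #3, an explicit server-side cursor can be driven from JDBC by issuing DECLARE/FETCH/CLOSE as ordinary statements. This is PostgreSQL-flavored rather than portable, and again the `work_queue` table is hypothetical; the cursor only lives for the duration of the transaction, which is why it fits batch jobs better than web paging:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorScan {
    // Build the FETCH command for a named cursor.
    static String fetchCommand(int batchSize, String cursorName) {
        return "FETCH " + batchSize + " FROM " + cursorName;
    }

    // Walk a large table via an explicit server-side cursor,
    // batchSize rows per round trip.
    static void scan(Connection conn, int batchSize) throws SQLException {
        conn.setAutoCommit(false); // cursors require an open transaction
        try (Statement st = conn.createStatement()) {
            st.execute("DECLARE batch_cur CURSOR FOR SELECT id FROM work_queue");
            boolean gotRows = true;
            while (gotRows) {
                gotRows = false;
                try (ResultSet rs =
                         st.executeQuery(fetchCommand(batchSize, "batch_cur"))) {
                    while (rs.next()) {
                        gotRows = true;
                        // process rs.getLong(1) here
                    }
                }
            }
            st.execute("CLOSE batch_cur");
            conn.commit();
        }
    }
}
```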