Workarounds for getBinaryStream returning ByteArrayInputStream on bytea
| From | Александър Шопов |
|---|---|
| Subject | Workarounds for getBinaryStream returning ByteArrayInputStream on bytea |
| Date | |
| Msg-id | 1290631846.3659.9.camel@dalgonosko |
| Responses | Re: Workarounds for getBinaryStream returning ByteArrayInputStream on bytea; Improved JDBC driver part 2 |
| List | pgsql-jdbc |
Hi everyone,

I have a table containing file contents in bytea columns. The functionality I am trying to achieve is to take a result set containing such columns, iterate over it, and stream the values while zipping them. The problem is that ResultSet.getBinaryStream returns a ByteArrayInputStream, so iterating over many rows, each containing more than 10 MB of data, exhausts the heap. At peak times I will have several such processes running. I am using postgresql-8.4-702.jdbc3.jar against a PostgreSQL 8.4.5 installation.

I looked at the current source of the driver. Jdbc3ResultSet extends AbstractJdbc3ResultSet, which extends AbstractJdbc2ResultSet; the latter provides the implementation of getBinaryStream, which returns a ByteArrayInputStream for bytea columns and a BlobInputStream for blob columns. On skimming the code, BlobInputStream does indeed appear to stream the bytes instead of reading them all into memory (reads are done in 4 KB chunks).

So what are my options? Refactor the DB schema to use blobs rather than bytea? Or is it impossible to have bytea read in chunks?

Kind regards:
al_shopov
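One workaround sometimes used for this situation (not from the original post, just a sketch): instead of fetching the whole bytea value through getBinaryStream, fetch it slice by slice with PostgreSQL's `substring(bytea FROM start FOR length)` function, so only one chunk is ever in the heap at a time. The table and column names here (`files`, `id`, `content`) are hypothetical placeholders for the schema described above.

```java
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: stream one bytea column value in fixed-size chunks by issuing
// repeated substring() queries, so the full value never has to fit in
// the JVM heap at once. Assumes a hypothetical table files(id, content).
public class ByteaChunkStreamer {
    static final int CHUNK_SIZE = 64 * 1024; // 64 KiB per round trip

    // Copy the bytea value for the given row id to `out`, chunk by chunk.
    static void streamBytea(Connection conn, long rowId, OutputStream out)
            throws Exception {
        // substring() on bytea uses 1-based offsets in PostgreSQL
        String sql =
            "SELECT substring(content FROM ? FOR ?) FROM files WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            long offset = 1;
            while (true) {
                ps.setLong(1, offset);
                ps.setInt(2, CHUNK_SIZE);
                ps.setLong(3, rowId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        break; // no such row
                    }
                    byte[] chunk = rs.getBytes(1);
                    if (chunk == null || chunk.length == 0) {
                        break; // past the end of the value
                    }
                    out.write(chunk);
                    if (chunk.length < CHUNK_SIZE) {
                        break; // short read: this was the last chunk
                    }
                    offset += chunk.length;
                }
            }
        }
    }
}
```

The cost is one round trip per chunk, so this trades latency for bounded memory; the alternative raised in the post, switching the schema to large objects, gets genuine streaming from BlobInputStream at the price of the large-object API's different semantics (separate storage, manual cleanup).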