Обсуждение: JDBC: ERROR: portal "C_2" does not exist
Greetings,

When I do a resultSet.next(), I am getting an exception:

    ERROR: portal "C_2" does not exist

The facts:

64-bit Linux
JDK 1.8.0_322
postgresql-42.3.3.jar
psql (PostgreSQL) 14.2

conn.setAutoCommit(false);
PreparedStatement pstat = conn.prepareStatement(sql);
pstat.setFetchSize(50);

I do a query that will have thousands of records. I get the first 50 just fine. When I call rset.next() the 51st time, I get the exception:

    ERROR: portal "C_2" does not exist

There is no C_2 anywhere in my code or query.

Sure appreciate any help!

Thanks!

Blake McBride
When auto commit is disabled and a non-zero fetch size is set, the pgjdbc driver creates a portal to read the query results from the server in chunks of the fetch size. Portals have string identifiers that the driver generates sequentially per connection with a "C_" prefix: "C_1", "C_2", "C_3", and so on.

As you read results via ResultSet.next(), the driver checks whether it already has the next row buffered. If not, it reads from the named portal to retrieve the next chunk of fetchSize rows.

A fetch size of 50 and the error happening at row 51 likely means that something either closed the portal or the statement is no longer talking to the connection that created it.
Are you using a connection pool like pgbouncer in between your Java application and the database server?
If so, the connection pooler must be transaction aware to ensure that the same connection is used throughout the entire transaction.
If not, try creating a minimal example that reproduces the error.
Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/
Hi Blake
On Fri, 11 Mar 2022 at 14:04, Sehrope Sarkuni <sehrope@jackdb.com> wrote:
> Are you using a connection pool like pgbouncer in between your Java application and the database server?
> If so, the connection pooler must be transaction aware to ensure that the same connection is used throughout the entire transaction.
Actually, the recommendation for pgbouncer (and I presume other poolers) is to turn off prepared statements.

The proper way to do that for JDBC is to add the prepareThreshold=0 parameter to the connection string.
Dave
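For reference, here is one way the prepareThreshold=0 parameter might be attached to a pgjdbc URL. This is a sketch; the buildUrl helper and the host/database names are illustrative, not something from the thread:

```java
// Sketch: composing a pgjdbc connection URL with prepareThreshold=0,
// which stops the driver from promoting repeated queries to named
// server-side prepared statements -- the thing a transaction-pooling
// pgbouncer cannot track across backend connections.
public class Example {
    public static String buildUrl(String host, int port, String database) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + database
                + "?prepareThreshold=0";
    }

    public static void main(String[] args) {
        // e.g. jdbc:postgresql://localhost:5432/mydb?prepareThreshold=0
        System.out.println(buildUrl("localhost", 5432, "mydb"));
    }
}
```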
Thank you for the info, Sehrope!
I checked; I am not closing anything. However, I am using C3P0. Perhaps that is configured wrong; I'll look there. Just knowing it should work is a big help.

It is hard for me to give a sample because it is part of a larger system and I am using utilities in that system rather than raw JDBC. However, I am the author of those utilities, so it's not a black box. (They're at https://github.com/blakemcbride/Kiss)
Thank you!
Blake
>Just knowing it should work is a big help.
That is true.
The error <<portal "C_2" does not exist>> should not happen in the regular app flow.
The symptom reminds me of "do not close refcursor after reading if fetchSize has been set",
https://github.com/pgjdbc/pgjdbc/pull/2371 (v42.3.2+),
so I wonder if the issue reproduces with v42.3.1.
We can't heal wrong load-balancer configurations at the driver level;
however, we could make the error message easier to understand and diagnose.
I've filed https://github.com/pgjdbc/pgjdbc/issues/2468 for enhancing the user-facing message.
Yet another option to diagnose the issue is to capture trace logs from the driver.
Vladimir
I have narrowed the problem down. In pseudo code, here is what I am trying to do (in my sample program):

conn.setAutoCommit(false)
PreparedStatement pstat = conn.prepareStatement(sql)
pstat.setFetchSize(50)
select all records in a large file
for each record:
    read one field from the record
    through a second statement, add a record to a different table - XXX
    execute the second statement - YYY
    commit the change - ZZZ
loop

So, if I am just spinning through the records (not doing XXX, YYY, and ZZZ), it works. But I want to add records to a second table. I need the commit because I don't want thousands of new records in one commit. And, in terms of the reads, I want to read records 50 at a time - not all at once.

It seems I can do one or the other but not both - or can I?

Thanks!

Blake
On Fri, Mar 11, 2022 at 8:47 PM Blake McBride <blake1024@gmail.com> wrote:
> conn.setAutoCommit(false)
> PreparedStatement pstat = conn.prepareStatement(sql)
> pstat.setFetchSize(50)
> select all records in a large file
> for each record:
>     read one field from the record
>     through a second statement, add a record to a different table - XXX
>     execute the second statement - YYY
>     commit the change - ZZZ
> loop
>
> It seems I can do one or the other but not both - or can I?
Correct. What you are trying to do requires that you open two sessions to the database (conn_read, conn_write) and execute the INSERT command on the conn_write connection while looping through data on the conn_read one.
David J.
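The two-session approach David describes might look like the following in outline. This is a sketch only; the URL, credentials, and the table/column names (big_table, other_table, some_field, col) are placeholders, not from the thread:

```java
import java.sql.*;

public class TwoConnectionCopy {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/mydb"; // placeholder
        try (Connection connRead = DriverManager.getConnection(url, "user", "pw");
             Connection connWrite = DriverManager.getConnection(url, "user", "pw")) {

            connRead.setAutoCommit(false); // required for cursor-based fetching
            connWrite.setAutoCommit(false);

            try (PreparedStatement read = connRead.prepareStatement(
                         "SELECT some_field FROM big_table");
                 PreparedStatement write = connWrite.prepareStatement(
                         "INSERT INTO other_table (col) VALUES (?)")) {
                read.setFetchSize(50); // stream 50 rows at a time via a named portal
                try (ResultSet rs = read.executeQuery()) {
                    while (rs.next()) {
                        write.setString(1, rs.getString(1));
                        write.executeUpdate();
                        connWrite.commit(); // closes portals on connWrite only;
                                            // connRead's portal stays open
                    }
                }
            }
            connRead.commit(); // end the read transaction
        }
    }
}
```

The reason this works: a COMMIT closes all open portals in its own session, which is exactly what the mid-loop commit was doing to the read portal on a single connection. Committing on a second connection leaves the reading session's portal untouched.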
Thanks, David.
I found a way around the problem (without needing another connection) by locally caching and closing result sets thus no result set is ever pending when another takes place. It intelligently caches the data in-memory or in a temporary file. The code is at https://github.com/blakemcbride/Kiss/tree/master/src/main/core/org/kissweb/database if anyone is interested. The actual caching occurs in Cursor.cacheAllRecords() and Cursor.nextCachedRecord().
Thanks!
Blake McBride
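A simplified sketch of this cache-then-write idea on a single connection (in-memory only; Blake's actual Cursor code can also spill to a temporary file, and the table/column names here are placeholders):

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class CacheThenWrite {
    static void copyKeys(Connection conn) throws SQLException {
        conn.setAutoCommit(false);

        // Phase 1: drain the result set completely, then close it.
        // Once nothing is pending server-side, there is no open portal
        // for a later commit to invalidate.
        List<String> keys = new ArrayList<>();
        try (PreparedStatement read = conn.prepareStatement(
                "SELECT some_field FROM big_table")) {
            read.setFetchSize(50); // still streams 50 rows at a time from the server
            try (ResultSet rs = read.executeQuery()) {
                while (rs.next()) {
                    keys.add(rs.getString(1));
                }
            }
        }

        // Phase 2: insert with periodic commits on the same connection.
        try (PreparedStatement write = conn.prepareStatement(
                "INSERT INTO other_table (col) VALUES (?)")) {
            for (String key : keys) {
                write.setString(1, key);
                write.executeUpdate();
                conn.commit(); // safe: no open portal on this connection
            }
        }
    }
}
```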
Not sure if anyone is interested; however, I found and corrected a bug, related to binary data, in the caching routines described below.
--blake
On Sun, Mar 13, 2022 at 2:39 PM Blake McBride <blake1024@gmail.com> wrote:
Thanks, David.
I found a way around the problem (without needing another connection) by locally caching and closing result sets, thus no result set is ever pending when another takes place. It intelligently caches the data in-memory or in a temporary file. The code is at https://github.com/blakemcbride/Kiss/tree/master/src/main/core/org/kissweb/database if anyone is interested. The actual caching occurs in Cursor.cacheAllRecords() and Cursor.nextCachedRecord().

Thanks!

Blake McBride