Re: JDBC prepared statement: a 32767 limit of arguments number

From	Sehrope Sarkuni
Subject	Re: JDBC prepared statement: a 32767 limit of arguments number
Date
Msg-id	CAH7T-apsDAkfS=n_HM1NEG+=Spn8Xqjb=iZth8f1vLTY7BrmiQ@mail.gmail.com
In reply to	Re: JDBC prepared statement: a 32767 limit of arguments number  ("David G. Johnston" <david.g.johnston@gmail.com>)
Responses	Re: JDBC prepared statement: a 32767 limit of arguments number  (Dave Cramer <davecramer@postgres.rocks>)
List	pgsql-jdbc
On Fri, Mar 11, 2022 at 6:59 AM Vladislav Malyshkin <mal@gromco.com> wrote:
Yes, I tried addBatch + executeBatch. It was very inconvenient in my specific case. I used the PostgreSQL extension all over the place:

  insert into table(c1, c2, c3, ...) values (....) RETURNING *

This way I can "get back" the inserted data. The PG extension INSERT/UPDATE/DELETE ... RETURNING *
was so convenient for: 1. checking the data, 2. autoincrements, 3. returning updated data, etc., that I started to use it with almost all insert/update/delete statements.
#executeBatch returns the number of rows changed, not the data as with RETURNING *.

You can insert in bulk _and_ get the RETURNING for auto generated values back using arrays for parameters. One array per column:

=> CREATE TABLE t (a serial, b int, c text);
CREATE TABLE

=> INSERT INTO t (b, c)
   SELECT t.b, t.c
   FROM UNNEST(
          '{100,200,300}'::int[],
          '{a,b,c}'::text[]
        ) AS t(b,c)
   RETURNING *;
 a |  b  | c
---+-----+---
 1 | 100 | a
 2 | 200 | b
 3 | 300 | c
(3 rows)

INSERT 0 3


That works fine when executed via the JDBC driver as a regular query with a result set. You can parameterize the arrays either yourself or via the built-in APIs.

The number of actual parameters to the query at the protocol level will be the number of columns, which is presumably well under the 32K limit. The maximum number of values (i.e. number of columns × number of rows you want to insert) is limited by the maximum message size of the wire protocol and the size of your serialized columns. Unless the fields are huge, you should have no issues with tens of thousands of rows.
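A sketch of what that looks like from the JDBC side, using the same table as the psql example above (the Connection is assumed to come from your own pool or DriverManager; the helper and class names are illustrative, not part of any API):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

public class UnnestInsert {

    // Transpose row-oriented data (b, c) into one Java array per column.
    static Object[][] toColumns(List<Object[]> rows) {
        Integer[] bs = new Integer[rows.size()];
        String[] cs = new String[rows.size()];
        for (int i = 0; i < rows.size(); i++) {
            bs[i] = (Integer) rows.get(i)[0];
            cs[i] = (String) rows.get(i)[1];
        }
        return new Object[][] { bs, cs };
    }

    // Bind each column array as a single parameter: two parameters total,
    // no matter how many rows are inserted, and RETURNING * comes back as
    // an ordinary result set.
    static void insertRows(Connection conn, List<Object[]> rows) throws SQLException {
        Object[][] cols = toColumns(rows);
        String sql = "INSERT INTO t (b, c) "
                   + "SELECT u.b, u.c FROM UNNEST(?::int[], ?::text[]) AS u(b, c) "
                   + "RETURNING *";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setArray(1, conn.createArrayOf("int4", cols[0]));
            ps.setArray(2, conn.createArrayOf("text", cols[1]));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("a=%d b=%d c=%s%n",
                            rs.getInt("a"), rs.getInt("b"), rs.getString("c"));
                }
            }
        }
    }
}
```

The explicit `::int[]` / `::text[]` casts keep the planner from guessing at the array element types.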

Rather than separate array parameters, you can even (ab)use JSON to pass in everything as one gigantic parameter:

=> INSERT INTO t (b,c)
   SELECT (t#>>'{0}')::int, (t#>>'{1}')::text
   FROM json_array_elements('[[100,"a"],[200,"b"],[300,"c"]]') AS t
   RETURNING *;
 a |  b  | c
---+-----+---
 4 | 100 | a
 5 | 200 | b
 6 | 300 | c
(3 rows)

INSERT 0 3
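On the JDBC side the JSON variant needs only a single setString call. A minimal sketch against the same table (the serializer is hand-rolled to stay self-contained and assumes text values that need no JSON escaping; a real application would use a JSON library, and the class/method names are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

public class JsonInsert {

    // Serialize rows of (int, text) into a JSON array of two-element
    // arrays, e.g. [[100,"a"],[200,"b"]]. No escaping is done, so this
    // assumes text values without quotes, backslashes, or control chars.
    static String toJsonRows(List<Object[]> rows) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < rows.size(); i++) {
            Object[] r = rows.get(i);
            if (i > 0) sb.append(',');
            sb.append('[').append(r[0]).append(",\"").append(r[1]).append("\"]");
        }
        return sb.append(']').toString();
    }

    // One text parameter carries every row; the server unpacks it with
    // json_array_elements, exactly as in the psql example above.
    static void insertRows(Connection conn, List<Object[]> rows) throws SQLException {
        String sql = "INSERT INTO t (b, c) "
                   + "SELECT (r#>>'{0}')::int, (r#>>'{1}')::text "
                   + "FROM json_array_elements(?::json) AS r "
                   + "RETURNING *";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, toJsonRows(rows));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("a=%d b=%d c=%s%n",
                            rs.getInt("a"), rs.getInt("b"), rs.getString("c"));
                }
            }
        }
    }
}
```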


Neither of these will be as fast as COPY, but they let you run just about any SQL while using extremely large numbers of values.

Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/
