Re: JDBC prepared statement: a 32767 limit of arguments number

From: Dave Cramer <davecramer(at)postgres(dot)rocks>
To: Sehrope Sarkuni <sehrope(at)jackdb(dot)com>
Cc: Vladislav Malyshkin <mal(at)gromco(dot)com>, Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-jdbc(at)lists(dot)postgresql(dot)org" <pgsql-jdbc(at)lists(dot)postgresql(dot)org>
Subject: Re: JDBC prepared statement: a 32767 limit of arguments number
Date: 2022-03-11 13:54:59
Message-ID: CADK3HH+oEcsian6BEx3FEV3MhOmRXktVRPq4fAR8zgPj61YFKA@mail.gmail.com
Lists: pgsql-jdbc

Hi Sehrope,

On Fri, 11 Mar 2022 at 08:47, Sehrope Sarkuni <sehrope(at)jackdb(dot)com> wrote:

> On Fri, Mar 11, 2022 at 6:59 AM Vladislav Malyshkin <mal(at)gromco(dot)com>
> wrote:
>
>> Yes, I tried addBatch + executeBatch. It was very inconvenient in my
>> specific case. I used the PostgreSQL extension
>>
>> insert into table(c1, c2, c3, ...) values (....) RETURNING *
>>
>> all over the place; this way I can "get back" the data just inserted.
>> The PG extension INSERT/UPDATE/DELETE ... RETURNING * was so convenient
>> for 1. checking the data, 2. autoincrements, 3. returning updated data,
>> etc., that I started to use it with almost all insert/update/delete
>> statements. #executeBatch returns the number of rows changed, not the
>> data as RETURNING * does.
>>
>
> You can insert in bulk _and_ get the RETURNING values for auto-generated
> columns back by using arrays for parameters. One array per column:
>
> => CREATE TABLE t (a serial, b int, c text);
> CREATE TABLE
>
> => INSERT INTO t (b, c)
> SELECT t.b, t.c
> FROM UNNEST(
> '{100,200,300}'::int[],
> '{a,b,c}'::text[]
> ) AS t(b,c)
> RETURNING *;
> a | b | c
> ---+-----+---
> 1 | 100 | a
> 2 | 200 | b
> 3 | 300 | c
> (3 rows)
>
> INSERT 0 3
>
> That works fine when executed via the JDBC driver as a regular query with
> a result set. You can parameterize the arrays either yourself or via the
> built-in APIs.
>
> The number of actual parameters to the query at the protocol level will be
> the number of columns, which presumably is less than the 32K limit. The
> maximum number of values (i.e. number of columns X number of rows you want
> to insert) is limited by the maximum message size of the wire protocol and
> the size of your serialized columns. Unless the fields are huge, you
> should have no issues with 10,000s of rows though.
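[Editor's note: a minimal JDBC sketch of the array-per-column approach described above. The table t and its columns come from the example in this message; the class and method names are illustrative, and a live Connection would be needed to actually execute the statement.]

```java
import java.sql.*;

public class ArrayParamInsert {
    // One array parameter per column, unnested into rows server-side.
    static final String SQL =
        "INSERT INTO t (b, c) "
      + "SELECT u.b, u.c "
      + "FROM UNNEST(?::int[], ?::text[]) AS u(b, c) "
      + "RETURNING *";

    // Binds both column arrays and returns the number of RETURNING rows read.
    static int insert(Connection conn, Integer[] bs, String[] cs) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setArray(1, conn.createArrayOf("int4", bs));
            ps.setArray(2, conn.createArrayOf("text", cs));
            int n = 0;
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) n++;  // each row carries generated a plus b, c
            }
            return n;
        }
    }

    public static void main(String[] args) {
        System.out.println(SQL);  // shown here; execution needs a Connection
    }
}
```

Only two protocol-level parameters are sent regardless of how many rows the arrays hold, which is what sidesteps the 32767 limit.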
>
> Rather than separate array parameters, you can even (ab)use JSON to pass
> in everything as one gigantic parameter:
>
> => INSERT INTO t (b,c)
> SELECT (t#>>'{0}')::int, (t#>>'{1}')::text
> FROM json_array_elements('[[100,"a"],[200,"b"],[300,"c"]]') AS t
> RETURNING *;
> a | b | c
> ---+-----+---
> 4 | 100 | a
> 5 | 200 | b
> 6 | 300 | c
> (3 rows)
>
> INSERT 0 3
>
> Neither of these will be as fast as COPY, but they do allow you to run
> just about any SQL and use extremely large numbers of parameters.
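[Editor's note: a sketch of the single-JSON-parameter variant above. The table t is the one created earlier in the thread; the payload-builder helper is hypothetical (naive escaping, illustration only), not part of the driver.]

```java
import java.sql.*;

public class JsonParamInsert {
    // The whole row set travels as one json parameter.
    static final String SQL =
        "INSERT INTO t (b, c) "
      + "SELECT (e#>>'{0}')::int, (e#>>'{1}')::text "
      + "FROM json_array_elements(?::json) AS e "
      + "RETURNING *";

    // Builds the [[b,"c"],...] payload; hypothetical helper with naive escaping.
    static String buildPayload(int[] bs, String[] cs) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < bs.length; i++) {
            if (i > 0) sb.append(',');
            sb.append('[').append(bs[i]).append(",\"")
              .append(cs[i].replace("\"", "\\\"")).append("\"]");
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        String payload = buildPayload(new int[]{100, 200, 300},
                                      new String[]{"a", "b", "c"});
        System.out.println(payload);
        // With a live Connection conn:
        //   PreparedStatement ps = conn.prepareStatement(SQL);
        //   ps.setString(1, payload);
        //   ResultSet rs = ps.executeQuery();  // RETURNING rows come back here
    }
}
```

In practice a JSON library would build the payload; the point is that it binds as a single setString parameter.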
>

I'd be willing to bet it's pretty close to COPY. Do we have any numbers?

Dave Cramer
www.postgres.rocks
