From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Erik Rijkers <er(at)xs4all(dot)nl>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: logrep stuck with 'ERROR: int2vector has too many elements'
Date: 2023-01-15 20:17:16
Message-ID: 1688010.1673813836@sss.pgh.pa.us
Lists: pgsql-hackers
Andres Freund <andres(at)anarazel(dot)de> writes:
> On 2023-01-15 14:39:41 -0500, Tom Lane wrote:
>> But I suppose we are stuck with that, seeing that this datatype choice
>> is effectively part of the logrep protocol now. I think the only
>> reasonable solution is to get rid of the FUNC_MAX_ARGS restriction
>> in int2vectorin. We probably ought to back-patch that as far as
>> pg_publication_rel.prattrs exists, too.
> Are you thinking of introducing another, or just "rely" on too long arrays to
> trigger errors when forming tuples?
There are enough protections already; e.g., repalloc will complain if you
try to go past 1GB. I'm thinking of the attached for HEAD (it'll
take minor mods to back-patch).
regards, tom lane
Attachment: remove-int2vector-limit.patch (text/x-diff, 3.5 KB)