From: Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp>
To: Daniil Davydov <3danissimo(at)gmail(dot)com>
Cc: Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>, Alexander Lakhin <exclusion(at)gmail(dot)com>, jian he <jian(dot)universality(at)gmail(dot)com>
Subject: Re: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION
Date: 2025-07-03 14:18:12
Message-ID: 20250703231812.b8cd0c2830fe742abce2e2b1@sraoss.co.jp
Lists: pgsql-hackers
On Tue, 1 Jul 2025 18:56:11 +0700
Daniil Davydov <3danissimo(at)gmail(dot)com> wrote:
> Hi,
>
> On Tue, Jul 1, 2025 at 5:47 PM Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp> wrote:
> >
> > On Mon, 30 Jun 2025 18:32:47 +0700
> > Daniil Davydov <3danissimo(at)gmail(dot)com> wrote:
> >
> > > On Mon, Jun 30, 2025 at 3:47 PM Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp> wrote:
> > > >
> > > > I agree with Tom Lane that the behavior is expected, although it would be better
> > > > if the error message were more user-friendly. We could improve it by checking the
> > > > unique constraint before calling index_insert in CatalogIndexInsert.
> > > >
> > >
> > > As far as I understand, unique constraint checking is specific to
> > > each index access method.
> > > Thus, to implement the proposed idea, you will have to create a
> > > separate callback for the check_unique function.
> > > It doesn't seem like a very neat solution, but there aren't many other
> > > options left.
> >
> > I believe check_exclusion_or_unique_constraint() can be used independently of
> > a specific index access method.
> >
> > > I would suggest intercepting the error (via PG_CATCH), and if it has
> > > the ERRCODE_UNIQUE_VIOLATION code, change the error message (more
> > > precisely, throw another error with the desired message).
> > > If we caught an error during the CatalogTupleInsert call, we can be
> > > sure that the problem is in concurrent execution, because before the
> > > insertion, we checked that such a tuple does not exist.
> > >
> > > What do you think? And in general, are you going to fix this behavior
> > > within this thread?
> >
> > Initially, I wasn't planning to do so, but I gave it a try and wrote a
> > patch to fix the issue based on my idea.
>
> Thanks for the patch! Some comments on it:
> 1)
> I found two typos:
> + if (HeapTupleIsHeapOenly(heapTuple) && !onlySummarized)
> and
> + sartisfied = check_unique_constraint(heapRelation,
Thank you for pointing them out. Fixed.
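For context, both corrected lines sit in CatalogIndexInsert(): the HOT-update
early exit, and a pre-check of the unique constraint before index_insert().
A minimal sketch of that path, assuming check_unique_constraint is the helper
the patch introduces on top of check_exclusion_or_unique_constraint() (it is
not an existing server API):

/* A HOT update does not require new index entries. */
if (HeapTupleIsHeapOnly(heapTuple) && !onlySummarized)
    return;

/*
 * Later, inside the loop over the catalog's indexes: verify the unique
 * constraint ourselves before index_insert(), so that a violation can be
 * reported with a friendlier message than the one raised by the btree
 * internals.
 */
bool        satisfied;

satisfied = check_unique_constraint(heapRelation,
                                    relationDescs[i],
                                    indexInfo,
                                    &(heapTuple->t_self),
                                    values,
                                    isnull);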
> 2)
> CatalogIndexInsert is kind of a "popular" function. It can be called in
> different situations, not in all of which a violation of a unique
> constraint means an error due to concurrency.
>
> For example, with this patch such a query: "CREATE TYPE mood AS ENUM
> ('happy', 'sad', 'happy');"
> will throw this error: "operation failed due to a concurrent command".
> Of course, it isn't true.
You're right — this error is not caused by a concurrent command.
However, I believe the error message in cases like creating an ENUM type with
duplicate labels could be improved to explain the issue more clearly, rather
than just reporting it as a unique constraint violation.
In any case, a unique constraint violation in a system catalog is not necessarily
due to concurrent DDL. Therefore, the error message shouldn't suggest that as the
only cause. Instead, it should clearly report the constraint violation as the primary
issue, and mention concurrent DDL as just one possible explanation in a HINT.
I've updated the patch accordingly to reflect this direction in the error message.
ERROR: operation failed due to duplicate key object
DETAIL: Key (proname, proargtypes, pronamespace)=(fnc, , 2200) already exists in unique index pg_proc_proname_args_nsp_index.
HINT: Another command might have created an object with the same key in a concurrent session.
However, as a result, the message ends up being similar to the current one raised
by the btree code, so the overall improvement in user-friendliness might be limited.
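The report itself is an ordinary ereport() call; roughly like the following
sketch, where the variable names are illustrative and key_desc stands for the
output of BuildIndexValueDescription():

/* Sketch only: report a catalog unique-key violation, pointing at
 * concurrent DDL as one possible cause in the HINT. */
char       *key_desc = BuildIndexValueDescription(indexRelation,
                                                  values, isnull);

ereport(ERROR,
        (errcode(ERRCODE_UNIQUE_VIOLATION),
         errmsg("operation failed due to duplicate key object"),
         key_desc ? errdetail("Key %s already exists in unique index \"%s\".",
                              key_desc,
                              RelationGetRelationName(indexRelation)) : 0,
         errhint("Another command might have created an object with the same key in a concurrent session.")));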
> That is why I suggested handling unique violations exactly inside
> ProcedureCreate - the only place where we can be sure about reasons of
> error.
If we were to fix the error message outside of CatalogIndexInsert, we would need to
modify CatalogTupleInsert, CatalogTupleUpdate, and related functions to allow them to
report the failure appropriately.
You suggested using PG_TRY/PG_CATCH, but these do not suppress the error message from
the btree code, so this approach does not seem to fully address the issue (see the
sketch below).
Moreover, the places affected are not limited to ProcedureCreate; for example,
concurrent CREATE TABLE commands can also lead to the same situation, and possibly
other commands as well. Therefore, I think it would be sufficient if the
improved message in CatalogIndexInsert makes sense on its own.
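For reference, the interception discussed above would look roughly like this
sketch (placed in ProcedureCreate; the replacement message is illustrative,
and the report from the btree code is still constructed first, before the
catch block runs):

/* Sketch of the PG_TRY/PG_CATCH interception around the catalog insert. */
MemoryContext oldcontext = CurrentMemoryContext;

PG_TRY();
{
    CatalogTupleInsert(rel, tup);
}
PG_CATCH();
{
    ErrorData  *edata;

    /* CopyErrorData() must run in a context other than ErrorContext */
    MemoryContextSwitchTo(oldcontext);
    edata = CopyErrorData();

    if (edata->sqlerrcode == ERRCODE_UNIQUE_VIOLATION)
    {
        /* discard the original report and raise a friendlier one */
        FlushErrorState();
        ereport(ERROR,
                (errcode(ERRCODE_DUPLICATE_FUNCTION),
                 errmsg("function \"%s\" was concurrently created by another session",
                        procedureName)));
    }

    /* not a unique violation: propagate the original error unchanged */
    FreeErrorData(edata);
    PG_RE_THROW();
}
PG_END_TRY();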
Regards,
Yugo Nagata
--
Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp>
Attachments:
  v7-0004-Improve-error-reporting-for-unique-key-violations.patch (text/x-diff, 4.0 KB)
  v7-0003-Improve-error-reporting-for-concurrent-updates-on.patch (text/x-diff, 4.0 KB)
  v7-0002-Prevent-internal-error-caused-by-concurrent-ALTER.patch (text/x-diff, 2.2 KB)
  v7-0001-Prevent-internal-error-at-concurrent-CREATE-OR-RE.patch (text/x-diff, 3.3 KB)