Re: Improve error message for duplicate labels in enum types

From: Rahila Syed <rahilasyed90(at)gmail(dot)com>
To: Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp>
Cc: Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Improve error message for duplicate labels in enum types
Date: 2025-07-04 02:12:58
Message-ID: CAH2L28vQ24HsL4WcNys1FtHSVBMRbTnFDhbbmM9mvZPz4tawjQ@mail.gmail.com
Lists: pgsql-hackers

Hi Yugo,

> Currently, when creating an enum type, duplicate labels are caught by a
> unique
> index on pg_enum, resulting in a generic error message.
>
> postgres=# create type t as enum ('a','b','a');
> ERROR: duplicate key value violates unique constraint
> "pg_enum_typid_label_index"
> DETAIL: Key (enumtypid, enumlabel)=(16418, a) already exists.
>
> I propose adding an explicit check for duplicate labels during enum
> creation,
> so that a more user-friendly and descriptive error message can be produced,
> similar to what is already done in ALTER TYPE ... ADD VALUE
> or ALTER TYPE ... RENAME VALUE .. TO ....
>
> With the attached patch applied, the error message becomes:
>
> ERROR: label "a" used more than once
>

Thank you for sharing the patch.
+1 to the idea of improving the error message.

Please consider the following points.

1. I wonder whether there might be a more efficient way to handle this.
The current approach adds an extra loop to check for duplicates, on top of
the existing unique-index check, even when no duplicates are present. Would
it be possible to achieve the same result by wrapping the following insert
call in a PG_TRY()/PG_CATCH() block and reporting the more descriptive
error from the PG_CATCH() block?

CatalogTuplesMultiInsertWithInfo(pg_enum, slot, slotCount, indstate);
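
For illustration, point 1 might look roughly like the sketch below. This is
not compilable as-is: it assumes the surrounding context of
EnumValuesCreate() in pg_enum.c and PostgreSQL's error-handling macros, and
the memory-context switching normally needed around CopyErrorData() is
elided for brevity.

```c
/* Sketch only: assumes the pg_enum relation is open, the slots are
 * populated, and indstate has been built, as in EnumValuesCreate(). */
PG_TRY();
{
    CatalogTuplesMultiInsertWithInfo(pg_enum, slot, slotCount, indstate);
}
PG_CATCH();
{
    ErrorData  *edata = CopyErrorData();    /* needs a non-error context */

    if (edata->sqlerrcode == ERRCODE_UNIQUE_VIOLATION)
    {
        FlushErrorState();
        ereport(ERROR,
                (errcode(ERRCODE_DUPLICATE_OBJECT),
                 errmsg("enum label used more than once")));
    }
    PG_RE_THROW();
}
PG_END_TRY();
```

One caveat with this approach is that the unique-violation error does not
readily tell us which label was duplicated, so the message may end up less
specific than the explicit check in your patch.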

2. If we choose to keep the check from your 0001 patch, would it make more
sense to place it earlier in the function, before assigning OIDs to the
labels and running qsort? That way we could catch duplicates sooner and
avoid unnecessary processing.

Thank you,
Rahila Syed
