| From: | Mihail Nikalayeu <mihailnikalayeu(at)gmail(dot)com> |
|---|---|
| To: | Andres Freund <andres(at)anarazel(dot)de> |
| Cc: | Antonin Houska <ah(at)cybertec(dot)at>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Srinath Reddy Sadipiralla <srinath2133(at)gmail(dot)com>, Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Treat <rob(at)xzilla(dot)net> |
| Subject: | Re: Adding REPACK [concurrently] |
| Date: | 2026-04-14 16:55:17 |
| Message-ID: | CADzfLwU8Qw6LXFHO7Tbjc-O7o+tM26jdnOJBWqYLu61rf7bO+g@mail.gmail.com |
| Lists: | pgsql-hackers |
Hello!
On Tue, Apr 14, 2026 at 3:58 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> I still think this needs to be in the deadlock detector. The lock cycle just
> needs to be a bit more complicated for a hack in JoinWaitQueue not to work.
> There's no guarantee that the wait that triggers the deadlock is actually on
> the relation being repacked.
I have started prototyping a way to declare a "future" lock that the
deadlock detector treats as a hard edge.
But I am currently stuck on the fact that acquiring SHARE UPDATE
EXCLUSIVE (SUE) does not force other backends' weak (fast-path) locks
to go through FastPathTransferRelationLocks, so the deadlock detector
cannot handle the case where another backend executes LOCK TABLE
repack_test IN SHARE UPDATE EXCLUSIVE MODE;
VACUUM takes the same lock, so it hits the same problem.
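Concretely, the interleaving I mean looks roughly like this (a sketch
only; the exact REPACK syntax and the placement of the "future" lock
follow my prototype, so details may differ):

```sql
-- Session A: REPACK CONCURRENTLY holds SHARE UPDATE EXCLUSIVE on
-- repack_test and has declared a "future" stronger lock for the swap.
REPACK CONCURRENTLY repack_test;

-- Session B, concurrently:
BEGIN;
SELECT count(*) FROM repack_test;  -- weak AccessShareLock, taken fast-path
LOCK TABLE repack_test IN SHARE UPDATE EXCLUSIVE MODE;  -- blocks behind A

-- A's future lock now waits on B's AccessShareLock while B waits on A's
-- SUE lock: a cycle, but B's fast-path lock never went through
-- FastPathTransferRelationLocks, so the deadlock detector cannot see it.
```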
I'm not sure how to deal with this in a non-hacky way. One option is
to force SUE acquisition to transfer fast-path locks when the relation
it is trying to lock is marked with a "future lock". But I am not sure
that is good enough or covers all the tricky cases (multiple backends
in the cycle).
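To sketch that option in pseudocode (RelationHasFutureLock is a
hypothetical helper, not something that exists today;
FastPathTransferRelationLocks is the existing lock.c routine):

```
/* Pseudocode, inside LockAcquireExtended() -- not actual patch code */
if (lockmode == ShareUpdateExclusiveLock &&
    RelationHasFutureLock(locktag))          /* hypothetical check */
{
    /*
     * Treat SUE like a strong lock for this relation: move every other
     * backend's fast-path locks on it into the shared lock table, so
     * the deadlock detector can include them in the wait graph.
     */
    FastPathTransferRelationLocks(lockMethodTable, locktag, hashcode);
}
```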
Best regards,
Mikhail.