Re: Adding REPACK [concurrently]

From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Antonin Houska <ah(at)cybertec(dot)at>
Cc: Mihail Nikalayeu <mihailnikalayeu(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Treat <rob(at)xzilla(dot)net>
Subject: Re: Adding REPACK [concurrently]
Date: 2026-02-25 16:25:54
Message-ID: 202602251618.dpgvox64vziz@alvherre.pgsql
Lists: pgsql-hackers

On 2026-Feb-25, Antonin Houska wrote:

> > Hmm, so on the apply side when reading the file, we would first reach
> > each toast attribute value, which we know to insert directly into the
> > toast table (keeping track of each individual toast pointer as we do
> > so); then, when we reach the heap tuple itself, we [... somehow ...]
> > interpret these external indirect toast pointers and substitute the
> > toast pointers that we created. So we never have to construct the
> > entire tuple, or indeed do anything else with the toasted values other
> > than insert them into the toast table.
>
> Yes, that's what I mean.

Makes sense. Would you be able to try and implement that?

> The problem I see here is that for an UPDATE you need the old tuple to
> determine whether its TOAST value should be deleted or whether the new
> tuple should reuse it - this is how I understand toast_tuple_init(). So
> the worker would have to store all the changes somewhere temporarily
> until it can fully apply them (i.e. until the initial copy and index
> build are complete).

Ah, you're right, that won't work.

--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
Tom: There seems to be something broken here.
Teodor: I'm in sackcloth and ashes... Fixed.
http://postgr.es/m/482D1632.8010507@sigaev.ru
