| From: | Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> |
|---|---|
| To: | David Klika <david(dot)klika(at)atlas(dot)cz> |
| Cc: | ah(at)cybertec(dot)at, jian(dot)universality(at)gmail(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, mihailnikalayeu(at)gmail(dot)com, rob(at)xzilla(dot)net |
| Subject: | Re: Adding REPACK [concurrently] |
| Date: | 2025-12-04 15:43:44 |
| Message-ID: | 202512041531.4yz4szwnfsqk@alvherre.pgsql |
| Lists: | pgsql-hackers |
Hello David,
Thanks for your interest in this.
On 2025-Dec-04, David Klika wrote:
> Let's consider a large table where 80% of the blocks are fine (filled
> enough with live tuples). The table could be scanned from the beginning
> (left side) to identify insufficiently filled blocks, and also from the
> end (right side) to process live tuples by moving them into the blocks
> identified by the left-side scan. The work is over when both scans
> reach the same position.
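For concreteness, a toy model of that two-ended scan could look like
this (a sketch in C, with pages reduced to fill fractions; the names,
threshold and page count are invented for illustration, and
page-capacity limits are ignored):

```c
#include <stdio.h>

#define NPAGES    10
#define THRESHOLD 0.5			/* pages fuller than this are left alone */

int
main(void)
{
	/* fraction of each page occupied by live tuples */
	double		fill[NPAGES] = {0.9, 0.2, 0.8, 0.1, 0.9,
								0.7, 0.3, 0.8, 0.6, 0.4};
	int			left = 0,
				right = NPAGES - 1;

	while (left < right)
	{
		if (fill[left] > THRESHOLD)
		{
			left++;				/* page is fine, skip it */
			continue;
		}
		/* "move" the live tuples from the tail page into the
		 * under-filled page; the tail page can then be truncated */
		printf("move contents of page %d into page %d\n", right, left);
		fill[left] += fill[right];
		fill[right] = 0.0;
		right--;
	}
	return 0;
}
```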
If you only have a small number of pages that have this problem, then
you don't actually need to do anything -- the pages will be marked free
by regular vacuuming, and future inserts or updates can make use of
those pages. It's not a problem to have a small number of pages in an
empty state for some time.
So if you're trying to do this, the number of problematic pages must be
large.
Now, the issue with what you propose is that you need to make exactly
one of the old tuple or its new copy visible to each concurrent
transaction. If at any point both are visible, or neither is, then you
have potentially corrupted the results obtained by a query that is
halfway through scanning the table.
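To illustrate that invariant with a toy model (this is not PostgreSQL's
actual visibility machinery; the xids and the visibility rule here are
simplified inventions): each snapshot must see exactly one of the two
copies, which only works out if deleting the old copy and inserting the
new one happen under the same transaction.

```c
#include <stdio.h>
#include <stdbool.h>

typedef unsigned int Xid;

typedef struct
{
	Xid			xmin;			/* transaction that created this version */
	Xid			xmax;			/* transaction that deleted it, 0 if live */
} TupleVersion;

/* visible if created before the snapshot and not yet deleted by it */
static bool
visible(TupleVersion t, Xid snap)
{
	return t.xmin <= snap && (t.xmax == 0 || t.xmax > snap);
}

int
main(void)
{
	/* the mover runs as xid 100: it deletes the old copy and inserts
	 * the new copy under the same xid, so the hand-off is atomic */
	TupleVersion old_copy = {.xmin = 10, .xmax = 100};
	TupleVersion new_copy = {.xmin = 100, .xmax = 0};

	for (Xid snap = 50; snap <= 150; snap += 50)
	{
		int			n = visible(old_copy, snap) + visible(new_copy, snap);

		printf("snapshot %u sees %d copy(ies)%s\n", snap, n,
			   n == 1 ? "" : "  <-- corrupted result!");
	}
	return 0;
}
```

If the delete and the insert were instead to commit as separate
transactions, a snapshot taken between the two would see zero copies
(or two, in the reverse order), which is exactly the corruption
described above.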
The other point is that you need to keep the indexes updated. That is,
you need to make the indexes point to both the old and the new tuples
until you remove the old tuples from the table, and only then remove
the old index pointers.
This process bloats the indexes, which is not insignificant, considering
that the number of tuples to process is large. If there are several
indexes, this makes your process take even longer.
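As a back-of-envelope (the numbers here are made up): moving N tuples
in a table with K indexes inserts N×K new index entries and leaves
another N×K dead ones to be removed afterwards.

```c
#include <stdio.h>

int
main(void)
{
	long long	moved_tuples = 100000000LL; /* say, 100M tuples to move */
	int			nindexes = 5;

	long long	new_entries = moved_tuples * nindexes;
	long long	dead_entries = new_entries; /* old pointers, removed later */

	printf("transient new index entries: %lld\n", new_entries);
	printf("dead index entries to clean up: %lld\n", dead_entries);
	return 0;
}
```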
You can fix the concurrency problem by holding a lock on the table that
ensures nobody is reading the table until you've finished. But we don't
want to have to hold such a lock for long! And we already established
that the number of pages to check is large, which means you're going to
work for a long time.
So, I'm not really sure that it's practical to implement what you
suggest.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/