Re: Adding skip scan (including MDAM style range skip scan) to nbtree

From: Tomas Vondra <tomas(at)vondra(dot)me>
To: Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2025-05-10 14:59:01
Message-ID: a8dc09f7-91fd-4d3f-becf-1d3504e76c1a@vondra.me
Lists: pgsql-hackers

On 5/10/25 13:14, Matthias van de Meent wrote:
> ...
>
> I've attached a patch that makes IndexAmRoutine a static const*,
> removing it from rd_indexcxt, and returning some of the index ctx
> memory usage to normal:
>
>  count (patch 1) | total_bytes | combined_size
> -----------------+-------------+---------------
>               87 |             |        171776
>               10 |        2048 |         20480
>               40 |        1024 |         40960
>                4 |        2240 |          8960
>               33 |        3072 |        101376
>
> Another patch on top of that, switching rd_indexcxt to
> GenerationContext (from AllocSet) sees the following improvement
>
>  count (patch 2) | total_bytes | combined_size
> -----------------+-------------+---------------
>               87 |             |        118832
>               22 |        1680 |         36960
>               11 |        1968 |         21648
>               50 |        1024 |         51200
>                4 |        2256 |          9024
>
> Also tracked: total memctx-tracked memory usage on a fresh connection [0]:
>
> 3ba2cdaa: 2006024 / 1959 kB
> Master:   2063112 / 2015 kB
> Patch 1:  2040648 / 1993 kB
> Patch 2:  1976440 / 1930 kB
>
> There isn't a lot of space on master to allocate new memory before it
> reaches a (standard linux configuration) 128kB boundary - only 33kB
> (assuming no other memory tracking overhead). It's easy to allocate
> that much, and go over, causing malloc to extend with sbrk by 128kB.
> If we then get back under because all per-query memory was released,
> the newly allocated memory won't have any data anymore, and will get
> released again immediately (default: release with sbrk when the top
> >=128kB is free), thus churning that memory area.
>
> We may just have been lucky before, and your observation that
> MALLOC_TOP_PAD_ >= 4MB fixes the issue reinforces that idea.
>
> If patch 1 or patch 1+2 fixes this regression for you, then that's
> another indication that we exceeded this threshold in a bad way.
>

Thanks! This explanation seems very plausible. I repeated the tests,
and the results agree with it too. Here's what I got for the two
commits just before and after skip scan, and then for 0001 and
0001+0002:

                           old         head         0001    0001+0002
 mode     clients  3ba2cdaa454  99ddf8615c2  54c23341b31  9a6f6679e67
----------------------------------------------------------------------
 prepared       1        10858         3534        11109         3324
                4        25311        11307        25325        10928
               32        38869        14194        39423        13626
----------------------------------------------------------------------
 simple         1         2676         1865         2534         1883
                4         8355         6140         8012         6160
               32        11827         7216        12046         7322

This is the bid=0 case; the bid=1 case is very similar, so I'm leaving
it out to keep this simple (and because formatting those tables is
tedious). A nicer table is in the attached PDF.

Relative to 3ba2cdaa454 it looks like this:

                          head         0001    0001+0002
 mode     clients  99ddf8615c2  54c23341b31  9a6f6679e67
--------------------------------------------------------
 prepared       1          33%         102%          31%
                4          45%         100%          43%
               32          37%         101%          35%
--------------------------------------------------------
 simple         1          70%          95%          70%
                4          73%          96%          74%
               32          61%         102%          62%

So clearly, 0001 helps a lot, essentially eliminating the regression.
But 0002 makes it slow again, so a generation context is not a good
match here (perhaps the rd_indexcxt allocation pattern doesn't fit
what generation contexts are designed for).
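
For reference, I believe 0002 essentially boils down to a one-line
swap in relcache.c, something like this (just a sketch; the actual
patch is attached to Matthias's mail upthread):

/*
 * Sketch of the 0002 idea: create rd_indexcxt as a generation context
 * instead of an allocation set. Generation contexts only recycle
 * memory once every chunk in a block is freed, which suits short-lived
 * FIFO-ish allocations - apparently not the relcache's pattern.
 */
indexcxt = GenerationContextCreate(CacheMemoryContext,
                                   "index info",
                                   ALLOCSET_SMALL_SIZES);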

Based on this I tried a couple of additional experiments (a sketch of
the context setup follows the list):

a) switching rd_indexcxt to ALLOCSET_DEFAULT_SIZES, speculating that
maybe one larger malloc() is cheaper than multiple smaller ones

b) increasing the ALLOC_CHUNK_FRACTION limit from 1/4 to 1/2 of the
block size, so that fewer chunks need to be allocated as separate
blocks

c) switching rd_indexcxt to ALLOCSET_MEDIUM_SIZES, which is the same
as SMALL_SIZES except that INITSIZE is 2kB, combined with the
CHUNK_FRACTION adjustment from (b)
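
To be concrete, here's roughly what (c) looks like (the
ALLOCSET_MEDIUM_SIZES name is mine - only ALLOCSET_SMALL_SIZES and
ALLOCSET_DEFAULT_SIZES exist in memutils.h; rd_indexcxt itself is
created in relcache.c):

/*
 * Hypothetical ALLOCSET_MEDIUM_SIZES for experiment (c): same as
 * ALLOCSET_SMALL_SIZES (0 / 1kB / 8kB) except INITSIZE is 2kB.
 * Experiment (b) is simply s/4/2/ on ALLOC_CHUNK_FRACTION in aset.c,
 * so that only chunks larger than half the max block size get their
 * own dedicated block.
 */
#define ALLOCSET_MEDIUM_MINSIZE   0
#define ALLOCSET_MEDIUM_INITSIZE  (2 * 1024)
#define ALLOCSET_MEDIUM_MAXSIZE   (8 * 1024)
#define ALLOCSET_MEDIUM_SIZES \
    ALLOCSET_MEDIUM_MINSIZE, ALLOCSET_MEDIUM_INITSIZE, ALLOCSET_MEDIUM_MAXSIZE

/* relcache.c, rd_indexcxt creation, with the sizes swapped out */
indexcxt = AllocSetContextCreate(CacheMemoryContext,
                                 "index info",
                                 ALLOCSET_MEDIUM_SIZES);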

The results are in the second table in the PDF. None of this helped
very much, unfortunately. Option (a) is even slower than master in
some cases, (b) helps in some cases but not as much as 0001, and even
the 2kB blocks in (c) make it slow again.

So I guess something like 0001 might be the way to go ...

But doesn't this also highlight how fragile this memory allocation is?
The skip scan patch didn't do anything wrong - it just added a couple
of fields, using a little bit more memory. We accept that allocating
more memory may take more time, but we expect the cost to be roughly
proportional, which clearly isn't the case here.
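
To illustrate how sharp that cliff is, here's a minimal standalone
sketch (mine, not from the patches) of the pattern Matthias described:
each iteration pushes the heap top just past glibc's default 128kB
trim threshold and then frees everything, so glibc extends and trims
the heap with sbrk() over and over. A few extra bytes per allocation
can be the difference between never crossing the threshold and
crossing it on every query.

/*
 * churn.c - repeatedly grow the glibc heap past the default 128kB
 * trim threshold and free it all again, churning the heap top.
 * Build with: gcc churn.c -o churn
 */
#include <malloc.h>
#include <stdlib.h>

#define NCHUNKS 200

int
main(void)
{
    void *chunks[NCHUNKS];

    for (int i = 0; i < 10000; i++)
    {
        /* allocate ~200kB in 1kB pieces, extending the heap */
        for (int j = 0; j < NCHUNKS; j++)
            chunks[j] = malloc(1024);

        /* free it all; the consolidated top chunk now exceeds
         * M_TRIM_THRESHOLD, so glibc hands it back via sbrk() */
        for (int j = 0; j < NCHUNKS; j++)
            free(chunks[j]);
    }

    malloc_stats();             /* arena statistics, printed to stderr */
    return 0;
}

Running this under strace -e trace=brk should show the program break
bouncing up and down; with MALLOC_TOP_PAD_=4194304 in the environment
the churn disappears.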

Many other patches add fields somewhere; it seems like bad luck that
skip scan happened to be the one to trigger this behavior. Quite
likely other patches ran into the same issue and no one noticed -
maybe skip scan just did it in much hotter code, I'm not sure.

Of course, this is not "our" issue - it seems to be glibc-specific
(based on my experience with allocators in other libc implementations).
Still, it's long-standing behavior, and I doubt it will change. But
considering glibc is what most systems use, maybe we should add some
protections?

I recall there were proposals to add an optional mallopt() call to set
M_TOP_PAD when running on glibc; maybe we should revive that (a sketch
of what it might look like is below). I also had a patch to add a
"memory pool", which fixed this as a side effect.
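
For illustration, that could look something like this at backend
startup (a sketch - the call site and the 4MB value are assumptions,
but mallopt(), M_TOP_PAD and M_TRIM_THRESHOLD are standard glibc API):

#include <stdlib.h>

#ifdef __GLIBC__
#include <malloc.h>
#endif

/*
 * Ask glibc to keep extra headroom at the top of the heap, so that
 * releasing all per-query memory doesn't immediately return the pages
 * to the kernel only to sbrk() them back on the next query.
 */
static void
tune_glibc_malloc(void)
{
#ifdef __GLIBC__
    mallopt(M_TOP_PAD, 4 * 1024 * 1024);          /* pad heap growth */
    mallopt(M_TRIM_THRESHOLD, 4 * 1024 * 1024);   /* trim less eagerly */
#endif
}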

regards

--
Tomas Vondra

Attachment Content-Type Size
results.pdf application/pdf 32.2 KB
