Lowering the minimum value for maintenance_work_mem

From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, John Naylor <jcnaylor(at)gmail(dot)com>
Subject: Lowering the minimum value for maintenance_work_mem
Date: 2024-05-16 20:54:58
Message-ID: 20240516205458.ohvlzis5b5tvejru@awork3.anarazel.de
Lists: pgsql-hackers


In the subthread at [1] I needed to trigger multiple rounds of index vacuuming
within one vacuum.

It turns out that with the new dead-tuple implementation, that has actually
become somewhat expensive. Particularly if all tuples on all pages are deleted,
the representation is just "too dense". Normally that's obviously very good,
but for testing, not so much:

With the minimum setting of maintenance_work_mem=1024kB, on a simple table with
narrow rows where all rows are deleted, the first cleanup pass happens only
after 3697812 dead TIDs. The table for that has to be larger than ~128MB.

Needing a ~128MB table to be able to test multiple cleanup passes makes it
much more expensive to test and consequently will lead to worse test coverage.
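For reference, a sketch of the kind of repro this currently requires (the table
name and row count are my illustration, not from the thread):

```sql
-- Illustrative repro (table name and sizes are hypothetical): with the
-- current 1024kB minimum, forcing even a second index-vacuum pass needs
-- more than ~3.7 million dead TIDs, i.e. > ~128MB of heap.
SET maintenance_work_mem = '1024kB';   -- current allowed minimum

CREATE TABLE vactest (i int PRIMARY KEY);
INSERT INTO vactest SELECT generate_series(1, 4000000);
DELETE FROM vactest;                   -- make every TID dead

-- VERBOSE reports one "index scan" round per cleanup pass
VACUUM (VERBOSE) vactest;
```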

I think we should consider lowering the minimum setting of
maintenance_work_mem to the minimum of work_mem. For real-life workloads,
maintenance_work_mem=1024kB is already going to be quite bad, so we don't
protect users much by forbidding a setting lower than 1MB.

Just for comparison: with a limit of 1MB, versions before 17 needed to do the
first cleanup pass after only 174472 dead tuples. That's a ~20x improvement.
Really nice.
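The ~20x figure follows directly from the two numbers above (the ~6 bytes/TID
is my back-of-the-envelope check, consistent with the flat array of 6-byte
ItemPointerData used before 17):

```sql
-- 3697812 TIDs (new representation) vs 174472 (old), same 1MB limit
SELECT 3697812 / 174472.0 AS improvement,             -- ~21.2x
       (1024 * 1024) / 174472.0 AS old_bytes_per_tid; -- ~6.0 bytes per TID
```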


Andres Freund

[1] https://postgr.es/m/20240516193953.zdj545efq6vabymd%40awork3.anarazel.de

