Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem

From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Daniel Gustafsson <daniel(at)yesql(dot)se>, Andres Freund <andres(at)anarazel(dot)de>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Anastasia Lubennikova <lubennikovaav(at)gmail(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
Date: 2018-02-08 23:39:19
Message-ID: 20180208233919.vrbkbcbfh5buzo3h@alvherre.pgsql
Lists: pgsql-hackers

Claudio Freire wrote:

> I don't like looping, though; it seems overly cumbersome. What's worse:
> maintaining that fragile, weird loop that might break (by causing
> unexpected output), or a slight slowdown of the test suite?
>
> I don't know how long it might take on slow machines, but on my
> machine, which isn't a great machine, while the vacuum test isn't
> fast, it's just a tiny fraction of what a simple "make check" takes.
> So it's not a huge slowdown in any case.

Well, what about a machine running tests under valgrind, or with the
infuriatingly slow cache-clobbering code? Or buildfarm members running
on really slow hardware? Lately, a few people have spent a lot of
time trying to reduce the total test time, and it'd be bad to lose
those improvements for no good reason.

I grant you that the looping I proposed is more complicated, but I don't
see any reason why it would break.

Another argument against the LOCK pg_class idea is that it creates an
unnecessary contention point across the whole parallel test group --
with possible weird side effects, perhaps even a deadlock.

Other than the wait loop I proposed, I think we can make a couple of
very simple improvements to this test case to avoid a slowdown:

1. This DELETE takes about a quarter of the time of the one using the
IN clause, and removes about the same number of rows:
delete from vactst where random() < 3.0 / 4;

2. Use a new temp table rather than vactst. Everything is then faster.

3. Figure out the minimum size for the table that triggers the behavior
you want. Right now you use 400k tuples -- maybe 100k are sufficient?
Don't know.
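Combining these suggestions, the test case might look something like the
sketch below. The table name and the 100k row count are assumptions to be
validated against whatever behavior the test needs to trigger:

```sql
-- Use a dedicated temp table instead of vactst, so the test doesn't
-- contend with anything else in the parallel group and cleanup is free.
create temp table vactst_tmp (i int);

-- Start with a smaller table than the current 400k tuples; bump this
-- up only if it turns out not to trigger the behavior under test.
insert into vactst_tmp select generate_series(1, 100000);

-- The cheaper DELETE variant: removes roughly three quarters of the rows.
delete from vactst_tmp where random() < 3.0 / 4;

vacuum vactst_tmp;
drop table vactst_tmp;
```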

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
