Re: Another modest proposal for reducing CLOBBER_CACHE_ALWAYS runtime

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Another modest proposal for reducing CLOBBER_CACHE_ALWAYS runtime
Date: 2021-05-10 18:30:52
Message-ID: 706833.1620671452@sss.pgh.pa.us
Lists: pgsql-hackers

David Rowley <dgrowleyml(at)gmail(dot)com> writes:
> On Mon, 10 May 2021 at 18:04, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> real 293m31.054s
>> to
>> real 1m47.807s
>> Yes, really.

> That's quite impressive.

> I'm very much in favour of this change. Making it more realistic to
> run the regression tests on a CLOBBER_CACHE_ALWAYS build before a
> commit is a very worthy goal and this is a big step towards that.
> Nice.

It occurred to me to check hyrax's results on the older branches
(it also tests v12 and v13), and the slope of the curve is bad:

Branch   Latest "check" phase runtime

HEAD     13:56:11
v13      11:00:33
v12       6:05:30

Seems like we'd better do something about that.

About 2.5 hours' worth of the jump from 12 to 13 can be blamed on
the privileges test, looks like. The slowdown in that evidently
can be blamed on 0c882e52a86, which added this:

+-- results below depend on having quite accurate stats for atest12
+SET default_statistics_target = 10000;
 VACUUM ANALYZE atest12;
+RESET default_statistics_target;

The slow queries in that test all cause the planner to apply the
"leak()" function to every histogram entry for atest12, so this
one change caused a 100X increase in the amount of work there.
I find it a bit remarkable that we barely noticed that in normal
operation. In CCA mode, though, each leak() call takes circa 100ms
(at least on my workstation), so kaboom.
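
To make the mechanism concrete, here's a self-contained sketch of the
moving parts (reconstructed from memory rather than quoted from
privileges.sql, so treat the exact names and definitions as
approximate):

-- leak() is a deliberately non-leakproof plpgsql function wrapped in an
-- operator, so the planner may end up running it while estimating
-- selectivity against atest12's statistics.
CREATE TABLE atest12 AS
  SELECT x AS a, 10001 - x AS b FROM generate_series(1, 10000) x;

CREATE FUNCTION leak(integer, integer) RETURNS boolean
  AS $$ begin return $1 < $2; end $$
  LANGUAGE plpgsql IMMUTABLE;

CREATE OPERATOR <<< (procedure = leak, leftarg = integer, rightarg = integer,
                     restrict = scalarltsel);

-- The commit above bumps the stats target while analyzing, so atest12
-- gets ~10000 histogram entries instead of ~100; per the analysis above,
-- the test's slow queries then push each of those entries through leak().
SET default_statistics_target = 10000;
VACUUM ANALYZE atest12;
RESET default_statistics_target;

-- Merely planning a query that uses the leaky operator is enough to
-- trigger leak() calls (this query is illustrative, not one of the
-- test's slow ones):
EXPLAIN (COSTS OFF) SELECT * FROM atest12 WHERE a <<< 5;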

Anyway, I'm now feeling that what I should do with this patch
is wait for the release cycle to finish and then apply it to
v13 as well as HEAD. The other patch I proposed, to cut
opr_sanity's runtime, may be too invasive for back-patch.

More generally, there is an upward creep in the test runtimes
that doesn't seem to be entirely accounted for by our constantly
adding more tests. I am suspicious that plpgsql may be largely
to blame for this. The smoking gun I found for that is the
runtimes for the plpgsql_control test, which hasn't changed
*at all* since it was added in v11; but hyrax shows these
runtimes:

HEAD:
test plpgsql_control ... ok 56105 ms
v13:
test plpgsql_control ... ok 46879 ms
v12:
test plpgsql_control ... ok 30809 ms

In normal builds that test's time has held pretty steady.
So I'm not sure what's underneath this rock, but I plan
to try to find out.
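
For reference, plpgsql_control consists of nothing more exciting than
simple control-structure exercises, along these lines (an illustrative
sketch, not an excerpt from the test file):

-- Plain loops, IF, EXIT/CONTINUE and the like, with only trivial
-- expressions involved; the test's contents haven't changed since v11.
do $$
declare
  n integer := 0;
begin
  for i in 1..10 loop
    if i % 2 = 0 then
      continue;
    end if;
    n := n + i;
  end loop;
  raise notice 'n = %', n;
end
$$;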

regards, tom lane
