From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Drouvot, Bertrand" <bertranddrouvot(dot)pg(at)gmail(dot)com>, Alexander Lakhin <exclusion(at)gmail(dot)com>, Andrey Borodin <amborodin86(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: regression coverage gaps for gist and hash indexes
On 2023-04-02 13:03:51 -0400, Tom Lane wrote:
> Andres Freund <andres(at)anarazel(dot)de> writes:
> > On 2023-04-02 12:38:32 -0400, Tom Lane wrote:
> >> If they have to run serially then that means that their runtime
> >> contributes 1-for-1 to the total runtime of the core regression tests,
> >> which is not nice at all.
> > Agreed, it's not nice. At least reasonably quick (74ms and 54ms on one run
> > here)...
> Oh, that's less bad than I expected. The discussion in the other thread
> was pointing in the direction of needing hundreds of ms to make indexes
> that are big enough to hit all the code paths.
Well, the tests here really just try to hit the killtuples path, not some of
the paths discussed in the other thread. They need just enough index entries to
encounter a page split (which is then avoided by pruning tuples).

Looks like the test in the other thread could be made a lot cheaper by changing
effective_cache_size for just that test:
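For the archives, here is a minimal sketch of the shape such a test can take. This is illustrative only, not the proposed patch; the table/index names and row counts are made up, and the exact threshold for a page split depends on the AM:

```sql
-- Illustrative sketch: exercise the killtuples path. Fill a hash index
-- to just short of a page split, delete everything, then force an index
-- scan so the scan marks the now-dead entries killed. A subsequent
-- insert can then reuse the pruned space instead of splitting the page.
CREATE TABLE hash_cleanup_test (a int4);
CREATE INDEX hash_cleanup_idx ON hash_cleanup_test USING hash (a);
INSERT INTO hash_cleanup_test SELECT 1 FROM generate_series(1, 500);
DELETE FROM hash_cleanup_test;
SET enable_seqscan = off;
SELECT count(*) FROM hash_cleanup_test WHERE a = 1;  -- marks entries dead
RESET enable_seqscan;
-- this insert should prune the killed entries rather than split
INSERT INTO hash_cleanup_test SELECT 1 FROM generate_series(1, 500);
```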
    /*
     * In 'auto' mode, check if the index has grown too large to fit in cache,
     * and switch to buffering mode if it has.
     *
     * To avoid excessive calls to smgrnblocks(), only check this every
     * BUFFERING_MODE_SWITCH_CHECK_STEP index tuples.
     *
     * In 'stats' state, switch as soon as we have seen enough tuples to have
     * some idea of the average tuple size.
     */
    if ((buildstate->buildMode == GIST_BUFFERING_AUTO &&
         buildstate->indtuples % BUFFERING_MODE_SWITCH_CHECK_STEP == 0 &&
         effective_cache_size < smgrnblocks(RelationGetSmgr(index),
                                            MAIN_FORKNUM)) ||
        (buildstate->buildMode == GIST_BUFFERING_STATS &&
         buildstate->indtuples >= BUFFERING_MODE_TUPLE_SIZE_STATS_TARGET))
    {
        /*
         * Index doesn't fit in effective cache anymore. Try to switch to
         * buffering build mode.
         */
        gistInitBuffering(buildstate);
    }
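I.e. rather than building an index large enough to exceed the default effective_cache_size, the test could shrink the setting for its duration. A hedged sketch (sizes and names are illustrative; whether the switch actually fires also depends on hitting the BUFFERING_MODE_SWITCH_CHECK_STEP cadence above):

```sql
-- Illustrative only: lower effective_cache_size so smgrnblocks() exceeds
-- it after comparatively few index pages, pushing the build toward the
-- buffering mode switch without needing a huge index.
BEGIN;
SET LOCAL effective_cache_size = '1MB';  -- 128 blocks at the default 8kB
CREATE TABLE gist_buffering_test AS
  SELECT point(i, i) AS p FROM generate_series(1, 20000) i;
CREATE INDEX ON gist_buffering_test USING gist (p);
COMMIT;
```

SET LOCAL keeps the change scoped to the transaction, so concurrently running tests are unaffected.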
> >> Can we move them to some other portion of our test suite, preferably one
> >> that's not repeated four or more times in each buildfarm run?
> > Not trivially, at least. Right now 027_stream_regress.pl doesn't run other
> > tests, so we'd not cover the replay portion if moved the tests to
> > contrib/btree_gist or such.
> Yeah, I was imagining teaching 027_stream_regress.pl to run additional
> scripts that aren't in the core tests.
Not opposed to that, but I'm not quite sure about what we'd use as
infrastructure. A second schedule?
I think the test patches I am proposing here are quite valuable to run
without replication involved as well; the killtuples path isn't trivial, so
having it covered by the normal regression tests makes sense to me.
> (I'm still quite unhappy that 027_stream_regress.pl uses the core tests at
> all, really, as they were never particularly designed to cover what it cares
> about. The whole thing is extremely inefficient and it's no surprise that
> its coverage is scattershot.)
I don't think anybody would claim it's great as-is. But I still think that
having a meaningful coverage of replay is a heck of a lot better than not
having any, even if it's not a pretty or all that fast design. And the fact
that 027_stream_regress.pl runs with a small shared_buffers actually shook out
a few bugs...
I don't think we'd want to use a completely separate set of tests for
027_stream_regress.pl: typical tests provide coverage on both the primary
and the standby, I think, and a separate set would just end up duplicating
the main regression tests. But I could imagine that it's worth maintaining a
distinct version of parallel_schedule that removes tests that aren't likely
to provide benefits.
Btw, from what I can tell, the main bottleneck for the main regression tests
right now is the granular use of groups. Because the parallel groups have a
fixed size limit, there's a stall waiting for the slowest test at the end of
each group. If we instead limited concurrency solely in pg_regress, instead
of in the schedule, a quick experiment suggests we could get a good bit of
speedup. It would also remove some indecision paralysis about which group to
add a new test to, as well as the annoyance of having to count the number of
tests in a group manually.
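To make that concrete, the idea would be something like a flat schedule, with the concurrency cap moved into pg_regress (perhaps reusing its existing --max-connections throttle, or a new option) instead of being encoded in hand-sized group boundaries. A hypothetical fragment:

```
# Hypothetical flat schedule: one big parallel group, with concurrency
# limited by pg_regress itself (e.g. --max-connections=20) rather than
# by the number of tests listed on each "test:" line.
test: boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric
```

Today such a group would launch every listed test at once, which is exactly why the schedule is split into counted groups by hand.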