Re: Random pg_upgrade test failure on drongo

From: Alexander Lakhin <exclusion(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "andrew(at)dunslane(dot)net" <andrew(at)dunslane(dot)net>
Subject: Re: Random pg_upgrade test failure on drongo
Date: 2024-01-04 12:00:01
Message-ID: f0d303f1-e380-5988-91c7-74b755abd4bb@gmail.com
Lists: pgsql-hackers

Hello Amit,

03.01.2024 14:42, Amit Kapila wrote:
>
>> So I started to think about other approach: to perform unlink as it's
>> implemented now, but then wait until the DELETE_PENDING state is gone.
>>
> There is a comment in the code which suggests we shouldn't wait
> indefinitely. See "However, we won't wait indefinitely for someone
> else to close the file, as the caller might be holding locks and
> blocking other backends."

Yes, I saw it, but I initially thought we were dealing with a transient
condition there, so waiting in open() (instead of failing immediately)
seemed like a good idea at the time...
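
Just to illustrate that approach (a rough sketch only, not a patch for
src/port/open.c; the function name and the retry limit here are made up):

#include <errno.h>
#include <fcntl.h>
#include <io.h>
#include <windows.h>

/*
 * Retry open() while the file may be in the DELETE_PENDING state (opening
 * such a file fails with ERROR_ACCESS_DENIED, i.e. EACCES), but give up
 * after a bounded number of attempts, so a caller holding locks cannot be
 * blocked indefinitely.
 */
static int
open_wait_delete_pending(const char *path, int flags, int mode)
{
	for (int attempts = 0; attempts < 100; attempts++)
	{
		int			fd = _open(path, flags, mode);

		if (fd >= 0 || errno != EACCES)
			return fd;			/* success, or an unrelated error */
		Sleep(100);				/* let the owner of the other handle close it */
	}
	errno = EACCES;
	return -1;					/* still pending after ~10 seconds */
}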

>> And the internal process is ... background writer (BgBufferSync()).
>>
>> So, I tried just adding bgwriter_lru_maxpages = 0 to postgresql.conf and
>> got 20 x 10 tests passing.
>>
>> Thus, if we just want to get rid of the test failure, maybe it's enough to
>> add this to the test's config...
>>
> What about checkpoints? Can't it do the same while writing the buffers?

Since we're dealing with pg_upgrade/pg_restore here, it's probably not easy
to get the desired effect with a checkpoint, but I think it's not impossible
in principle. More details below.
What happens during the pg_upgrade execution is essentially:
1) CREATE DATABASE "postgres" WITH TEMPLATE = template0 OID = 5 ...;
-- this command flushes file buffers as well
2) ALTER DATABASE postgres OWNER TO ...
3) COMMENT ON DATABASE "postgres" IS ...
4)     -- For binary upgrade, preserve pg_largeobject and index relfilenodes
    SELECT pg_catalog.binary_upgrade_set_next_index_relfilenode('2683'::pg_catalog.oid);
    SELECT pg_catalog.binary_upgrade_set_next_heap_relfilenode('2613'::pg_catalog.oid);
    TRUNCATE pg_catalog.pg_largeobject;
--  ^^^ here we can get the error "could not create file "base/5/2683": File exists"
...
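
For reference, the failing step boils down to an exclusive create
(O_CREAT | O_EXCL) of the new relation file; a simplified illustration
(the real code path is mdcreate(), this is not the actual code):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/*
	 * While the old file is delete-pending, its name is still visible in
	 * the directory (on old Windows versions), so an exclusive create of
	 * the same name fails with EEXIST, which is reported as
	 * "could not create file ...: File exists".
	 */
	int			fd = open("base/5/2683", O_RDWR | O_CREAT | O_EXCL, 0600);

	if (fd < 0)
		fprintf(stderr, "could not create file \"base/5/2683\": %s\n",
				strerror(errno));
	return 0;
}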

We get the effect discussed when the background writer process decides to
flush a file buffer for pg_largeobject during stage 1: it then still holds
a handle on the old file when TRUNCATE unlinks it, so the file gets into
the DELETE_PENDING state and creating a new file under the same name fails.
(Thus, if a checkpoint somehow happened to occur during CREATE DATABASE,
the result must be the same.)
And another important factor is shared_buffers = 1MB (set during the test).
With the default setting of 128MB I couldn't see the failure.
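
The underlying OS behavior can also be demonstrated in isolation with a
small standalone Windows program (just an illustration, unrelated to the
attached script):

#include <stdio.h>
#include <windows.h>

int
main(void)
{
	/*
	 * Keep one handle open, playing the role of the background writer
	 * holding the old pg_largeobject file; FILE_SHARE_DELETE lets the
	 * subsequent unlink succeed.
	 */
	HANDLE		keeper = CreateFileA("demo.tmp", GENERIC_READ,
									 FILE_SHARE_READ | FILE_SHARE_WRITE |
									 FILE_SHARE_DELETE,
									 NULL, CREATE_ALWAYS,
									 FILE_ATTRIBUTE_NORMAL, NULL);

	/* The unlink succeeds, but the file only enters DELETE_PENDING... */
	DeleteFileA("demo.tmp");

	/*
	 * ...so on old Windows versions re-creating a file under the same name
	 * fails (the analogue of "could not create file ...: File exists");
	 * recent Windows releases with POSIX delete semantics remove the name
	 * immediately, and this create succeeds instead.
	 */
	HANDLE		recreate = CreateFileA("demo.tmp", GENERIC_WRITE, 0, NULL,
									   CREATE_NEW, FILE_ATTRIBUTE_NORMAL,
									   NULL);

	if (recreate == INVALID_HANDLE_VALUE)
		printf("re-creation failed: error %lu\n", GetLastError());
	else
		CloseHandle(recreate);

	CloseHandle(keeper);		/* only now does the deletion complete */
	return 0;
}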

It can be reproduced easily (on old Windows versions) just by running
pg_upgrade in a loop; with the default cluster, I got failures on
iterations 22, 37, 17.
If the old cluster contains a dozen databases, the failure probability
increases significantly (with 10 additional databases I got failures on
iterations 4, 1, 6).

Please see the reproducing script attached.

Best regards,
Alexander

Attachment Content-Type Size
pg_upgrade_error-repro.txt text/plain 757 bytes
