From: RECHTÉ Marc <marc(dot)rechte(at)meteo(dot)fr>
To: "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Logical replication timeout
Date: 2024-12-23 09:31:09
Message-ID: 541316667.217562157.1734946269343.JavaMail.zimbra@meteo.fr
Lists: pgsql-hackers
> Can you enable the "streaming" parameter on your system [1]? It allows
> in-progress transactions to be streamed to the subscriber side. I feel this can avoid
> the case where there are many .spill files on the publisher side.
> Another approach is to tune the logical_decoding_work_mem parameter [2].
> This specifies the maximum amount of memory used by logical decoding;
> some changes are spilled to disk when it exceeds the limit. Naively, this setting
> can reduce the number of files.
> [1]: https://www.postgresql.org/docs/14/sql-createsubscription.html
> [2]: https://www.postgresql.org/docs/14/runtime-config-resource.html#GUC-LOGICAL-DECODING-WORK-MEM
> Best regards,
> Hayato Kuroda
> FUJITSU LIMITED
Dear Hayato,
Thanks for your suggestions; both were already tested. In our (real) case (a single transaction with 12 million subtransactions):
1) setting the subscription to streaming just delays the spill file surge a bit. It does not prevent the creation of spill files.
2) we set logical_decoding_work_mem to 20GB, which probably also delayed the problem, but did not solve it.
The real problem is spill file deletion, which can take days in this particular case!
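For reference, the two mitigations above can be applied roughly as follows (a sketch only: the subscription name `my_sub` is hypothetical, and the 20GB value is just the setting from this report):

```sql
-- 1) Enable streaming of in-progress transactions on an existing
--    subscription (hypothetical subscription name "my_sub")
ALTER SUBSCRIPTION my_sub SET (streaming = on);

-- 2) Raise the logical decoding memory limit on the publisher;
--    takes effect after a configuration reload
ALTER SYSTEM SET logical_decoding_work_mem = '20GB';
SELECT pg_reload_conf();
```

Both settings only delay when changes start spilling to disk; for a transaction this large, spill files are still created eventually.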
Marc