Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Alexey Kondratov <a(dot)kondratov(at)postgrespro(dot)ru>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: vignesh C <vignesh21(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2019-11-12 10:42:35
Message-ID: 6b0edf8b-0b33-f862-dfb2-d8bb2b568465@postgrespro.ru
Lists: pgsql-hackers

On 04.11.2019 13:05, Kuntal Ghosh wrote:
> On Mon, Nov 4, 2019 at 3:32 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>> So your result shows that with "streaming on", performance is
>> degrading? By any chance did you try to see where is the bottleneck?
>>
> Right. But, as we increase the logical_decoding_work_mem, the
> performance improves. I've not analyzed the bottleneck yet. I'm
> looking into the same.

My guess is that 64 kB is simply too small a value. In the table schema used
for the tests, every row takes at least 24 bytes to store its column values.
Thus, with this logical_decoding_work_mem value the limit should be hit
after roughly 2500+ rows, i.e. about 400 times during a 1,000,000-row
transaction.
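The estimate above can be sanity-checked with a quick back-of-the-envelope
calculation (assuming, as stated, at least 24 bytes of column data per row;
the actual per-change footprint in the reorder buffer is larger, so the real
streaming frequency is at least this high):

```python
# Rough estimate of how often the 64 kB limit is hit during the test run.
work_mem = 64 * 1024        # logical_decoding_work_mem, in bytes
row_size = 24               # minimum bytes of column data per row (test schema)
total_rows = 1_000_000      # rows modified by the test transaction

rows_per_flush = work_mem // row_size    # rows accumulated before the limit trips
flushes = total_rows // rows_per_flush   # streaming rounds per transaction

print(rows_per_flush, flushes)           # ~2730 rows, ~366 rounds
```

So ReorderBufferStreamTXN would be invoked several hundred times for a single
transaction, which matches the "about 400 times" figure above.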

That is just too frequent, given that ReorderBufferStreamTXN involves a whole
bunch of logic, e.g. it always starts an internal transaction:

/*
 * Decoding needs access to syscaches et al., which in turn use
 * heavyweight locks and such. Thus we need to have enough state around to
 * keep track of those.  The easiest way is to simply use a transaction
 * internally.  That also allows us to easily enforce that nothing writes
 * to the database by checking for xid assignments. ...
 */

It also emits separate stream_start/stream_stop messages around each streamed
transaction chunk. So if streaming starts and stops too frequently, it adds
extra overhead and may even interfere with the current in-progress
transaction.

If I understand it correctly, this is rather expected with too-small values
of logical_decoding_work_mem. It could probably be optimized, but I am not
sure it is worth doing right now.

Regards

--
Alexey Kondratov

Postgres Professional https://www.postgrespro.com
Russian Postgres Company
