Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2019-10-22 17:12:26
Message-ID: 20191022171226.cmrv4vle3ghm4wdm@development
Lists: pgsql-hackers

On Tue, Oct 22, 2019 at 10:30:16AM +0530, Dilip Kumar wrote:
>On Fri, Oct 18, 2019 at 5:32 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>> On Mon, Oct 14, 2019 at 3:09 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>> >
>> > On Thu, Oct 3, 2019 at 4:03 AM Tomas Vondra
>> > <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> > >
>> > >
>> > > Sure, I wasn't really proposing to adding all stats from that patch,
>> > > including those related to streaming. We need to extract just those
>> > > related to spilling. And yes, it needs to be moved right after 0001.
>> > >
>> > I have extracted the spilling related code to a separate patch on top
>> > of 0001. I have also fixed some bugs and review comments and attached
>> > as a separate patch. Later I can merge it to the main patch if you
>> > agree with the changes.
>> >
>>
>> Few comments
>> -------------------------
>> 0001-Add-logical_decoding_work_mem-to-limit-ReorderBuffer
>> 1.
>> + {
>> + {"logical_decoding_work_mem", PGC_USERSET, RESOURCES_MEM,
>> + gettext_noop("Sets the maximum memory to be used for logical decoding."),
>> + gettext_noop("This much memory can be used by each internal "
>> + "reorder buffer before spilling to disk or streaming."),
>> + GUC_UNIT_KB
>> + },
>>
>> I think we can remove 'or streaming' from above sentence for now. We
>> can add it later with later patch where streaming will be allowed.
>Done
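
FWIW, once 0001 is in, this behaves like any other memory GUC, so it
can be tried out like this (just a sketch, the values are made up and
not taken from the patch):

    -- per-session override (the GUC is PGC_USERSET and takes kB units)
    SET logical_decoding_work_mem = '64MB';

    -- or change the cluster-wide default and reload
    ALTER SYSTEM SET logical_decoding_work_mem = '128MB';
    SELECT pg_reload_conf();
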
>>
>> 2.
>> @@ -206,6 +206,18 @@ CREATE SUBSCRIPTION <replaceable
>> class="parameter">subscription_name</replaceabl
>> </para>
>> </listitem>
>> </varlistentry>
>> +
>> + <varlistentry>
>> + <term><literal>work_mem</literal> (<type>integer</type>)</term>
>> + <listitem>
>> + <para>
>> + Limits the amount of memory used to decode changes on the
>> + publisher. If not specified, the publisher will use the default
>> + specified by <varname>logical_decoding_work_mem</varname>. When
>> + needed, additional data are spilled to disk.
>> + </para>
>> + </listitem>
>> + </varlistentry>
>>
>> It is not clear why we need this parameter at least with this patch?
>> I have raised this multiple times [1][2].
>
>I have moved it out as a separate patch (0003), so if we need it for
>the streaming transaction then we can keep it.
>>

I'm OK with moving it to a separate patch. That being said, I think the
ability to control memory usage for individual subscriptions is very
useful. Saying "we don't need such a parameter" is essentially
equivalent to saying "one size fits all", and I think we know that's
not true.

Imagine a system with multiple subscriptions, some of them mostly
replicating OLTP changes, but one or two replicating tables that are
updated in batches. What we'd want is to allow a higher limit for the
batch subscriptions, but a much lower limit for the OLTP ones (which
they should never hit in practice).

With a single global GUC, you'll either have a high value, risking OOM
when the OLTP subscriptions happen to decode a batch update, or a low
value that needlessly penalizes the batch subscriptions.
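
To illustrate, with the 0003 parameter something like this would be
possible (just a sketch - the option name follows the proposed docs
above, the subscription/publication names and numbers are made up, and
I'm assuming the value is in kB like the GUC):

    -- generous limit for the batch-style subscription
    CREATE SUBSCRIPTION batch_sub
        CONNECTION 'host=publisher dbname=app'
        PUBLICATION batch_pub
        WITH (work_mem = 262144);   -- 256MB

    -- tight limit for the OLTP subscriptions, never hit in practice
    CREATE SUBSCRIPTION oltp_sub
        CONNECTION 'host=publisher dbname=app'
        PUBLICATION oltp_pub
        WITH (work_mem = 4096);     -- 4MB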

It's not strictly necessary (and we already have such a limit), so I'm
OK with treating it as an enhancement for the future.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
