Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Ajin Cherian <itsajin(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Erik Rijkers <er(at)xs4all(dot)nl>, Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2020-07-10 05:11:19
Message-ID: CAFiTN-thzsWAfhbrU5W0bNDjJyrgVrqO3E38+X9ELdqPc=DmtQ@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jul 10, 2020 at 9:21 AM Ajin Cherian <itsajin(at)gmail(dot)com> wrote:
>
>
>
> On Thu, Jul 9, 2020 at 1:30 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>
>>
>> > I think if the GUC is set then maybe we can bypass this check so that
>> > it can try to stream every single change?
>> >
>>
>> Yeah and probably we need to do something for the check "while
>> (rb->size >= logical_decoding_work_mem * 1024L)" as well.
>>
>>
> I have made this change, as discussed, the regression tests seem to run fine. I have added a debug that records the streaming for each transaction number. I also had to bypass certain asserts in ReorderBufferLargestTopTXN() as now we are going through the entire list of transactions and not just picking the biggest transaction.

So if always_stream_logical is true then we always go for streaming even
if the size limit has not been reached, and that is good. And if
always_stream_logical is set then we set ctx->streaming = true, which is
also good. So now I don't think we need to change this part of the
code: once we bypass the memory limit and set ctx->streaming = true, it
will always select the streaming option unless that is impossible. With
your changes, if incomplete toast changes mean it cannot pick the
largest top txn for streaming, it will hang forever in the while loop;
in that case it should fall back to spilling.

    while (rb->size >= logical_decoding_work_mem * 1024L)
    {
        /*
         * Pick the largest transaction (or subtransaction) and evict it from
         * memory by streaming, if supported. Otherwise, spill to disk.
         */
        if (ReorderBufferCanStream(rb) &&
            (txn = ReorderBufferLargestTopTXN(rb)) != NULL)

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
