Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2020-01-02 05:50:10
Message-ID: CAFiTN-uT5YZE0egGhKdTteTjcGrPi8hb=FMPpr9_hEB7hozQ-Q@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 30, 2019 at 3:43 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Sun, Dec 29, 2019 at 1:34 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> >
> > I have observed some more issues
> >
> > 1. Currently, in ReorderBufferCommit, it is always expected that
> > whenever we get REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM, we must
> > have already got REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT, and in
> > SPEC_CONFIRM we send the tuple we got in SPEC_INSERT. But now those
> > two messages can be in different streams, so we need to find a way to
> > handle this. Maybe once we get SPEC_INSERT we can remember the
> > tuple, and then if we get the SPEC_CONFIRM in the next stream we can
> > send that tuple?
> >
>
> Your suggestion makes sense to me. So, we can try it.
Sure.
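
The approach discussed above (remember the speculative-insert tuple so a SPEC_CONFIRM arriving in a later stream can still send it) can be sketched roughly as follows. This is a simplified illustration, not actual PostgreSQL code: Change, StreamState, and apply_change are hypothetical stand-ins for the real reorderbuffer structures.

```c
/* Hypothetical sketch: keep the SPEC_INSERT tuple in per-transaction
 * state that survives across stream boundaries, and emit it only when
 * the matching SPEC_CONFIRM is seen, even in a later stream. */
#include <assert.h>
#include <stddef.h>

typedef enum
{
    CHANGE_SPEC_INSERT,
    CHANGE_SPEC_CONFIRM
} ChangeKind;

typedef struct
{
    ChangeKind  kind;
    const char *tuple;      /* payload for SPEC_INSERT, NULL otherwise */
} Change;

typedef struct
{
    const char *saved_spec_tuple;   /* remembered across streams */
    const char *last_sent;          /* what the output plugin last saw */
} StreamState;

/* Process one change. On SPEC_INSERT we only remember the tuple; on
 * SPEC_CONFIRM we send the remembered tuple, which may have arrived
 * in a previous stream of the same in-progress transaction. */
static void
apply_change(StreamState *st, const Change *c)
{
    switch (c->kind)
    {
        case CHANGE_SPEC_INSERT:
            st->saved_spec_tuple = c->tuple;    /* don't send yet */
            break;
        case CHANGE_SPEC_CONFIRM:
            assert(st->saved_spec_tuple != NULL);
            st->last_sent = st->saved_spec_tuple;
            st->saved_spec_tuple = NULL;
            break;
    }
}
```

The key point is only that the remembered tuple lives in state keyed to the transaction, not to the stream, so a stream boundary between SPEC_INSERT and SPEC_CONFIRM is harmless.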
>
> > 2. At commit time, in DecodeCommit, we check whether we need to skip
> > the changes of the transaction by calling SnapBuildXactNeedsSkip.
> > But since we now support streaming, it's possible that before we
> > decode the commit WAL we have already sent the changes to the output
> > plugin, even though we could have skipped those changes. So my
> > question is: instead of checking at commit time, can't we check
> > before adding the change to the ReorderBuffer itself?
> >
>
> I think if we can do that, then the same will be true for the current
> code, irrespective of this patch. I think it is possible that we can't
> take that decision while decoding because we haven't assembled a
> consistent snapshot yet. I think we might be able to do that when we
> try to stream the changes. We need to take care of all the
> conditions during streaming (when the logical_decoding_work_mem limit
> is reached) as we do in DecodeCommit. This needs a bit more study.
I agree.
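
The skip decision being discussed can be illustrated with a small sketch. This is not PostgreSQL code: DecodingState, needs_skip, and stream_should_send are hypothetical simplifications of SnapBuildXactNeedsSkip and the surrounding state, showing the idea of evaluating the same commit-time conditions before streaming changes to the output plugin.

```c
/* Hypothetical sketch: apply the commit-time skip test at stream time,
 * before any change is handed to the output plugin. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;    /* simplified stand-in for an LSN */

typedef struct
{
    XLogRecPtr start_decoding_at;   /* records before this are skipped */
    bool       consistent_snapshot; /* snapshot fully assembled yet? */
} DecodingState;

/* Mirror of the commit-time test: skip if no consistent snapshot has
 * been assembled, or if the record predates the decoding start point. */
static bool
needs_skip(const DecodingState *st, XLogRecPtr ptr)
{
    return !st->consistent_snapshot || ptr < st->start_decoding_at;
}

/* At stream time (e.g. when the memory limit is hit), consult the same
 * conditions instead of waiting for DecodeCommit. */
static bool
stream_should_send(const DecodingState *st, XLogRecPtr ptr)
{
    return !needs_skip(st, ptr);
}
```

As the reply notes, the snapshot may not yet be consistent while the changes are being queued, which is why the check fits more naturally at stream time than at ReorderBuffer insertion time.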

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
