Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2020-03-04 09:03:40
Message-ID: CAA4eK1K+n6d7hhKn5jzpxNWRT51RA6mKoyo+aqvDerqPZgDDuA@mail.gmail.com
Lists: pgsql-hackers

On Wed, Mar 4, 2020 at 10:28 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Wed, Mar 4, 2020 at 3:16 AM Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> >
> > The first thing I realized is that WAL-logging of assignments in v12
> > does both the "old" logging (using a dedicated message) and the "new"
> > one, with the toplevel-XID embedded in the first message. Yes, the
> > patch was wrong, because it eliminated all calls to
> > ProcArrayApplyXidAssignment() and so it was trivial to crash the
> > replica due to KnownAssignedXids overflow. But I don't think
> > re-introducing the XLOG_XACT_ASSIGNMENT message is the right fix.
> >
> > I actually proposed doing this (having both ways to log assignments) so
> > that there's no regression risk with (wal_level < logical). But IIRC
> > Andres objected to it, arguing that we should not log the same piece
> > of information in two very different ways at the same time (IIRC it
> > was discussed at the FOSDEM dev meeting, so I don't have a link to
> > share).
> > And I do agree with him ...
> >
>
> So, aren't we worried about the overhead of the additional WAL and
> its performance impact on transactions? We might want to run the
> pgbench read-write test to see whether this adds any significant
> overhead.
>

I have briefly looked at the original patch, and it seems the
additional overhead occurs only when subtransactions are involved, so
ideally it shouldn't impact the default pgbench workload, but there
is no harm in checking. We might need to build a custom script that
uses subtransactions to measure the impact, but I think it is worth
checking.
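
To make that concrete, something like the following could serve as
the custom script (an untested sketch; the savepoint forces a
subtransaction, which is what exercises the assignment-logging path,
and the file name and client/duration settings below are just
placeholders):

-- pgbench sets :scale automatically from pgbench_branches
\set aid random(1, 100000 * :scale)
\set delta random(-5000, 5000)
BEGIN;
-- the savepoint starts a subtransaction; the UPDATE inside it then
-- forces a subtransaction XID to be assigned
SAVEPOINT s1;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
RELEASE SAVEPOINT s1;
END;

Then run it against both master and the patched build, e.g.:

pgbench -n -c 8 -j 8 -T 300 -f subxact.sql

and compare the TPS numbers along with the amount of WAL generated.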

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
