From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>, Duncan Sands <duncan(dot)sands(at)deepbluecap(dot)com>, "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: Logical replication 'invalid memory alloc request size 1585837200' after upgrading to 17.5
Date: 2025-05-29 17:26:32
Message-ID: CAD21AoBDDGkHLryP1TJeh9QM3DB5ipLLThGfC6P_mk_bsxSA8A@mail.gmail.com
Lists: pgsql-bugs
On Thu, May 29, 2025 at 1:22 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Thu, May 29, 2025 at 11:37 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> >
> > On Wed, May 28, 2025 at 10:20 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > >
> > > On Thu, May 29, 2025 at 12:22 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> > > >
> > > > On Mon, May 26, 2025 at 4:19 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > > >
> > > > > On Mon, May 26, 2025 at 2:52 PM Hayato Kuroda (Fujitsu)
> > > > > <kuroda(dot)hayato(at)fujitsu(dot)com> wrote:
> > > > > >
> > > > > > > If the above hypothesis is true, we need to consider another idea so
> > > > > > > that we can execute invalidation messages in both cases.
> > > > > >
> > > > > > The straightforward fix is to check the change queue as well when the transaction
> > > > > > has invalidation messages. 0003 implemented that. One downside is that traversing
> > > > > > the changes can affect performance: currently we iterate over all of the changes even for a
> > > > > > single REORDER_BUFFER_CHANGE_INVALIDATION. I cannot find a better solution for now.
> > > > > >
> > > > >
> > > > > It can impact the performance for large transactions with fewer
> > > > > invalidations, especially the ones which have spilled changes, because
> > > > > it needs to traverse the entire list of changes again at the end.
> > > >
> > > > What if we remember all executed REORDER_BUFFER_CHANGE_INVALIDATION in
> > > > a queue while replaying the transaction so that we can execute them at
> > > > the end in a non-error path, instead of re-traversing the entire list
> > > > of changes to execute the inval messages?
> > > >
> > >
> > > The current proposed patch (v4) also traverses only the required
> > > inval messages, as it maintains a separate queue for that. So,
> > > what would be the advantage of forming such a queue during the
> > > processing of changes? Are you imagining a local queue instead of one
> > > at the ReorderBufferTXN level? I feel we still need it at the
> > > ReorderBufferTXN level to ensure that we can execute those changes
> > > across streaming blocks; otherwise, the cleanup of such a local queue
> > > would be tricky and add to the maintenance effort.
> >
> > Hmm, right. It seems that we keep accumulating inval messages across
> > streaming blocks.
> >
> > > One disadvantage of the approach you suggest is that the changes in
> > > the new queue won't be accounted for in logical_decoding_work_mem
> > > computation, which can be done in the proposed approach, although the
> > > patch hasn't implemented it as of now.
> >
> > If we serialize the new queue to disk, we would need to restore
> > it in the PG_CATCH() block in order to execute all inval messages,
> > which is something that I'd like to avoid as it would involve many
> > operations that could themselves end up in an error.
> >
> > If each ReorderBufferTXN keeps only non-distributed inval messages in
> > txn->invalidations and distributes only txn->invalidations to other
> > transactions, the scope of influence of a single inval message is
> > limited to the transactions that are being decoded at the same time.
> > What are the chances that the size of txn->invalidations reaches 1GB?
> > Given that the size of SharedInvalidationMessage is 16 bytes, we would
> > need about 67 million inval messages generated across concurrent
> > transactions.
> >
>
> I agree that the chances are much lower than at present if
> txn->invalidations doesn't contain invalidations from other
> transactions, but it is not clear what exactly you are advocating by
> it. Are you trying to advocate that we should maintain a member
> similar to txn->invalidations (say txn->distributed_invals) instead of
> a queue?
Yes, because I guess it's much simpler. I don't think it would be a
good idea to introduce the new concepts of accounting for the memory
usage of the distributed inval messages and serializing them, at least
on the back branches. In the case where txn->distributed_invals is
about to overflow (the threshold doesn't have to be 1GB), we can
invalidate all caches instead.
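To illustrate, the overflow fallback could be sketched roughly as below. This is a hypothetical, self-contained sketch, not PostgreSQL's actual code: the struct fields, the AddDistributedInval helper, and the cap value are illustrative stand-ins, and the real SharedInvalidationMessage is a union defined in sinval.h (16 bytes in size).

```c
#include <stdlib.h>
#include <stdbool.h>

#define INVAL_MSG_SIZE 16          /* matches sizeof(SharedInvalidationMessage) */
#define MAX_DISTR_INVAL_MSGS 8192  /* arbitrary cap, far below 1GB of messages */

/* Stand-in for the real SharedInvalidationMessage union */
typedef struct SharedInvalidationMessage
{
    char payload[INVAL_MSG_SIZE];
} SharedInvalidationMessage;

/* Only the fields relevant to this sketch */
typedef struct ReorderBufferTXN
{
    SharedInvalidationMessage *distributed_invals; /* accumulated messages */
    int     ninvals;                               /* number accumulated */
    bool    invalidate_all;                        /* overflowed: invalidate
                                                    * every cache instead */
} ReorderBufferTXN;

/*
 * Accumulate one distributed inval message for the transaction. If the
 * cap is reached, discard the queue and mark the transaction so that
 * decoding falls back to invalidating all caches.
 */
static void
AddDistributedInval(ReorderBufferTXN *txn, const SharedInvalidationMessage *msg)
{
    if (txn->invalidate_all)
        return;                 /* already overflowed, nothing to track */

    if (txn->ninvals >= MAX_DISTR_INVAL_MSGS)
    {
        free(txn->distributed_invals);
        txn->distributed_invals = NULL;
        txn->ninvals = 0;
        txn->invalidate_all = true;   /* fall back to full invalidation */
        return;
    }

    txn->distributed_invals = realloc(txn->distributed_invals,
                                      (txn->ninvals + 1) * sizeof(*msg));
    txn->distributed_invals[txn->ninvals++] = *msg;
}
```

The design mirrors how the backend already handles sinval queue overflow: rather than tracking an unbounded set precisely, past a threshold it is cheaper and safer to invalidate everything.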
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com