Re: logical decoding : exceeded maxAllocatedDescs for .spill files

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Noah Misch <noah(at)leadboat(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, Alvaro Herrera from 2ndQuadrant <alvherre(at)alvh(dot)no-ip(dot)org>, Andres Freund <andres(at)anarazel(dot)de>, Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: logical decoding : exceeded maxAllocatedDescs for .spill files
Date: 2020-02-04 09:10:26
Message-ID: CAA4eK1+vrHbCzbk7VvELvzy_9wfeJf8hoUZQQyUOhE750hWTDw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Feb 4, 2020 at 10:15 AM Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com> wrote:
>
> On Sun, Jan 12, 2020 at 9:51 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >
> > Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
> > > On Sat, Jan 11, 2020 at 10:53:57PM -0500, Tom Lane wrote:
> > >> remind me where the win came from, exactly?
> >
> > > Well, the problem is that in 10 we allocate tuple data in the main
> > > memory ReorderBuffer context, and when the transaction gets decoded we
> > > pfree() it. But in AllocSet that only moves the data to the freelists,
> > > it does not release it entirely. So with the right allocation pattern
> > > (sufficiently diverse chunk sizes) this can easily result in allocation
> > > of large amount of memory that is never released.
> >
> > > I don't know if this is what's happening in this particular test, but I
> > > wouldn't be surprised by it.
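
As an aside, this retention behaviour is easy to demonstrate outside
the decoding path with a minimal standalone sketch against the stock
memory-context APIs (the context name and the loop of chunk sizes
below are arbitrary):

    MemoryContext cxt = AllocSetContextCreate(CurrentMemoryContext,
                                              "freelist demo",
                                              ALLOCSET_DEFAULT_SIZES);
    MemoryContext oldcxt = MemoryContextSwitchTo(cxt);
    int         size;

    /* allocate and immediately free chunks of diverse size classes */
    for (size = 64; size <= 4096; size *= 2)
        pfree(palloc(size));

    MemoryContextSwitchTo(oldcxt);

    /*
     * The freed chunks now sit on the context's per-size freelists;
     * none of the underlying blocks have been returned to the OS.
     */
    MemoryContextStats(cxt);

    /* Only resetting or deleting the context actually releases them. */
    MemoryContextDelete(cxt);
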
> >
> > Nah, don't think I believe that: the test inserts a bunch of tuples,
> > but they look like they will all be *exactly* the same size.
> >
> > CREATE TABLE decoding_test(x integer, y text);
> > ...
> >
> > FOR i IN 1..10 LOOP
> > BEGIN
> > INSERT INTO decoding_test(x) SELECT generate_series(1,5000);
> > EXCEPTION
> > when division_by_zero then perform 'dummy';
> > END;
> > END LOOP;
> >
> I performed the same test in pg11 and reproduced the issue on the
> commit prior to a4ccc1cef5a04 (Generational memory allocator).
>
> ulimit -s 1024
> ulimit -v 300000
>
> wal_level = logical
> max_replication_slots = 4
>
> And executed the following code snippet (shared by Amit Khandekar
> earlier in the thread).
>
..
>
> SELECT data from pg_logical_slot_get_changes('test_slot', NULL, NULL) LIMIT 10;
>
> I got the following error:
> ERROR: out of memory
> DETAIL: Failed on request of size 8208.
>
> After that, I applied the "Generational memory allocator" patch and
> that solved the issue. From the error message, it is evident that the
> underlying code is trying to allocate MaxHeapTupleSize bytes of memory
> for each tuple. So, I re-introduced the following lines (which were
> removed by a4ccc1cef5a04) on top of the patch:
>
> --- a/src/backend/replication/logical/reorderbuffer.c
> +++ b/src/backend/replication/logical/reorderbuffer.c
> @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)
>
> alloc_len = tuple_len + SizeofHeapTupleHeader;
>
> + if (alloc_len < MaxHeapTupleSize)
> + alloc_len = MaxHeapTupleSize;
>
> And the issue was reproduced with the same error:
> WARNING: problem in Generation Tuples: number of free chunks 0 in
> block 0x7fe9e9e74010 exceeds 1018 allocated
> .....
> ERROR: out of memory
> DETAIL: Failed on request of size 8208.
>
> I don't understand the code well enough to comment on whether we can
> back-patch only this part of the code.
>

I don't think we can back-patch just that part of the code, as it is
tied to the way we maintain a cache (~8MB) of frequently allocated
objects; see the comments around the definition of
max_cached_tuplebufs. But we can probably do something once we reach
that limit: once we know we have already allocated
max_cached_tuplebufs tuples of size MaxHeapTupleSize, we don't need to
round further allocations up to that size. Does this make sense? A
rough sketch of the idea follows.
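
(Untested sketch against the pre-a4ccc1cef5a04 ReorderBufferGetTupleBuf;
nr_allocated_tuplebufs is an assumed new bookkeeping field on
ReorderBuffer, not something that exists today:)

    alloc_len = tuple_len + SizeofHeapTupleHeader;

    /*
     * Round short requests up to MaxHeapTupleSize only while the cache
     * of reusable tuple buffers can still grow.  Once we have handed
     * out max_cached_tuplebufs buffers of that size, allocate only
     * what was actually requested, so that decoding a long transaction
     * of small tuples does not pin ~8KB per tuple.
     */
    if (alloc_len < MaxHeapTupleSize &&
        rb->nr_allocated_tuplebufs < max_cached_tuplebufs)
    {
        alloc_len = MaxHeapTupleSize;
        rb->nr_allocated_tuplebufs++;   /* assumed new counter */
    }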

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
