Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2019-10-02 22:32:54
Message-ID: 20191002223254.wgftz4rzwd2m57gn@development
Lists: pgsql-hackers

On Wed, Oct 02, 2019 at 04:27:30AM +0530, Amit Kapila wrote:
>On Tue, Oct 1, 2019 at 7:21 PM Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
>wrote:
>
>> On Tue, Oct 01, 2019 at 06:55:52PM +0530, Amit Kapila wrote:
>> >
>On further testing, I found that the patch seems to have problems with
>toast. Consider the scenario below:
>> >Session-1
>> >CREATE TABLE large_text(t1 text);
>> >INSERT INTO large_text
>> >SELECT (SELECT string_agg('x', ',')
>> >FROM generate_series(1, 1000000)) FROM generate_series(1, 1000);
>> >
>> >Session-2
>> >SELECT * FROM pg_create_logical_replication_slot('regression_slot',
>> >'test_decoding');
>> >SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
>> >*--kaboom*
>> >
>> >The second statement in Session-2 leads to a crash.
>> >
>>
>> OK, thanks for the report - will investigate.
>>
>
>It was an assertion failure in ReorderBufferCleanupTXN at the line below:
>+ /* Check we're not mixing changes from different transactions. */
>+ Assert(change->txn == txn);
>

Can you still reproduce this issue with the patch I sent on 28/9? I have
been unable to trigger the failure, and it seems pretty similar to the
failure you reported (and I fixed) on 28/9.

>> >Other than that, I am not sure whether the changes related to spilling to
>> >disk after logical_decoding_work_mem work for toast tables, as I couldn't
>> >hit that code for the toast table case, but I might be missing something.
>> >As mentioned previously, I feel there should be some way to test whether
>> >this patch works for the cases it claims to handle. As of now, I have to
>> >check via debugging. Let me know if there is any way I can test this.
>> >
>>
>> That's one of the reasons why I proposed to move the statistics (which
>> show how many transactions / bytes were spilled to disk) forward from a
>> later patch in the series. I don't think there's a better way.
>>
>>
>I like that idea, but I think you need to split that patch so that it only
>contains the stats related to spilling. It would be easier to review if you
>prepare it on top of
>0001-Add-logical_decoding_work_mem-to-limit-ReorderBuffer.
>

Sure, I wasn't really proposing to add all the stats from that patch,
including those related to streaming. We need to extract just those
related to spilling. And yes, it needs to be moved right after 0001.
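
For reference, once the spill statistics are split out, the toast scenario
above could be checked without a debugger by forcing a small memory limit and
then looking at the spill counters after decoding. A rough sketch, assuming
the counters end up exposed as spill_txns / spill_count / spill_bytes on a
per-slot statistics view (the view and column names here are illustrative,
not necessarily what the patch implements):

    -- force the large transaction to exceed the in-memory limit
    SET logical_decoding_work_mem = '64kB';

    -- decode the changes without consuming them
    SELECT count(*)
      FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);

    -- hypothetical per-slot spill counters; a non-zero spill_txns would
    -- confirm the spill-to-disk path was actually exercised
    SELECT spill_txns, spill_count, spill_bytes
      FROM pg_stat_replication_slots
     WHERE slot_name = 'regression_slot';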

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
