From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Subject: Re: Assertion failure while streaming toasted data
Date: 2021-05-25 10:11:15
Message-ID: CAFiTN-uJ3s5rb29oP-UW_YqvFJcRoy9=Nk0p0prJed6C-xv9vQ@mail.gmail.com
Lists: pgsql-hackers
On Tue, May 25, 2021 at 3:33 PM Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com> wrote:
>> The attached patch should fix the issue; now the output is like below
>>
>
> Thanks. This looks fine to me. We should still be able to stream multi-insert transactions (COPY) as and when the copy buffer becomes full and is flushed. That seems to be a reasonable restriction to me.
>
> We should incorporate the regression test in the final patch. I am not entirely sure if what I have done is acceptable (or even works in all scenarios). We could possibly have a long list of tuples instead of doing the exponential magic. Or we should consider lowering the minimum value of logical_decoding_work_mem and running these tests with a much lower value. In fact, that's how I caught the problem in the first place: I had deliberately lowered the value to 1kB so that the streaming code kicks in very often, even for small transactions.
Thanks for confirming. I will come up with the test and add it to
the next version of the patch.
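
A regression test along the lines discussed above might look like the
following sketch. The table name, slot name, and data sizes are
illustrative, not taken from the actual patch; it assumes the stock
minimum of 64kB for logical_decoding_work_mem (the 1kB value Pavan
mentions would require a modified build) and uses test_decoding's
stream-changes option to exercise the streaming path:

```sql
-- Hypothetical sketch; names and sizes are illustrative only.
-- Lower the memory limit so streaming kicks in for modest transactions
-- (64kB is the stock minimum for logical_decoding_work_mem).
SET logical_decoding_work_mem = '64kB';

CREATE TABLE toasted (id int PRIMARY KEY, payload text);
-- Force out-of-line storage: repeat('x', ...) compresses too well to
-- be toasted otherwise.
ALTER TABLE toasted ALTER COLUMN payload SET STORAGE EXTERNAL;

SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');

-- One transaction inserting a long list of toasted tuples (the
-- alternative to the "exponential magic"), large enough to exceed
-- the memory limit and trigger streaming.
BEGIN;
INSERT INTO toasted
  SELECT g, repeat('x', 8000) FROM generate_series(1, 100) AS g;
COMMIT;

-- Consume the changes with streaming enabled.
SELECT count(*) > 0 AS got_changes
  FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,
                                   'stream-changes', '1');
```

With streaming enabled, the decoded output should contain
"opening a streamed block" / "closing a streamed block" markers from
test_decoding rather than a single committed batch; the count check
above is only a minimal smoke test.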
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com