| From: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> |
|---|---|
| To: | Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru> |
| Cc: | Alexey Kondratov <a(dot)kondratov(at)postgrespro(dot)ru>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Erik Rijkers <er(at)xs4all(dot)nl>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions |
| Date: | 2019-09-16 20:57:19 |
| Message-ID: | 20190916205719.mnyytrsredcuf2or@development |
| Lists: | pgsql-hackers |
On Mon, Sep 16, 2019 at 10:29:18PM +0300, Konstantin Knizhnik wrote:
>
>
>On 16.09.2019 19:54, Alexey Kondratov wrote:
>>On 30.08.2019 18:59, Konstantin Knizhnik wrote:
>>>
>>>I think that instead of defining savepoints it is simpler and more
>>>efficient to use
>>>
>>>BeginInternalSubTransaction +
>>>ReleaseCurrentSubTransaction/RollbackAndReleaseCurrentSubTransaction
>>>
>>>as it is done in PL/pgSQL (pl_exec.c).
>>>Not sure if it can pr
>>>
>>
>>Both BeginInternalSubTransaction and DefineSavepoint use
>>PushTransaction() internally for a normal subtransaction start, so
>>they seem to be identical from a performance perspective, which is
>>also stated in the code comments:
>
>Yes, they definitely use the same mechanism and most likely provide
>similar performance.
>But BeginInternalSubTransaction does not require generating a
>savepoint name, which seems redundant in this case.
>
>
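FWIW the pl_exec.c pattern mentioned above looks roughly like this (a
trimmed sketch only; the wrapper name is made up and the error-data
handling is omitted):

/* needs access/xact.h, utils/memutils.h and utils/resowner.h */
static void
apply_change_in_subxact(void)        /* hypothetical wrapper */
{
    MemoryContext oldcontext = CurrentMemoryContext;
    ResourceOwner oldowner = CurrentResourceOwner;

    /* no savepoint name needed, NULL is fine here */
    BeginInternalSubTransaction(NULL);

    /* run the actual work in the caller's memory context */
    MemoryContextSwitchTo(oldcontext);

    PG_TRY();
    {
        /* ... apply the decoded change here ... */

        /* success - commit the subxact, restore the caller's state */
        ReleaseCurrentSubTransaction();
        MemoryContextSwitchTo(oldcontext);
        CurrentResourceOwner = oldowner;
    }
    PG_CATCH();
    {
        /* failure - roll the subxact back, restore the caller's state */
        RollbackAndReleaseCurrentSubTransaction();
        MemoryContextSwitchTo(oldcontext);
        CurrentResourceOwner = oldowner;
        PG_RE_THROW();
    }
    PG_END_TRY();
}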
>>
>>Anyway, I've profiled my apply worker (flamegraph attached) and it
>>spends the vast majority of its time (>90%) applying changes. So the
>>problem is not in the savepoints themselves, but in the fact that we
>>first apply all the changes and then abort all that work. Not sure
>>that it is possible to do anything about this case.
>>
>
>Looks like the only way to increase apply speed is to do it in
>parallel: make it possible to concurrently execute non-conflicting
>transactions.
>
True, although it seems like a massive can of worms to me. I'm not aware
of a way to identify non-conflicting transactions in advance, so it would
have to be implemented as optimistic apply, with detection of and
recovery from conflicts.
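Just to illustrate the control flow I have in mind, a toy standalone
sketch (apply_xact, the conflict flag and the serial fallback are all
made up for illustration, this is not actual apply-worker code):

#include <stdbool.h>
#include <stdio.h>

typedef struct Xact
{
    int         xid;
    bool        conflicts;      /* pretend conflict-detection result */
} Xact;

/* pretend to apply one transaction, return false on conflict */
static bool
apply_xact(const Xact *x)
{
    if (x->conflicts)
    {
        printf("xid %d: conflict, aborting the batch\n", x->xid);
        return false;
    }
    printf("xid %d: applied\n", x->xid);
    return true;
}

int
main(void)
{
    Xact        batch[] = {{100, false}, {101, true}, {102, false}};
    int         nxacts = sizeof(batch) / sizeof(batch[0]);
    bool        ok = true;

    /*
     * Optimistic pass - in a real implementation these would run in
     * parallel apply workers, here we just loop over the batch.
     */
    for (int i = 0; i < nxacts; i++)
    {
        if (!apply_xact(&batch[i]))
        {
            ok = false;
            break;
        }
    }

    /*
     * Recovery - throw away the optimistic work (not modeled here) and
     * re-apply everything serially in commit order.
     */
    if (!ok)
    {
        printf("falling back to serial apply\n");
        for (int i = 0; i < nxacts; i++)
            printf("xid %d: applied serially\n", batch[i].xid);
    }

    return 0;
}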
I'm not against doing that, and I'm willing to spend some time on
reviews etc., but it seems like a completely separate effort.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services