Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Alexey Kondratov <a(dot)kondratov(at)postgrespro(dot)ru>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Erik Rijkers <er(at)xs4all(dot)nl>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2019-08-30 15:59:32
Message-ID: 322e40c4-8ca7-6c34-2544-28a6d95989c2@postgrespro.ru
Lists: pgsql-hackers


> FWIW my understanding is that the speedup comes mostly from
> elimination of the serialization to a file. That however requires
> savepoints to handle aborts of subtransactions - I'm pretty sure it'd
> be trivial to create a workload where this will be much slower (with
> many aborts of large subtransactions).

I think that, instead of defining savepoints, it is simpler and more
efficient to use

BeginInternalSubTransaction +
ReleaseCurrentSubTransaction/RollbackAndReleaseCurrentSubTransaction

as is done in PL/pgSQL (pl_exec.c).
Not sure if it can pr
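
For reference, a minimal sketch of that pattern, roughly as it appears in
exec_stmt_block() in pl_exec.c, adapted to an apply-side context. Here
apply_streamed_change() is a hypothetical placeholder for whatever work
runs inside the internal subtransaction; pl_exec.c additionally copies the
error data with CopyErrorData() before flushing it and re-throws later,
which this sketch omits:

    #include "postgres.h"

    #include "access/xact.h"
    #include "utils/resowner.h"

    /* Hypothetical placeholder for the work done inside the subxact. */
    extern void apply_streamed_change(void);

    static void
    apply_change_in_internal_subxact(void)
    {
        MemoryContext oldcontext = CurrentMemoryContext;
        ResourceOwner oldowner = CurrentResourceOwner;

        BeginInternalSubTransaction(NULL);
        /* Run the actual work in the caller's memory context. */
        MemoryContextSwitchTo(oldcontext);

        PG_TRY();
        {
            apply_streamed_change();

            /* Success: commit the internal subtransaction. */
            ReleaseCurrentSubTransaction();
            MemoryContextSwitchTo(oldcontext);
            CurrentResourceOwner = oldowner;
        }
        PG_CATCH();
        {
            /* Failure: discard the error and abort the subtransaction. */
            MemoryContextSwitchTo(oldcontext);
            FlushErrorState();

            RollbackAndReleaseCurrentSubTransaction();
            MemoryContextSwitchTo(oldcontext);
            CurrentResourceOwner = oldowner;
        }
        PG_END_TRY();
    }

The same RollbackAndReleaseCurrentSubTransaction() call could presumably
be issued directly when a streamed subtransaction abort arrives, without
going through an error handler at all.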

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
