From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: "Andrey V. Lepikhov" <a(dot)lepikhov(at)postgrespro(dot)ru>
Subject: Re: POC: postgres_fdw insert batching
On Fri, Jul 10, 2020 at 09:28:44AM +0500, Andrey V. Lepikhov wrote:
>On 6/28/20 8:10 PM, Tomas Vondra wrote:
>>Now, the primary reason why the performance degrades like this is that
>>while FDW has batching for SELECT queries (i.e. we read larger chunks of
>>data from the cursors), we don't have that for INSERTs (or other DML).
>>Every time you insert a row, it has to go all the way down into the
>You added new fields to the PgFdwModifyState struct. Why didn't you
>reuse the ResultRelInfo::ri_CopyMultiInsertBuffer field and the
>CopyMultiInsertBuffer machinery as storage for the incoming tuples?
Because I was focused on speeding up INSERTs, and that path does not use
CopyMultiInsertBuffer, I think. I agree the way the tuples are stored
may be improved, of course.
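
The core idea of the patch, as described above, is to send many rows per
remote statement instead of one. A minimal sketch of how such a batched
remote INSERT might be deparsed (this is an illustration only; the
function name, table name, and exact SQL shape are assumptions, not the
patch's actual deparse code):

```python
def batched_insert_sql(table: str, ncols: int, nrows: int) -> str:
    """Build a parameterized multi-row INSERT, e.g. for ncols=2, nrows=2:
    INSERT INTO t VALUES ($1, $2), ($3, $4)

    With per-row inserts, each row costs one round trip to the remote
    server; batching nrows rows into one statement cuts the number of
    round trips by roughly a factor of nrows.
    """
    groups = []
    param = 1
    for _ in range(nrows):
        placeholders = ", ".join(f"${param + i}" for i in range(ncols))
        groups.append(f"({placeholders})")
        param += ncols
    return f"INSERT INTO {table} VALUES " + ", ".join(groups)

# Example: a batch of 3 two-column rows in a single statement.
print(batched_insert_sql("t", 2, 3))
# INSERT INTO t VALUES ($1, $2), ($3, $4), ($5, $6)
```

This is why the per-row behavior degrades so badly over high-latency
links: the dominant cost is the round trip itself, and a batch of N rows
pays it once instead of N times.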
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services