| From: | jian he <jian(dot)universality(at)gmail(dot)com> | 
|---|---|
| To: | Matheus Alcantara <matheusssilv97(at)gmail(dot)com> | 
| Cc: | Tomas Vondra <tomas(at)vondra(dot)me>, pgsql-hackers(at)postgresql(dot)org | 
| Subject: | Re: postgres_fdw: Use COPY to speed up batch inserts | 
| Date: | 2025-10-29 03:10:55 | 
| Message-ID: | CACJufxFXhAuFDfQS=N0nCaFFTi08__cfwh6E7LhDuuJiAEo=Ww@mail.gmail.com | 
| Lists: | pgsql-hackers | 
On Sat, Oct 25, 2025 at 2:27 AM Matheus Alcantara
<matheusssilv97(at)gmail(dot)com> wrote:
>
> On this new version I also added some regress tests on postgres_fdw.sql
>
In the CopyFrom function we have CopyInsertMethod; CIM_SINGLE is slower
than CIM_MULTI, I think.
We should do performance tests for the case where the COPY statement is
limited to CIM_SINGLE.
You can use a trigger to force COPY to use the CIM_SINGLE copy method,
for example:
create function dummy() returns trigger as $$ begin return new; end $$
language plpgsql;
create trigger dummy
    before insert or update on batch_table_3
    for each row execute procedure dummy();
My local tests show that when batch_size is greater than 2, COPY performs faster
than batch inserts into a foreign table, even though COPY can only use
CIM_SINGLE.
However, my tests were done with an assert-enabled build, since I
encountered issues compiling a release build.
Anyway, I am sharing my test script.
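For context, here is a rough sketch of this kind of foreign-table batch-insert
setup (server and foreign-table names are illustrative guesses, not necessarily
the ones used in the attached script):

create extension if not exists postgres_fdw;
create server loopback foreign data wrapper postgres_fdw
    options (dbname 'postgres', port '5432');
create user mapping for current_user server loopback;
-- remote table (the one the dummy trigger above is attached to)
create table batch_table_3 (a int, b text);
-- foreign table pointing at it, with batching enabled
create foreign table ftable_3 (a int, b text)
    server loopback
    options (table_name 'batch_table_3', batch_size '10');
-- compare timings of a multi-row insert routed through the foreign table
\timing on
insert into ftable_3 select i, i::text from generate_series(1, 100000) i;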
| Attachment | Content-Type | Size | 
|---|---|---|
| fdw_copy_test.sql | application/sql | 2.0 KB | 