From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: should INSERT SELECT use a BulkInsertState?
Date: 2020-10-22 12:29:53
Message-ID: CANP8+jKmvtaq8a1YkKsfXpWOc3keN2k7K84Su+y--WZO6D_jFQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, 16 Oct 2020 at 22:05, Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> > > I made this conditional on BEGIN BULK/SET bulk, so I'll solicit comments on that.
I think it would be better if this were self-tuning, so that we don't
allocate a BulkInsertState until we've inserted, say, 100 rows.
If there are other conditions under which this is non-optimal
(Andres?), we can also autodetect that and avoid them.
You should also use table_multi_insert(), since that will give further
performance gains by reducing block access overheads. Switching from
single-row to multi-row insertion should likewise only happen once
we've loaded a few rows, so we don't introduce overheads for smaller
SQL statements.
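The lazy-switch pattern suggested above can be sketched as follows. This is a minimal, self-contained illustration of the idea only: the struct, field names, and the 100-row threshold are assumptions for this sketch, not PostgreSQL internals; the comments indicate where real calls such as GetBulkInsertState() and table_multi_insert() would take over.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative threshold: how many rows a statement inserts before
 * we bother setting up bulk-insert machinery. The exact value (100?)
 * would need benchmarking, as discussed above. */
#define BULK_INSERT_THRESHOLD 100

/* Hypothetical per-statement state; names are not PostgreSQL's. */
typedef struct InsertState
{
    size_t ntuples;        /* rows inserted so far in this statement */
    bool   bulk_allocated; /* stands in for a lazily created BulkInsertState */
    bool   multi_insert;   /* stands in for switching to table_multi_insert() */
} InsertState;

/* Called once per row. Short INSERT ... SELECT statements never cross
 * the threshold, so they pay no setup cost; long-running ones switch
 * to the bulk path automatically. */
static void
insert_one_row(InsertState *s)
{
    s->ntuples++;

    if (!s->bulk_allocated && s->ntuples > BULK_INSERT_THRESHOLD)
    {
        s->bulk_allocated = true;   /* here: GetBulkInsertState() */
        s->multi_insert = true;     /* here: start buffering tuples for
                                     * table_multi_insert() */
    }

    /* ... perform the actual insert via the chosen path ... */
}
```

The same counter could also drive the autodetection mentioned above: if some condition makes the bulk path non-optimal, the switch simply never happens.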
--
Simon Riggs http://www.EnterpriseDB.com/