Re: [EXTERNAL] Re: Inserts and bad performance

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: "Godfrin, Philippe E" <Philippe(dot)Godfrin(at)nov(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: [EXTERNAL] Re: Inserts and bad performance
Date: 2021-11-25 01:13:26
Message-ID: CAApHDvrEXuo+xhQY2mUmiihXS_2gaj9+Jq+kyjBUyDkMeoLETQ@mail.gmail.com
Lists: pgsql-general

On Thu, 25 Nov 2021 at 08:59, Godfrin, Philippe E
<Philippe(dot)Godfrin(at)nov(dot)com> wrote:
> Hi Tom. Good point about the index paging out of the buffer. I did that and no change. I do have shared buffers at 40GB, so there's a good bit there, and I also did all the things on the page you referred to, except for using COPY. At this point the data has not been scrubbed, so I'm trapping data errors and duplicates. I am curious, though, as a sidebar, why COPY is considered faster than inserts. I was unable to get COPY above around 25K inserts a second (pretty fast anyway). Frankly, I was initially running 3 concurrent insert jobs and getting 90K ins/sec! But after a certain number of records, the speed just dropped off.

EXPLAIN (ANALYZE, BUFFERS) works with INSERTs. You just need to be
aware that using ANALYZE will perform the actual insert too. So you
might want to use BEGIN; and ROLLBACK; if it's not data that you want
to keep.
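As a minimal sketch (table and column names here are made up for illustration), that pattern looks like:

```sql
BEGIN;

EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO my_table (id, payload)
SELECT g, 'test data'
FROM generate_series(1, 100000) AS g;

-- Discard the rows the ANALYZE actually inserted
ROLLBACK;
```

The BUFFERS option shows shared-buffer hits, reads, and dirtied pages, which can reveal whether index maintenance is what slows the inserts down as the table grows.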

SET track_io_timing = on; might help you too.
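For example, enabling it just for the session before running the EXPLAIN:

```sql
-- Per-session; enabling it globally requires superuser
-- and adds timing overhead on some platforms
SET track_io_timing = on;
```

With it on, EXPLAIN (ANALYZE, BUFFERS) also reports I/O read/write times, so you can see how much of the insert time is spent waiting on disk.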

David
