Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
Date: 2008-03-10 12:16:27
Message-ID: 47D5269B.8060103@postnewspapers.com.au
Lists: pgsql-patches pgsql-performance

Heikki Linnakangas wrote:
> You must be having an exception handler block in that pl/pgsql
> function, which implicitly creates a new subtransaction on each
> invocation of the exception handler block, so you end up with hundreds
> of thousands of committed subtransactions.
I've just confirmed that that was indeed the issue: coding around the
BEGIN ... EXCEPTION block dramatically cuts the runtime of commands
executed after the big import function has run. A sketch of both shapes
follows below.
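
For the archives, here is a minimal sketch of the two patterns, using
hypothetical table and column names (staging, target, id, val) rather
than my actual import code. The slow version wraps each insert in an
exception handler, so every loop iteration starts its own
subtransaction:

    -- Slow: the EXCEPTION clause forces a subtransaction per iteration,
    -- so a large import leaves huge numbers of committed
    -- subtransactions behind until the top-level transaction commits.
    CREATE OR REPLACE FUNCTION import_rows() RETURNS void AS $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT id, val FROM staging LOOP
            BEGIN
                INSERT INTO target (id, val) VALUES (r.id, r.val);
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- skip duplicate rows
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

The workaround is to avoid the per-row handler entirely, for example by
testing for the conflict up front (fine for a single-writer bulk load,
though racy under concurrent writers):

    -- Faster: no per-row subtransaction is created.
    CREATE OR REPLACE FUNCTION import_rows() RETURNS void AS $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT id, val FROM staging LOOP
            IF NOT EXISTS (SELECT 1 FROM target WHERE target.id = r.id) THEN
                INSERT INTO target (id, val) VALUES (r.id, r.val);
            END IF;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;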

Thanks again!

--
Craig Ringer
