Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

From: "Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "Craig Ringer" <craig(at)postnewspapers(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
Date: 2008-03-11 06:24:42
Message-ID: 2e78013d0803102324i66fa5376rca1bc3d250bc8317@mail.gmail.com
Lists: pgsql-patches pgsql-performance

On Mon, Mar 10, 2008 at 4:31 PM, Heikki Linnakangas
<heikki(at)enterprisedb(dot)com> wrote:
> According
> to oprofile, all the time is spent in TransactionIdIsInProgress. I think
> it would be pretty straightforward to store the committed subtransaction
> ids in a sorted array, instead of a linked list, and binary search.
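
For illustration, a minimal sketch of that idea in C (hypothetical names, not
actual PostgreSQL source; plain "<" is used for brevity, though real code
would need a wraparound-aware XID comparison such as TransactionIdPrecedes()):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    typedef struct SubXidArray
    {
        TransactionId *xids;    /* committed subxids, in ascending order */
        int            nxids;   /* number of entries in xids[] */
    } SubXidArray;

    /* O(log n) membership test replacing the O(n) linked-list walk */
    static bool
    xid_in_sorted_array(const SubXidArray *a, TransactionId xid)
    {
        int     lo = 0;
        int     hi = a->nxids - 1;

        while (lo <= hi)
        {
            int     mid = lo + (hi - lo) / 2;

            if (a->xids[mid] == xid)
                return true;
            else if (a->xids[mid] < xid)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return false;
    }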

Assuming that in most cases there will be many committed and few aborted
subtransactions, how about storing the list of *aborted* subtransactions instead?
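
Something like this (again hypothetical, reusing the types and the binary
search from the sketch above):

    /*
     * Keep only the *aborted* subxids; if most subtransactions commit,
     * this array stays much shorter.  A subtransaction that has finished
     * is then taken as committed exactly when it is absent from the array.
     */
    static bool
    subxid_committed(const SubXidArray *aborted_subxids, TransactionId xid)
    {
        return !xid_in_sorted_array(aborted_subxids, xid);
    }

The binary search then runs over far fewer entries in the common case.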

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
