
Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

From: "Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "Craig Ringer" <craig(at)postnewspapers(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
Date: 2008-03-11 06:24:42
Message-ID: 2e78013d0803102324i66fa5376rca1bc3d250bc8317@mail.gmail.com
Lists: pgsql-patches, pgsql-performance
On Mon, Mar 10, 2008 at 4:31 PM, Heikki Linnakangas
<heikki(at)enterprisedb(dot)com> wrote:
> According
>  to oprofile, all the time is spent in TransactionIdIsInProgress. I think
>  it would be pretty straightforward to store the committed subtransaction
>  ids in a sorted array, instead of a linked list, and binary search.

Assuming that in most cases there will be many committed and few aborted
subtransactions, how about storing the list of *aborted* subtransactions instead?


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

