Re: Reusing Dead Tuples:

From: Janardhan <jana-reddy(at)mediaring(dot)com(dot)sg>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Hackers Mailing List <pgsql-hackers(at)postgresql(dot)org>, Janardhan <jana-reddy(at)mediaring(dot)com(dot)sg>
Subject: Re: Reusing Dead Tuples:
Date: 2002-12-18 08:23:15
Message-ID: 3E003073.7060608@mediaring.com.sg
Lists: pgsql-hackers

Tom Lane wrote:

>Janardhan <jana-reddy(at)mediaring(dot)com(dot)sg> writes:
>
>
>>Does it break any other things if all the index entries pointing to the
>>dead tuple are removed before reusing the dead tuple?
>>
>>
>
>Possibly you could make that work, but I think you'll find the
>efficiency advantage you were chasing to be totally gone. The locking
>scheme is heavily biased against you, and the index AMs don't offer an
>API designed for efficient retail index-tuple deletion.
>
>Of course that just says that you're swimming against the tide of
>previous optimization efforts. But the thing you need to face up to
>is you are taking what had been background maintenance tasks (viz,
>VACUUM) and moving them into the foreground critical path. This *will*
>slow down your foreground applications.
>
> regards, tom lane
>
>
>
Today I was able to complete the patch, and it is working. I have added a new
method, am_delete, to the access method API and bt_delete to the B-tree code to
delete a single index entry. For the time being this works only with b-tree
indexes.
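
To make the idea concrete, here is a small self-contained sketch of the shape
of the change: an extra retail-delete entry point next to the insert entry
point. The struct and types are toy stand-ins for illustration only, not the
real backend declarations or the actual patch.

/* Illustrative sketch only -- simplified stand-in types, not PostgreSQL's
 * real index AM structures. */
#include <stdio.h>
#include <stdbool.h>

typedef struct ItemPointer { unsigned block; unsigned offset; } ItemPointer;

/* A per-access-method table of entry points; the new member is am_delete,
 * which removes the single index entry pointing at one heap tuple. */
typedef struct IndexAm {
    bool (*am_insert)(int key, ItemPointer heap_tid);
    bool (*am_delete)(int key, ItemPointer heap_tid);   /* new entry point */
} IndexAm;

/* Toy b-tree implementations of the two entry points. */
static bool bt_insert(int key, ItemPointer tid)
{
    printf("bt_insert: key=%d -> (%u,%u)\n", key, tid.block, tid.offset);
    return true;
}

static bool bt_delete(int key, ItemPointer tid)
{
    /* Descend to the leaf the same way an insert would, then remove the
     * one entry whose heap TID matches (details omitted in this sketch). */
    printf("bt_delete: key=%d -> (%u,%u)\n", key, tid.block, tid.offset);
    return true;
}

static const IndexAm btree_am = { bt_insert, bt_delete };

int main(void)
{
    ItemPointer tid = { 42, 7 };
    btree_am.am_insert(10, tid);
    btree_am.am_delete(10, tid);    /* retail deletion of the dead entry */
    return 0;
}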

Regarding the complexity of deleting a tuple from a b-tree, it is the same as
or less than that of inserting a tuple into a b-tree (since a delete does not
require splitting the page). The approach is slightly different from that of
lazy vacuum: lazy vacuum scans the entire index to delete the dead entries,
whereas here we search for the particular entry, similar to an insert. Locking
may not have much impact either, since only a single buffer is locked to delete
the index entry.
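
The difference in search pattern can be illustrated with a toy sketch (again
just stand-in data structures, not backend code): lazy vacuum walks every entry
of the index looking for dead TIDs, while the retail delete descends to the
right position exactly as an insert would and removes the one matching entry.

/* Toy "leaf page": a sorted array of (key, heap_tid) pairs.  Not real
 * PostgreSQL code; it only illustrates the search pattern. */
#include <stdio.h>

typedef struct { int key; int heap_tid; } Entry;

static Entry leaf[] = { {1, 100}, {3, 101}, {3, 102}, {7, 103}, {9, 104} };
static int nentries = 5;

/* Retail delete: binary-search to the first entry >= key (the same descent
 * an insert performs), then scan the few equal keys for the matching TID
 * and shift the rest down -- a log(n) search plus local work on one page. */
static void retail_delete(int key, int heap_tid)
{
    int lo = 0, hi = nentries;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (leaf[mid].key < key) lo = mid + 1; else hi = mid;
    }
    for (int i = lo; i < nentries && leaf[i].key == key; i++) {
        if (leaf[i].heap_tid == heap_tid) {
            for (int j = i; j < nentries - 1; j++) leaf[j] = leaf[j + 1];
            nentries--;
            return;
        }
    }
}

int main(void)
{
    retail_delete(3, 102);       /* remove the one dead entry */
    for (int i = 0; i < nentries; i++)
        printf("(%d,%d) ", leaf[i].key, leaf[i].heap_tid);
    printf("\n");                /* prints: (1,100) (3,101) (7,103) (9,104) */
    return 0;
}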

Regarding efficiency, if the entire index is already in the buffer cache then
no additional I/O is required; only extra CPU is needed to delete the entries
from the index.

I am using Postgres in an application where there are heavy updates to a group
of small tables before a single record is inserted into a huge table, and the
whole thing constitutes a single transaction. Currently, as time goes on, the
transaction processing speed decreases until the database is vacuumed.

With this new patch I am hoping the transaction processing time will remain
constant over time. I will only need to vacuum after deleting a large number of
entries from some of the tables.

regards, jana
