Re: 8.3.0 Core with concurrent vacuum fulls

From: "Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>, "Gavin M(dot) Roy" <gmr(at)myyearbook(dot)com>, "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: 8.3.0 Core with concurrent vacuum fulls
Date: 2008-03-06 09:07:01
Message-ID: 2e78013d0803060107s2bb4ee40j6a971f61285077b6@mail.gmail.com
Lists: pgsql-hackers

On Wed, Mar 5, 2008 at 9:29 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>
> [ thinks some more... ] I guess we could use a flag array dimensioned
> MaxHeapTuplesPerPage to mark already-processed tuples, so that you
> wouldn't need to search the existing arrays but just index into the flag
> array with the tuple's offsetnumber.
>

We can actually combine this with the page-copying idea. Instead of copying
the entire page, we can copy just the line pointer array and work on the copy.
ISTM that the only place where we change the tuple contents is when we
collapse the redirected line pointers, and that can be done at the end, on the
original page.
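
Something like this is what I have in mind for the first pass (PruneScratch
and prune_scratch_init are made-up names for illustration; MaxHeapTuplesPerPage,
ItemIdData and the bufpage.h macros are the existing ones):

    #include "postgres.h"
    #include "access/htup.h"
    #include "storage/bufpage.h"

    typedef struct PruneScratch
    {
        /* private copy of the page's line pointer array */
        ItemIdData   lp_copy[MaxHeapTuplesPerPage];
        /* already-processed flag, indexed directly by OffsetNumber */
        bool         marked[MaxHeapTuplesPerPage + 1];
    } PruneScratch;

    static void
    prune_scratch_init(Page page, PruneScratch *ps)
    {
        OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
        OffsetNumber off;

        memset(ps->marked, 0, sizeof(ps->marked));

        /*
         * Work only on the copy during the first pass; the original page
         * is not touched until the final WAL-logged step.
         */
        for (off = FirstOffsetNumber; off <= maxoff; off++)
            ps->lp_copy[off - 1] = *PageGetItemId(page, off);
    }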

The last step, which we run inside a critical section, would then be just like
invoking heap_xlog_clean with the information collected in the first pass.
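
So the code run inside the critical section could be little more than this
(prune_apply and the array layout are only for illustration; the ItemIdSet*
macros and PageRepairFragmentation are the existing ones):

    static void
    prune_apply(Page page,
                OffsetNumber *redirected, int nredirected,  /* from/to pairs */
                OffsetNumber *nowdead, int ndead,
                OffsetNumber *nowunused, int nunused)
    {
        int     i;

        /*
         * Everything below is cheap and cannot fail, so it can safely run
         * inside the critical section, mirroring what redo would do.
         */
        for (i = 0; i < nredirected; i++)
            ItemIdSetRedirect(PageGetItemId(page, redirected[2 * i]),
                              redirected[2 * i + 1]);

        for (i = 0; i < ndead; i++)
            ItemIdSetDead(PageGetItemId(page, nowdead[i]));

        for (i = 0; i < nunused; i++)
            ItemIdSetUnused(PageGetItemId(page, nowunused[i]));

        /* collapse unused line pointers and defragment the tuple space */
        PageRepairFragmentation(page);
    }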

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
