Re: 8.3.0 Core with concurrent vacuum fulls

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com>
Cc: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>, "Gavin M(dot) Roy" <gmr(at)myyearbook(dot)com>, "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: 8.3.0 Core with concurrent vacuum fulls
Date: 2008-03-06 18:00:25
Message-ID: 18945.1204826425@sss.pgh.pa.us
Lists: pgsql-hackers

"Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com> writes:
> On Wed, Mar 5, 2008 at 9:29 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> [ thinks some more... ] I guess we could use a flag array dimensioned
>> MaxHeapTuplesPerPage to mark already-processed tuples, so that you
>> wouldn't need to search the existing arrays but just index into the flag
>> array with the tuple's offsetnumber.

> We can actually combine this and the page copying ideas. Instead of copying
> the entire page, we can just copy the line pointers array and work on the copy.
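
For illustration, the flag-array idea quoted above amounts to something
like the following (the names, sizes, and struct layout here are just a
sketch of mine, not necessarily what the attached patch ends up with):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/*
 * Stand-in for MaxHeapTuplesPerPage; the real macro is derived from
 * BLCKSZ and the heap tuple header size.
 */
#define SKETCH_MAX_HEAP_TUPLES_PER_PAGE 291

typedef uint16_t SketchOffsetNumber;    /* stand-in for OffsetNumber */

/*
 * Per-page pruning work state.  The offset arrays accumulate the changes
 * to apply later; marked[] lets chain-following code answer "did we
 * already deal with this line pointer?" in O(1) by indexing with the
 * tuple's offset number, instead of searching the three arrays.
 */
typedef struct SketchPruneState
{
    int         nredirected;
    int         ndead;
    int         nunused;
    SketchOffsetNumber redirected[SKETCH_MAX_HEAP_TUPLES_PER_PAGE * 2];  /* (from, to) pairs */
    SketchOffsetNumber nowdead[SKETCH_MAX_HEAP_TUPLES_PER_PAGE];
    SketchOffsetNumber nowunused[SKETCH_MAX_HEAP_TUPLES_PER_PAGE];
    bool        marked[SKETCH_MAX_HEAP_TUPLES_PER_PAGE + 1];  /* indexed by offset number */
} SketchPruneState;

static void
sketch_prune_init(SketchPruneState *prstate)
{
    prstate->nredirected = prstate->ndead = prstate->nunused = 0;
    memset(prstate->marked, 0, sizeof(prstate->marked));
}

/* Record an item as now-dead, exactly once, and flag it as processed. */
static void
sketch_record_dead(SketchPruneState *prstate, SketchOffsetNumber offnum)
{
    if (prstate->marked[offnum])
        return;                 /* already handled on this page */
    prstate->nowdead[prstate->ndead++] = offnum;
    prstate->marked[offnum] = true;
}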

I think that just makes things more complex and fragile. I like
Heikki's idea, in part because it makes the normal path and the WAL
recovery path guaranteed to work alike. I'll attach my work-in-progress
patch for this --- it doesn't do anything about the invalidation
semantics problem but it does fix the critical-section-too-big problem.

regards, tom lane

Attachment Content-Type Size
heap_prune_refactor.patch.gz application/octet-stream 6.5 KB
