Re: Piggybacking vacuum I/O

From: "Pavan Deolasee" <pavan(dot)deolasee(at)gmail(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Piggybacking vacuum I/O
Date: 2007-01-25 11:17:01
Message-ID: 2e78013d0701250317u77c15dfdkcf991a84e30b238d@mail.gmail.com
Lists: pgsql-hackers

On 1/25/07, Heikki Linnakangas <heikki(at)enterprisedb(dot)com> wrote:
>
> Pavan Deolasee wrote:
> >
> > Also, is it worth optimizing on the total read() system calls, which
> > might not cause physical I/O but still consume CPU?
>
> I don't think it's worth it, but now that we're talking about it: What
> I'd like to do to all the slru files is to replace the custom buffer
> management with mmapping the whole file, and letting the OS take care of
> it. We would get rid of some guc variables, the OS would tune the amount
> of memory used for clog/subtrans dynamically, and we would avoid the
> memory copying. And I'd like to do the same for WAL.

Yes, we can do that. One problem, though, is that mmapping wouldn't work when
the CLOG file is extended: some of the backends may not see the extended
portion. But maybe we can start with a sufficiently large, initialized file
and mmap the whole file.
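
To illustrate the "pre-extend and mmap" idea, here is a rough sketch using
plain POSIX calls (not the actual slru.c interfaces; the size constant and
function name are made up): the file is grown to its maximum expected size up
front with ftruncate(), so every backend maps the same length and no remap
is ever needed when the logical contents grow.

/* Hypothetical sketch, not actual PostgreSQL code. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define CLOG_MAP_SIZE (256 * 1024 * 1024)   /* assumed upper bound */

static char *
map_status_file(const char *path)
{
    int   fd = open(path, O_RDWR | O_CREAT, 0600);
    char *base;

    if (fd < 0)
        return NULL;

    /* Extend the file up front so every backend maps the same length. */
    if (ftruncate(fd, CLOG_MAP_SIZE) < 0)
    {
        close(fd);
        return NULL;
    }

    base = mmap(NULL, CLOG_MAP_SIZE, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
    close(fd);          /* the mapping stays valid after close() */

    return (base == MAP_FAILED) ? NULL : base;
}

The obvious cost is reserving the space (or relying on sparse-file behaviour)
for a mostly empty file, but it sidesteps the extension-visibility problem
entirely.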

Another, simpler solution for VACUUM would be to read the entire CLOG file
into local memory. Most of the transaction status queries can then be
satisfied from this local copy, and the normal CLOG is consulted only when
the status is unknown (TRANSACTION_STATUS_IN_PROGRESS).
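
Something along these lines, as a sketch only (all names below are invented
for illustration, including the shared-clog fallback call; the real lookup
path is of course different): the whole clog is read once at the start of
VACUUM, and the shared clog is hit only for xids whose snapshot status still
reads "in progress", since that status may have changed after the copy was
taken.

/* Hypothetical sketch of the local-snapshot lookup. CLOG keeps 2 status
 * bits per transaction, i.e. 4 transactions per byte. */
#define LOCAL_CLOG_BITS_PER_XACT   2
#define LOCAL_CLOG_XACTS_PER_BYTE  4
#define LOCAL_STATUS_IN_PROGRESS   0x00

static unsigned char *local_clog;   /* whole file read at VACUUM start */

static int
local_transaction_status(unsigned int xid)
{
    unsigned int byteno = xid / LOCAL_CLOG_XACTS_PER_BYTE;
    unsigned int shift  = (xid % LOCAL_CLOG_XACTS_PER_BYTE)
                          * LOCAL_CLOG_BITS_PER_XACT;
    int          status = (local_clog[byteno] >> shift) & 0x03;

    if (status == LOCAL_STATUS_IN_PROGRESS)
    {
        /* The xact was still running when the snapshot was taken:
         * fall back to the normal, shared CLOG path (invented name). */
        return SharedClogGetStatus(xid);
    }
    return status;
}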

Thanks,
Pavan

--

EnterpriseDB http://www.enterprisedb.com
