Re: bg worker: patch 1 of 6 - permanent process

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Markus Wanner <markus(at)bluegap(dot)ch>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, PostgreSQL-development Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: bg worker: patch 1 of 6 - permanent process
Date: 2010-09-14 18:41:55
Message-ID: AANLkTim4guTCun73_OMsEZhDo7m_u_SkYnuDt-OSunhx@mail.gmail.com
Lists: pgsql-hackers

On Tue, Sep 14, 2010 at 2:26 PM, Markus Wanner <markus(at)bluegap(dot)ch> wrote:
> On 09/14/2010 08:06 PM, Robert Haas wrote:
>> One idea I had was to have autovacuum workers stick around for a
>> period of time after finishing their work.  When we need to autovacuum
>> a database, first check whether there's an existing worker that we can
>> use, and if so use him.  If not, start a new one.  If that puts us
>> over the max number of workers, kill off the one that's been waiting
>> the longest.  But workers will exit anyway if not reused after a
>> certain period of time.
>
> That's pretty close to how bgworkers are implemented, now. Except for the
> need to terminate after a certain period of time. What is that intended to
> be good for?

To avoid consuming system resources forever if they're not being used.
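
The pooling policy sketched in the quoted paragraph (reuse an idle worker already attached to the target database, otherwise start a new one; if that exceeds the cap, kill off the worker that has been idle longest; expire workers that sit idle past a timeout so an unused pool doesn't hold resources forever) could be modeled roughly as follows. This is only an illustrative sketch of the launcher-side bookkeeping, not code from any actual patch; all names and parameters are hypothetical:

```python
import time


class WorkerPool:
    """Hypothetical sketch of the launcher-side bookkeeping discussed
    above: reuse idle per-database workers, cap the total pool size,
    evict the longest-idle worker when the cap is hit, and expire
    workers that stay idle past a timeout."""

    def __init__(self, max_workers=3, idle_timeout=60.0):
        self.max_workers = max_workers
        self.idle_timeout = idle_timeout
        self.busy = {}   # worker id -> database it is working on
        self.idle = {}   # worker id -> (database, time it became idle)
        self._next_id = 0

    def _start_worker(self):
        # Stand-in for forking a real background worker process.
        self._next_id += 1
        return self._next_id

    def assign(self, dbname, now=None):
        """Find a worker for dbname, preferring reuse."""
        now = time.monotonic() if now is None else now
        # Reuse an idle worker already connected to this database,
        # since it keeps its primed caches.
        wid = next((w for w, (db, _) in self.idle.items()
                    if db == dbname), None)
        if wid is not None:
            del self.idle[wid]
            self.busy[wid] = dbname
            return wid
        # Otherwise start a fresh worker; if that would put us over
        # the cap, kill off the worker that has been idle longest.
        if len(self.busy) + len(self.idle) >= self.max_workers:
            if not self.idle:
                return None  # everyone is busy; caller must wait
            oldest = min(self.idle, key=lambda w: self.idle[w][1])
            del self.idle[oldest]
        wid = self._start_worker()
        self.busy[wid] = dbname
        return wid

    def finish(self, wid, now=None):
        """Worker finished its job; keep it idle for possible reuse."""
        now = time.monotonic() if now is None else now
        db = self.busy.pop(wid)
        self.idle[wid] = (db, now)

    def reap_idle(self, now=None):
        """Terminate workers idle longer than the timeout, so the pool
        does not consume system resources when there is no work."""
        now = time.monotonic() if now is None else now
        expired = [w for w, (_, t) in self.idle.items()
                   if now - t > self.idle_timeout]
        for w in expired:
            del self.idle[w]
        return expired
```

Whether the cap should be global or per database is exactly the design question raised further down in the thread; this sketch uses a single global `max_workers` with no minimum.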

> Especially considering that the avlauncher/coordinator knows the current
> amount of work (number of jobs) per database.
>
>> The idea here would be to try to avoid all the backend startup costs:
>> process creation, priming the caches, etc.  But I'm not really sure
>> it's worth the effort.  I think we need to look for ways to further
>> reduce the overhead of vacuuming, but this doesn't necessarily seem
>> like the thing that would have the most bang for the buck.
>
> Well, the pressure has simply been bigger for Postgres-R.
>
> It should be possible to do benchmarks using Postgres-R and compare against
> a max_idle_background_workers = 0 configuration that leads to termination
> and re-connecting for every remote transaction to be applied.

Well, presumably that would be fairly disastrous. I would think,
though, that you would not have a min/max number of workers PER
DATABASE, but an overall limit on the upper size of the total pool - I
can't see any reason to limit the minimum size of the pool, but I
might be missing something.

> However, that's
> not going to say anything about whether or not it's worth it for autovacuum.

Personally, my position is that if someone does something that is only
a small improvement on its own but which has the potential to help
with other things later, that's a perfectly legitimate patch and we
should try to accept it. But if it's not a clear (even if small) win
then the bar is a lot higher, at least in my book.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
