Re: bg worker: patch 1 of 6 - permanent process

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Markus Wanner <markus(at)bluegap(dot)ch>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, PostgreSQL-development Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: bg worker: patch 1 of 6 - permanent process
Date: 2010-09-15 17:23:30
Message-ID: AANLkTik1io6YCAaBT5f+5PMUCRsXB2KfwTY8wNZSWg_v@mail.gmail.com
Lists: pgsql-hackers

On Wed, Sep 15, 2010 at 2:48 AM, Markus Wanner <markus(at)bluegap(dot)ch> wrote:
>> Hmm.  So what happens if you have 1000 databases with a minimum of 1
>> worker per database and an overall limit of 10 workers?
>
> The first 10 databases would get an idle worker. As soon as real jobs
> arrive, the idle workers on databases that don't have any pending jobs get
> terminated in favor of the databases for which there are pending jobs.
> Admittedly, that mechanism isn't too clever, yet. I.e. if there always are
> enough jobs for one database, the others could starve.

I haven't scrutinized your code but it seems like the
minimum-per-database might be complicating things more than necessary.
You might find that you can make the logic simpler without that. I
might be wrong, though.
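
As a rough illustration of the policy Markus describes (not actual PostgreSQL code): a global worker cap, idle workers parked per database, and idle workers on quiet databases reclaimed when another database has pending jobs. All names here are hypothetical, and this sketch also shows where the starvation Markus mentions creeps in:

```python
# Hypothetical sketch of the bgworker pool policy discussed above:
# a global cap on workers, idle workers kept connected per database,
# and idle workers on other databases terminated when jobs arrive
# elsewhere. Not PostgreSQL code; all names are invented.

class WorkerPool:
    def __init__(self, max_workers):
        self.max_workers = max_workers
        self.idle = {}   # database -> number of idle (connected) workers
        self.busy = {}   # database -> number of busy workers

    def total(self):
        return sum(self.idle.values()) + sum(self.busy.values())

    def attach_idle(self, db):
        """Park an idle worker on db if the global cap allows it."""
        if self.total() < self.max_workers:
            self.idle[db] = self.idle.get(db, 0) + 1
            return True
        return False  # cap reached: later databases get no idle worker

    def dispatch(self, db):
        """Assign a worker to a pending job for db.

        Prefer an idle worker already connected to db; otherwise
        terminate an idle worker on some other database. This
        reclamation step is what can starve quiet databases when one
        database always has pending jobs.
        """
        if self.idle.get(db, 0) > 0:
            self.idle[db] -= 1
        elif self.total() >= self.max_workers:
            victim = next(
                (d for d, n in self.idle.items() if n > 0 and d != db), None
            )
            if victim is None:
                return False  # all workers busy; the job must wait
            self.idle[victim] -= 1  # terminate an idle worker elsewhere
        self.busy[db] = self.busy.get(db, 0) + 1
        return True
```

With 1000 databases and a cap of 10, only the first 10 `attach_idle` calls succeed; a job arriving for database 999 then evicts one of those idle workers, which is the trade-off under discussion.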

I guess the real issue here is whether it's possible to, and whether
you're interested in, extracting a committable subset of this work,
and if so what that subset should look like. There's sort of a
chicken-and-egg problem with large patches; if you present them as one
giant monolithic patch, they're too large to review. But if you break
them down into smaller patches, it doesn't really fix the problem
unless the pieces have independent value. Even in the two years I've
been involved in the project, a number of different contributors have
gone through the experience of submitting a patch that only made sense
if you assumed that the follow-on patch was also going to get
accepted, and as no one was willing to assume that, the first patch
didn't get committed either. Where people have been able to break
things down into a series of small to medium-sized incremental
improvements, things have gone more smoothly. For example, Simon was
able to get a patch to start the bgwriter during archive recovery
committed to 8.4. That didn't have a lot of independent value, but it
had some, and it paved the way for Hot Standby in 9.0. Had someone
thought of a way to decompose that project into more than two truly
independent pieces, I suspect it might have even gone more smoothly
(although of course that's an arguable point and YMMV).

>> I still think maybe we ought to try
>> to crack the nut of allowing backends to rebind to a different
>> database.  That would simplify things here a good deal, although then
>> again maybe it's too complex to be worth it.
>
> Also note that it would re-introduce some of the costs we try to avoid with
> keeping the connected bgworker around.

How?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
