Re: [v9.3] Extra Daemons (Re: elegant and effective way for running jobs inside a database)

From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, "'Boszormenyi Zoltan'" <zb(at)cybertec(dot)at>, "'Jaime Casanova'" <jaime(at)2ndquadrant(dot)com>, "'Kohei KaiGai'" <kaigai(at)kaigai(dot)gr(dot)jp>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, "'David E(dot) Wheeler'" <david(at)justatheory(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, 'Hans-Jürgen Schönig' <hs(at)cybertec(dot)at>
Subject: Re: [v9.3] Extra Daemons (Re: elegant and effective way for running jobs inside a database)
Date: 2012-09-22 04:14:40
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C38285339D7@szxeml509-mbs
Lists: pgsql-hackers

On Friday, September 21, 2012 6:50 PM Alvaro Herrera wrote:
> Excerpts from Amit Kapila's message of Fri Sep 21 02:26:49 -0300 2012:
> > On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:
> > > Well, there is a difficulty here which is that the number of processes
> > > connected to databases must be configured during postmaster start
> > > (because it determines the size of certain shared memory structs). So
> > > you cannot just spawn more tasks if all max_worker_tasks are busy.
> > > (This is a problem only for those workers that want to be connected as
> > > backends. Those that want libpq connections do not need this and are
> > > easier to handle.)
> >
> > If not for the above, then where is the need for dynamic worker tasks, as mentioned by Simon?

> Well, I think there are many uses for dynamic workers, or short-lived
> workers (start, do one thing, stop and not be restarted).

> In my design, a worker is always restarted if it stops; otherwise there
> is no principled way to know whether it should be running or not (after
> a crash, should we restart a registered worker? We don't know whether
> it stopped before the crash.) So it seems to me that at least for this
> first shot we should consider workers as processes that are going to be
> always running as long as postmaster is alive. On a crash, if they have
> a backend connection, they are stopped and then restarted.
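If I have understood the fixed-slot part of this design correctly, the slots would have to live in shared memory and therefore be counted into the shared memory size at postmaster start, roughly as in the sketch below. (WorkerSlot, max_worker_tasks and WorkerShmemInit are only names I made up to make the question concrete; they are not from any patch.)

/*
 * Illustration only: a fixed array of worker slots, sized once when the
 * postmaster sets up shared memory, which is why the number of workers
 * cannot grow after startup.
 */
#include "postgres.h"
#include "storage/shmem.h"

typedef struct WorkerSlot
{
	pid_t		pid;		/* 0 when the slot is free */
	Oid			dboid;		/* database the worker is attached to */
	bool		in_use;
} WorkerSlot;

/* In a real patch this would be a GUC, read before shared memory is created. */
static int	max_worker_tasks = 8;

static WorkerSlot *WorkerSlots;

static void
WorkerShmemInit(void)
{
	bool		found;
	Size		size;

	/* The array size is fixed here, for the whole life of the postmaster. */
	size = mul_size(max_worker_tasks, sizeof(WorkerSlot));

	WorkerSlots = (WorkerSlot *) ShmemInitStruct("Worker Slots", size, &found);
	if (!found)
		MemSet(WorkerSlots, 0, size);
}

If that is right, it would also explain why the libpq-connected workers are easier to handle: they would not need a slot in this array at all.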

a. Is there a chance that a worker could leave shared memory inconsistent after a crash, for example by holding a lock on some structure and crashing before releasing it?
If that is the case, do we need to reinitialize shared memory as well when the worker is restarted?

b. Will these worker tasks be able to take on new jobs, or will they only perform the jobs they were started with?
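To make (b) concrete, I am imagining something like the loop below, where a worker picks up whatever job has been queued for it rather than being tied to the single task it was registered with. (JobRequest, GetNextJobFromSharedQueue and ExecuteJob are invented names, purely to illustrate the question.)

/* Hypothetical worker body, only to illustrate question (b). */
#include "postgres.h"
#include "storage/latch.h"
#include "storage/proc.h"

/* Invented queue API; nothing like this is claimed to exist in the patch. */
typedef struct JobRequest JobRequest;
extern JobRequest *GetNextJobFromSharedQueue(void);
extern void ExecuteJob(JobRequest *job);

static void
worker_main_loop(void)
{
	for (;;)
	{
		JobRequest *job;

		/* Take whatever work has been queued, not just one fixed task. */
		while ((job = GetNextJobFromSharedQueue()) != NULL)
			ExecuteJob(job);

		/* Sleep until somebody sets our latch or a second has passed. */
		WaitLatch(&MyProc->procLatch,
				  WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
				  1000L);
		ResetLatch(&MyProc->procLatch);
	}
}

The alternative reading is that the loop body is fixed at registration time and the worker only ever runs that one function.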

With Regards,
Amit Kapila.
