From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Unportable implementation of background worker start
Date: 2017-04-21 17:49:27
Message-ID: 20625.1492796967@sss.pgh.pa.us
Lists: pgsql-hackers
Andres Freund <andres(at)anarazel(dot)de> writes:
> On 2017-04-21 12:50:04 -0400, Tom Lane wrote:
>> Attached is a lightly-tested draft patch that converts the postmaster to
>> use a WaitEventSet for waiting in ServerLoop. I've got mixed emotions
>> about whether this is the direction to proceed, though. It adds at least
>> a couple of kernel calls per postmaster signal delivery, and probably to
>> every postmaster connection acceptance (ServerLoop iteration), to fix
>> problems that are so corner-casey that we've never even known we had them
>> till now.
> I'm not concerned much about the signal delivery paths, and I can't
> quite imagine that another syscall in the accept path is going to be
> measurable - worth ensuring though.
> ...
> On the other hand most types of our processes do SetLatch() in just
> about all the relevant signal handlers anyway, so they're pretty close
> to that behaviour already.
True. Maybe I'm being too worried.
>> * If we are in PM_WAIT_DEAD_END state, then we don't want to accept
>> - * any new connections, so we don't call select(), and just sleep.
>> + * any new connections, so we don't call WaitEventSetWait(), and just
>> + * sleep. XXX not ideal
>> */
> Couldn't we just deactivate the sockets in the set instead?
Yeah, I think it'd be better to do something like that. The pg_usleep
call has the same issue of possibly not responding to interrupts. The
risks are a lot less, since it's a much shorter wait, but I would rather
eliminate the separate code path in favor of doing it honestly. Didn't
seem like something to fuss over in the first draft though.
>> + ServerWaitSet = CreateWaitEventSet(PostmasterContext, MAXLISTEN + 1);
> Why are we using MAXLISTEN, rather than the actual number of things to
> listen to?
It'd take more code (ie, an additional scan of the array) to predetermine
that. I figured the space-per-item in the WaitEventSet wasn't enough to
worry about ... do you think differently?
> Random note: Do we actually have any code that errors out if too many
> sockets are being listened to?
Yes, see StreamServerPort, about line 400.
> I kind of would like, in master, take a chance of replace all the work
> done in signal handlers, by just a SetLatch(), and do it outside of
> signal handlers instead. Forking from signal handlers is just plain
> weird.
Yeah, maybe it's time. But in v11, and not for back-patch.
>> +#ifndef FD_CLOEXEC
>> +#define FD_CLOEXEC 1
>> +#endif
> Hm? Are we sure this is portable? Are there really cases that have
> F_SETFD, but not FD_CLOEXEC?
Copied-and-pasted from our only existing use of FD_CLOEXEC, in libpq.
Might well be obsolete but I see no particular reason not to do it.
regards, tom lane