From: | Andres Freund <andres(at)anarazel(dot)de> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Васильев Дмитрий <d(dot)vasilyev(at)postgrespro(dot)ru>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Performance degradation in commit ac1d794 |
Date: | 2016-03-18 08:04:28 |
Message-ID: | 20160318080428.meaaq237dcee4eip@alap3.anarazel.de |
Lists: | pgsql-hackers |
Hi,
On 2016-03-17 09:01:36 -0400, Robert Haas wrote:
> 0001: Looking at this again, I'm no longer sure this is a bug.
> Doesn't your patch just check the same conditions in the opposite
> order?
Which is important, because what ends up in pfds[x] depends on
wakeEvents. I've folded it into a later patch; it's not harmful as long
as we're only ever testing pfds[0].
> 0003: Mostly boring. But the change to win32_latch.c seems to remove
> an unrelated check.
Argh.
> 0004:
>
> + * drain it everytime WaitLatchOrSocket() is used. Should the
> + * pipe-buffer fill up in some scenarios - widly unlikely - we're
>
> every time
> wildly
>
> Why is it wildly (or widly) unlikely?
Because SetLatch (if called by the latch's owner) checks latch->is_set
before adding a byte to the pipe, and latch_sigusr1_handler() only
writes to the pipe if the current process is inside WaitLatchOrSocket's
loop (via the "waiting" check). I've expanded the comment.
>
> + * Check again wether latch is set, the arrival of a signal/self-byte
>
> whether. Also not clearly related to the patch's main purpose.
After the change there's no need to re-compute the current timestamp
anymore; that does seem beneficial, and is at least somewhat related.
> /* at least one event occurred, so check masks */
> + if (FD_ISSET(selfpipe_readfd, &input_mask))
> + {
> + /* There's data in the self-pipe, clear it. */
> + drainSelfPipe();
> + }
>
> The comment just preceding this added hunk now seems to be out of
> place, and maybe inaccurate as well.
Hm. Which comment exactly are you referring to?
/* at least one event occurred, so check masks */
seems not to fit the bill?
> I think the new code could have
> a bit more detailed comment. My understanding is something like /*
> Since we didn't clear the self-pipe before attempting to wait,
> select() may have returned immediately even though there has been no
> recent change to the state of the latch. To prevent busy-looping, we
> must clear the pipe before attempting to wait again. */
Isn't that explained at the top, in
/*
* Check if the latch is set already. If so, leave loop immediately,
* avoid blocking again. We don't attempt to report any other events
* that might also be satisfied.
*
* If someone sets the latch between this and the poll()/select()
* below, the setter will write a byte to the pipe (or signal us and
* the signal handler will do that), and the poll()/select() will
* return immediately.
*
* If there's a pending byte in the self pipe, we'll notice whenever
* blocking. Only clearing the pipe in that case avoids having to
* drain it every time WaitLatchOrSocket() is used. Should the
* pipe-buffer fill up in some scenarios - wildly unlikely - we're
* still ok, because the pipe is in nonblocking mode.
?
I've updated the last paragraph to
* If there's a pending byte in the self pipe, we'll notice whenever
* blocking. Only clearing the pipe in that case avoids having to
* drain it every time WaitLatchOrSocket() is used. Should the
* pipe-buffer fill up we're still ok, because the pipe is in
* nonblocking mode. It's unlikely for that to happen, because the
* self pipe isn't filled unless we're blocking (waiting = true), or
* from inside a signal handler in latch_sigusr1_handler().
I've also applied the same optimization to Windows. Less because I found
that interesting in itself, and more because it makes the WaitEventSet
work easier.
Attached is a significantly revised version of the earlier series. Most
importantly I have:
* Unified the windows/unix latch implementations into one file (0004)
* Provided a select(2) implementation for the WaitEventSet API
* Provided a windows implementation for the WaitEventSet API
* Reduced duplication between the implementations a good bit by
splitting WaitEventSetWait into WaitEventSetWait and
WaitEventSetWaitBlock. Only the latter is implemented separately for
each readiness primitive
* Added a backward-compatibility implementation of WaitLatchOrSocket
  on top of the WaitEventSet infrastructure. Less because I thought that
  terribly important, and more because it makes the patch a *lot*
  smaller; we've collected a fair number of latch users.
This is still not fully ready. The main remaining items are testing (the
Windows parts I've only verified by cross-compiling with mingw) and
documentation.
I'd greatly appreciate a look.
Amit, you offered testing on Windows; could you check whether 3/4/5
work? It's quite likely that I've screwed something up.
Robert, you'd mentioned on IM that you have a use-case for this
somewhere around multiple FDWs. If somebody has started working on that,
could you ask that person to check whether the API makes sense?
Greetings,
Andres Freund
Attachment | Content-Type | Size |
---|---|---|
0001-Make-it-easier-to-choose-the-used-waiting-primitive-.patch | text/x-patch | 4.2 KB |
0002-Error-out-if-waiting-on-socket-readiness-without-a-s.patch | text/x-patch | 2.4 KB |
0003-Only-clear-latch-self-pipe-event-if-there-is-a-pendi.patch | text/x-patch | 7.7 KB |
0004-Combine-win32-and-unix-latch-implementations.patch | text/x-patch | 29.2 KB |
0005-WIP-Introduce-new-WaitEventSet-API.patch | text/x-patch | 54.2 KB |