Re: Assorted leaks and weirdness in parallel execution

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Assorted leaks and weirdness in parallel execution
Date: 2017-08-31 18:13:59
Message-ID: 27352.1504203239@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Thu, Aug 31, 2017 at 11:09 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> (With this patch,
>> there are no callers of shm_mq_get_queue(); should we remove that?)

> May as well. I can't remember any more why I did shm_mq_detach() that
> way; I think there was someplace where I thought that the
> shm_mq_handle might not be available. Maybe I'm misremembering, or
> perhaps the situation has changed as that code has evolved.

I initially tried to convert the on_dsm_detach callback to take a
pointer to the shm_mq_handle rather than the shm_mq proper. That
caused regression test crashes in some processes, indicating that
there are situations where we have freed the shm_mq_handle before
the DSM detach happens. I think it was only during worker process exit.
That's sort of contrary to the advice in shm_mq.c about the desirable
lifespan of a shm_mq_handle, but I didn't feel like trying to fix it.
It seems generally more robust if the on_dsm_detach callback assumes
as little as possible about intra-process state, anyway.
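
For reference, the pattern I kept looks roughly like this (just a sketch;
the low-level helper's name is illustrative, not necessarily what the final
patch uses).  The point is that the Datum argument carries the raw shm_mq
pointer, which lives in the DSM segment, so the callback never touches
backend-local state that might already be gone at detach time:

#include "postgres.h"

#include "storage/dsm.h"
#include "storage/shm_mq.h"

/*
 * on_dsm_detach callback: the Datum argument is the raw shm_mq pointer,
 * not the backend-local shm_mq_handle.
 */
static void
shm_mq_detach_callback(dsm_segment *seg, Datum arg)
{
	shm_mq	   *mq = (shm_mq *) DatumGetPointer(arg);

	/* illustrative helper: mark our end detached, wake the counterparty */
	shm_mq_detach_internal(mq);
}

and it gets registered at attach time with
on_dsm_detach(seg, shm_mq_detach_callback, PointerGetDatum(mq)).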

I don't have any strong reason to remove shm_mq_get_queue(), other than
neatnik-ism. It might save a caller having to remember the shm_mq pointer
separately. Given the set of API functions, that would only matter if
somebody wanted to set/get the sender/receiver PGPROC pointers later,
but maybe that's a plausible thing to do.
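(What I mean is something like

	shm_mq	   *mq = shm_mq_get_queue(mqh);

	shm_mq_set_sender(mq, MyProc);

i.e. recovering the raw queue pointer from the handle in order to install
a PGPROC afterwards.  Illustration only, not code from the tree.)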

>> It seems like a significant modularity violation that execParallel.c
>> is responsible for creating those shm_mqs but not for cleaning them up.

> Yeah, the correct division of labor between execParallel.c and
> nodeGather.c was not entirely clear to me, and I don't pretend that I
> got that 100% right.

OK, I'll have a go at that.

>> (That would make it more difficult to do the early reader destruction
>> that nodeGather currently does, but I am not sure we care about that.)

> I think the only thing that matters here is -- if we know that we're
> not going to read any more tuples from a worker that might still be
> generating tuples, it's imperative that we shut it down ASAP.
> Otherwise, it's just going to keep cranking them out, wasting
> resources unnecessarily. I think this is different than what you're
> talking about here, but just wanted to be clear.

Yeah, it is different. What I'm looking at is that nodeGather does
DestroyTupleQueueReader as soon as it's seen EOF on a given tuple queue.
That can't save any worker cycles. The reason seems to be that it wants
to collapse its array of TupleQueueReader pointers so only live queues are
in it. That's reasonable, but I'm inclined to implement it by making the
Gather node keep a separate working array of pointers to only the live
TupleQueueReaders. The ParallelExecutorInfo would keep the authoritative
array of all TupleQueueReaders that have been created, and destroy them in
ExecParallelFinish.
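Concretely, I'm imagining something like this (untested sketch; the
reader/nreaders fields in ParallelExecutorInfo and both helper names are
provisional):

/*
 * nodeGather.c: when a queue reaches EOF, just collapse the node's working
 * array of live readers; the TupleQueueReader itself stays alive, since
 * execParallel.c owns it.
 */
static void
gather_remove_reader(GatherState *gatherstate, int readerdone)
{
	gatherstate->nreaders--;
	if (gatherstate->nreaders > 0)
	{
		memmove(&gatherstate->reader[readerdone],
				&gatherstate->reader[readerdone + 1],
				sizeof(TupleQueueReader *) *
				(gatherstate->nreaders - readerdone));
		if (gatherstate->nextreader >= gatherstate->nreaders)
			gatherstate->nextreader = 0;
	}
}

/*
 * execParallel.c: ExecParallelFinish (or a subroutine of it) destroys
 * every reader that was ever created, exactly once.
 */
static void
ExecParallelDestroyTupleQueueReaders(ParallelExecutorInfo *pei)
{
	int			i;

	for (i = 0; i < pei->nreaders; i++)
	{
		if (pei->reader[i] != NULL)
			DestroyTupleQueueReader(pei->reader[i]);
		pei->reader[i] = NULL;
	}
}

That keeps the create/destroy pairing entirely inside execParallel.c, while
nodeGather.c only manipulates its own working copy.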

Your point is that we want to shut down the TupleQueueReaders immediately
on rescan, which we do already. Another possible scenario is to shut them
down once we've reached the passed-down tuple limit (across the whole
Gather, not per-child which is what 3452dc524 implemented). I don't think
what I'm suggesting would complicate that.
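(For instance, once the Gather as a whole has emitted the bounded number of
tuples, ExecGather could just do

	/* hypothetical Gather-wide limit check; the counter fields are invented */
	if (gatherstate->tuples_needed > 0 &&
		++gatherstate->tuples_emitted >= gatherstate->tuples_needed)
		ExecShutdownGatherWorkers(gatherstate);

and with the division of labor above that's purely a nodeGather.c matter.)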

regards, tom lane
