Re: Unlinking Parallel Hash Join inner batch files sooner

From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Jehan-Guillaume de Rorthais <jgdr(at)dalibo(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Unlinking Parallel Hash Join inner batch files sooner
Date: 2023-09-27 16:42:03
Message-ID: 871e80d4-e8cf-a117-ceb2-141586a3bcd2@iki.fi
Lists: pgsql-hackers

On 11/05/2023 00:00, Jehan-Guillaume de Rorthais wrote:
> On Wed, 10 May 2023 15:11:20 +1200
> Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
>> The reason I didn't do this earlier is that sharedtuplestore.c
>> continues the pre-existing tradition where each parallel process
>> counts what it writes against its own temp_file_limit. At the time I
>> thought I'd need to have one process unlink all the files, but if a
>> process were to unlink files that it didn't create, that accounting
>> system would break. Without some new kind of shared temp_file_limit
>> mechanism that doesn't currently exist, per-process counters could go
>> negative, creating free money. In the attached patch, I realised
>> something that I'd missed before: there is a safe point for each
>> backend to unlink just the files that it created, and there is no way
>> for a process that created files not to reach that point.
>
> Indeed.
>
> For what it's worth, from my new and non-experienced understanding of the
> parallel mechanism, waiting for all workers to reach
> WAIT_EVENT_HASH_GROW_BATCHES_REPARTITION, after re-dispatching old batches into
> new ones, seems like a safe place to instruct each worker to clean up its old
> temp files.

Looks good to me too at a quick glance. There's this one "XXX free"
comment though:

> 		for (int i = 1; i < old_nbatch; ++i)
> 		{
> 			ParallelHashJoinBatch *shared =
> 				NthParallelHashJoinBatch(old_batches, i);
> 			SharedTuplestoreAccessor *accessor;
>
> 			accessor = sts_attach(ParallelHashJoinBatchInner(shared),
> 								  ParallelWorkerNumber + 1,
> 								  &pstate->fileset);
> 			sts_dispose(accessor);
> 			/* XXX free */
> 		}

I think that's referring to the fact that sts_dispose() doesn't free the
'accessor', or any of the buffers etc. that it contains. That's a
pre-existing problem, though: ExecParallelHashRepartitionRest() already
leaks the SharedTuplestoreAccessor structs and their buffers etc. of the
old batches. I'm a little surprised there isn't already an sts_free()
function.

Another thought is that it's a bit silly to have to call sts_attach()
just to delete the files. Maybe sts_dispose() should take the same three
arguments that sts_attach() does, instead.
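
Concretely, the change might look something like this (a sketch only: the sts_attach() declaration follows sharedtuplestore.h, while this sts_dispose() signature is hypothetical):

```c
/* Existing: build an accessor, then dispose through it. */
extern SharedTuplestoreAccessor *sts_attach(SharedTuplestore *sts,
											int my_participant_number,
											SharedFileSet *fileset);

/* Sketch: delete this participant's files directly, no accessor needed. */
extern void sts_dispose(SharedTuplestore *sts,
						int my_participant_number,
						SharedFileSet *fileset);
```

That would avoid allocating (and leaking) an accessor whose only purpose is to be thrown away.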

So it would be nice to tidy up that freeing, although the amount of memory
leaked is tiny, so it might not be worth it, and it's a pre-existing issue.
I'm marking this as Ready for Committer.

--
Heikki Linnakangas
Neon (https://neon.tech)
