| From: | Andres Freund <andres(at)anarazel(dot)de> |
|---|---|
| To: | Thomas Munro <thomas(dot)munro(at)gmail(dot)com> |
| Cc: | Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Automatically sizing the IO worker pool |
| Date: | 2026-04-08 02:20:55 |
| Message-ID: | v4e2phxr3n22lv6skhtczbhy3xbyqwxuy7pj7cdh6afmhwhrqb@k6cpnb2kjw6c |
| Lists: | pgsql-hackers |
Hi,
On 2026-04-08 14:09:16 +1200, Thomas Munro wrote:
> On Wed, Apr 8, 2026 at 12:30 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> > On 2026-04-08 11:18:51 +1200, Thomas Munro wrote:
> > > > /* Choose one worker to wake for this batch. */
> > > > if (worker == -1)
> > > > 	worker = pgaio_worker_choose_idle(-1);
> > >
> > > Well I didn't want to wake a worker if we'd failed to enqueue
> > > anything.
> >
> > I think it's worth waking up workers if there are idle ones and the queue is
> > full?
>
> True, but I prefer to test nsync because there is another reason to break:
I don't follow. What I was proposing goes after the conditional lock
acquisition has succeeded. So does your nsync == 0 check.
> +/*
> + * Tell postmaster that we think a new worker is needed.
> + */
> +static void
> +pgaio_worker_request_grow(void)
> +{
> +	/*
> +	 * Suppress useless signaling if we already know that we're at the
> +	 * maximum. This uses an unlocked read of nworkers, but that's OK for
> +	 * this heuristic purpose.
> +	 */
> +	if (io_worker_control->nworkers < io_max_workers)
> 	{
> -		io_worker_control->workers[i].latch = NULL;
> -		io_worker_control->workers[i].in_use = false;
> +		if (!io_worker_control->grow)
> +		{
> +			io_worker_control->grow = true;
> +			pg_memory_barrier();
> +
> +			/*
> +			 * If the postmaster has already been signaled, don't do it again
> +			 * until the postmaster clears this flag. There is no point in
> +			 * repeated signals if grow is being set and cleared repeatedly
> +			 * while the postmaster is waiting for io_worker_launch_interval
> +			 * (which it applies even to canceled requests).
> +			 */
> +			if (!io_worker_control->grow_signal_sent)
> +			{
> +				io_worker_control->grow_signal_sent = true;
> +				pg_memory_barrier();
> +				SendPostmasterSignal(PMSIGNAL_IO_WORKER_GROW);
> +			}
> +		}
> 	}
> }
I'd probably use early returns to make it a bit more readable.
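[Editor's note: a minimal sketch of the early-return restructuring being suggested, derived from the function quoted above. It reuses the patch's names (io_worker_control, pg_memory_barrier, SendPostmasterSignal) and is illustrative only, not a tested patch.]

```c
static void
pgaio_worker_request_grow(void)
{
	/* Unlocked read of nworkers; OK for this heuristic purpose. */
	if (io_worker_control->nworkers >= io_max_workers)
		return;

	/* Someone already requested growth; nothing more to do. */
	if (io_worker_control->grow)
		return;

	io_worker_control->grow = true;
	pg_memory_barrier();

	/*
	 * If the postmaster has already been signaled, don't do it again until
	 * the postmaster clears this flag.
	 */
	if (io_worker_control->grow_signal_sent)
		return;

	io_worker_control->grow_signal_sent = true;
	pg_memory_barrier();
	SendPostmasterSignal(PMSIGNAL_IO_WORKER_GROW);
}
```

Each guard exits at the first point where further work is known to be useless, which removes two levels of nesting without changing behavior.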
> +static bool
> +pgaio_worker_can_timeout(void)
> +{
> +	PgAioWorkerSet workerset;
> +
> +	/* Serialize against pool size changes. */
> +	LWLockAcquire(AioWorkerControlLock, LW_SHARED);
> +	workerset = io_worker_control->workerset;
> +	LWLockRelease(AioWorkerControlLock);
> +
> +	if (MyIoWorkerId != pgaio_workerset_get_highest(&workerset))
> +		return false;
> +
> +	if (MyIoWorkerId < io_min_workers)
> +		return false;
> +
> +	return true;
> +}
I guess I'd move the < io_min_workers check earlier, so that you don't acquire
the lock if the function will return false anyway.
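[Editor's note: a sketch of the reordering being suggested, again using the patch's names as quoted above; illustrative only.]

```c
static bool
pgaio_worker_can_timeout(void)
{
	PgAioWorkerSet workerset;

	/* Cheap, lock-free check first: workers below the minimum never exit. */
	if (MyIoWorkerId < io_min_workers)
		return false;

	/* Serialize against pool size changes. */
	LWLockAcquire(AioWorkerControlLock, LW_SHARED);
	workerset = io_worker_control->workerset;
	LWLockRelease(AioWorkerControlLock);

	/* Only the highest-numbered worker is allowed to time out. */
	return MyIoWorkerId == pgaio_workerset_get_highest(&workerset);
}
```

Since MyIoWorkerId and io_min_workers don't change while the worker runs, that check needs no locking and can short-circuit the LWLock acquisition entirely.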
Greetings,
Andres Freund