Re: Automatically sizing the IO worker pool

From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Automatically sizing the IO worker pool
Date: 2026-04-08 02:47:37
Message-ID: CA+hUKG+tng_nk1_sMzb5ThTuqDy+ERES270_=kcNiZnbDP-u1A@mail.gmail.com
Lists: pgsql-hackers

On Wed, Apr 8, 2026 at 2:20 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> I don't follow. What I was proposing is after the conditional lock
> acquisition succeeded. So is your nsync == 0 check.

Oops. Sorry, brainfade. Condition removed.

> > +/*
> > + * Tell postmaster that we think a new worker is needed.
> > + */
> > +static void
> > +pgaio_worker_request_grow(void)
> > +{
> > +	/*
> > +	 * Suppress useless signaling if we already know that we're at the
> > +	 * maximum.  This uses an unlocked read of nworkers, but that's OK for
> > +	 * this heuristic purpose.
> > +	 */
> > +	if (io_worker_control->nworkers < io_max_workers)
> > 	{
> > -		io_worker_control->workers[i].latch = NULL;
> > -		io_worker_control->workers[i].in_use = false;
> > +		if (!io_worker_control->grow)
> > +		{
> > +			io_worker_control->grow = true;
> > +			pg_memory_barrier();
> > +
> > +			/*
> > +			 * If the postmaster has already been signaled, don't do it again
> > +			 * until the postmaster clears this flag.  There is no point in
> > +			 * repeated signals if grow is being set and cleared repeatedly
> > +			 * while the postmaster is waiting for io_worker_launch_interval
> > +			 * (which it applies even to canceled requests).
> > +			 */
> > +			if (!io_worker_control->grow_signal_sent)
> > +			{
> > +				io_worker_control->grow_signal_sent = true;
> > +				pg_memory_barrier();
> > +				SendPostmasterSignal(PMSIGNAL_IO_WORKER_GROW);
> > +			}
> > +		}
> > 	}
> > }
>
>
> I'd probably use early returns to make it a bit more readable.

Done for this and similar functions.

> > +static bool
> > +pgaio_worker_can_timeout(void)
> > +{
> > +	PgAioWorkerSet workerset;
> > +
> > +	/* Serialize against pool size changes. */
> > +	LWLockAcquire(AioWorkerControlLock, LW_SHARED);
> > +	workerset = io_worker_control->workerset;
> > +	LWLockRelease(AioWorkerControlLock);
> > +
> > +	if (MyIoWorkerId != pgaio_workerset_get_highest(&workerset))
> > +		return false;
> > +
> > +	if (MyIoWorkerId < io_min_workers)
> > +		return false;
> > +
> > +	return true;
> > +}
>
> I guess I'd move the < io_min_workers to earlier so that you don't acquire the
> lock if that'll return false anyway.

Done.

Attachment Content-Type Size
v8-0001-aio-Adjust-I-O-worker-pool-size-automatically.patch text/x-patch 47.0 KB
