Re: pg_stat_io_histogram

From: Jakub Wartak <jakub(dot)wartak(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Ants Aasma <ants(dot)aasma(at)cybertec(dot)at>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_stat_io_histogram
Date: 2026-03-18 13:29:08
Message-ID: CAKZiRmyCMMTXT83-veDgPCJigqzeWP-POqNUve150nYgb07F_g@mail.gmail.com
Lists: pgsql-hackers

On Tue, Mar 17, 2026 at 3:17 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,

Hi Andres,

> On 2026-03-17 13:13:59 +0100, Jakub Wartak wrote:
> > 1. Concerns about memory use. With v7 I had couple of ideas, and with those
> > the memory use is really minimized as long as the code is still simple
> > (so nothing fancy, just some ideas to trim stuff and dynamically allocate
> > memory). I hope those reduce memory footprint to acceptable levels, see my
> > earlier description for v7.
>
> Personally I unfortunately continue to think that storing lots of values that
> are never anything but zero isn't a good idea once you have more than a
> handful of kB. Storing pointless data is something different than increasing
> memory usage with actual information.
>
> I still think you should just count the number of histograms needed, have an
> array [object][context][op] with the associated histogram "offset" and then
> increment the associated offset. It'll add an indirection at count time, but
> no additional branches.

Great idea, thanks, I hadn't thought of that! Attached v9 attempts to do this
for the pending backend I/O struct, which minimizes the (backend-local) memory
footprint for client backends to roughly ~5kB.

I have been pulling my hair out trying to achieve the same for shared memory,
but I have failed to do so without sinking into complexity, as it would mean
variably sizing the shared-memory allocation at startup just for I/O
histograms, depending on what each backend_type can do. We cannot call
functions such as pgstat_tracks_io_op() to compute structure sizes from within
the static pgstat_kind_builtin_infos[] initializer.

It looks to me like we would have to call ShmemAlloc() (just as for
custom_data[]) and add a special case solely to minimize shared-memory use
there, and then play the game of pgstatio being unique in how it is handled.
Or maybe we could abuse custom_data? But that appears to be dedicated to
externally registered pgstat modules, so probably not.

Another idea would be to somehow convert
pgstat_tracks_io_op()/_object()/_bktype() for all possible backend types into
a static computation (macros??), but I have no idea how to do that (something
similar to "constexpr"). It seems the only option would be to have meson/make
run a helper at compile time to precompute/generate a separate
pgstat_io_histogram_slotcount.h containing just a final
#define PGSTAT_IO_HIST_SLOTCOUNT <number>, where <number> is the sum of all
valid combinations, and then use that to allocate a generic array in shared
memory. Is that a good way, or too much complexity? Or could we simply run
"select count(*) from pg_stat_io", which currently gives 67, and just hardcode
that as a #define?

Out of frustration, I tried 0003+0004, which are much simpler, just to see
what the 'Shared Memory Stats' size would look like:
- master: 308kB
- v9-000[12]: 578kB
- v9-000[123]: 507kB
- v9-000[1234]: 471kB (still ~163kB more than master)

-J.

Attachment Content-Type Size
v9-0001-Add-pg_stat_io_histogram-view-to-provide-more-det.patch text/x-patch 39.7 KB
v9-0003-Condense-PgStat_IO.stats-BACKEND_NUM_TYPES-array-.patch text/x-patch 7.9 KB
v9-0002-Optimize-pending_hist_time_buckets-memory-use-by-.patch text/x-patch 8.7 KB
v9-0004-Further-condense-and-reduce-memory-used-by-pgstat.patch text/x-patch 5.9 KB
