| From: | Andres Freund <andres(at)anarazel(dot)de> |
|---|---|
| To: | Jakub Wartak <jakub(dot)wartak(at)enterprisedb(dot)com> |
| Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: pg_stat_io_histogram |
| Date: | 2026-01-26 15:08:47 |
| Message-ID: | vhzkeogzrrfzjwo3xrnq4xsjh6i37ou6xsbz7yby3lbb3rnxzz@6fpysnkjyldi |
| Lists: | pgsql-hackers |
Hi,
On 2026-01-26 10:40:52 +0100, Jakub Wartak wrote:
> I'm proposing that we add pg_stat_io_histogram that would track/show I/O
> latencies profile, so we could quickly identify I/O outliers.
I think that's an interesting feature!
> From time to time users complain that 'PostgreSQL is slow or stuck' (usually
> COMMIT is slow), when it is quite apparent that it is down to somewhere in
> the I/O stack. It is quite easy to prove once one has proper measurement
> tools in place and is able to correlate, but it takes IMHO way too much time
> and energy to cross-correlate all of that information (iostat -x 1s, wait
events 1s, and so on), especially if one would like to provide a rapid
response.
For measuring particularly stuck things, I've been wondering about having a
regular timer that starts to collect more information once a backend has been
stuck in one place for a while. That would probably end up having lower
overhead than constantly measuring... But it would also be a lot more work.
> Right now the patch does not include per-backend/PID tracking, hopefully if
> there will be interest in this, I'll add it, but I would like to first hear
> if that's a good idea. The current implementation uses fast bucket
> calculation to avoid overheads and tries to cover the most useful range of
> devices via buckets (128us..256ms, so that covers NVMe/SSD/HDD as well as
> abnormally high latency, as from time to time I try to help with I/O
> stuck for *seconds*, usually a sign of some I/O multipath issue, device
> resets, or hypervisor woes).
Hm. Isn't 128us a pretty high floor for at least reads and writes? On a good
NVMe disk you'll get < 10us, after all.
I see a few problems with the source of the latency measurements though:
- The latency gathered this way is quite heavily affected by scheduler
  noise. If your process isn't scheduled because other processes are keeping
  the CPU busy, it's quite possible to get results that are off by many
  orders of magnitude.
- With AIO, you're measuring wait time, and that time can be affected by other
  IOs in the queue. Measured this way, IO latency will often be *drastically*
  overestimated.
I don't see how we can do better absent cooperation from the kernel (by
putting lower-level measurements into io_uring completions, for example)
though. So maybe this is just how it has to be and we ought to just document
it.
> postgres=# select
> substring(backend_type,1,8) as backend,object,context,io_type,
> bucket_latency_us as lat_us,
> round(bucket_latency_us/1000.0, 3) as lat_ms,
> bucket_count as count
> from pg_stat_get_io_histogram()
> where
> bucket_count > 0
> order by 1,2,3,4,5;
> backend | object | context | io_type | lat_us | lat_ms | count
> ----------+----------+-----------+-----------+--------+--------+-------
> autovacu | relation | normal | read | 128 | 0.128 | 54
Perhaps the latency should be represented as a range?
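With doubling buckets the range would be cheap to derive from the index
alone. A sketch, assuming bucket 0 covers everything below 128us and that the
boundaries are half-open (both assumptions on my part, the patch doesn't spell
that out):

static inline void
bucket_range_us(int index, uint64 *lower, uint64 *upper)
{
	if (index == 0)
	{
		*lower = 0;
		*upper = 128;
	}
	else
	{
		*lower = UINT64CONST(64) << index;	/* 128, 256, 512, ... */
		*upper = UINT64CONST(128) << index;
	}
}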
> Of course most of the I/O calls today are hitting page cache, so one would
> expect they'll be < 128us most of the time
Have you checked whether the overhead is measurable when hitting the page
cache? I'd hope it isn't, due to IO combining amortizing the cost somewhat.
But it seems worth measuring.
I assume you made pgstat_get_io_op_name() return "hit?" because you don't
expect that to ever be hit?
> +static inline int get_bucket_index(uint32_t val) {
> +#define MIN_PG_STAT_IO_HIST_LATENCY 127
> + const uint32_t max_index = PGSTAT_IO_HIST_BUCKETS - 1;
> + /*
> + * hopefully calculated to be 25 by the compiler:
> + * clz(127) = clz(01111111b on uint32) = 25
> + */
> + const uint32_t min_latency_leading_zeros =
> + pg_leading_zero_bits32(MIN_PG_STAT_IO_HIST_LATENCY);
> +
> + /*
> + * make sure the tmp value is at least 127 (our minimum bucket size),
> + * as __builtin_clz is undefined when operating on 0
> + */
> + uint32_t tmp = val | MIN_PG_STAT_IO_HIST_LATENCY;
> + /* count leading zeros */
> + int leading_zeros = pg_leading_zero_bits32(tmp);
> +
> + /* normalize the index */
> + uint32_t index = min_latency_leading_zeros - leading_zeros;
> +
> + /* clamp it to the maximum */
> + return (index > max_index) ? max_index : index;
> +}
Wouldn't it be easier to handle the minimum latency by shifting right?
I think we may also need to handle inputs that don't fit into a uint32. For
things like a stopped VM we could see IOs that don't fit into a uint32 when
measured in microseconds. So perhaps I'd make the input to the bucket calc 64
bits, then shift to the minimum precision and mask to implement the clamping.
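Something like this is what I have in mind (untested; the name is made up,
and it clamps with a comparison rather than a mask):

static inline int
get_bucket_index64(uint64 latency_us)
{
	/* shift away sub-128us precision; anything below lands in bucket 0 */
	uint64		scaled = latency_us >> 7;
	int			index = 0;

	if (scaled > 0)
		index = pg_leftmost_one_pos64(scaled) + 1;

	/* the last bucket absorbs pathological latencies */
	if (index > PGSTAT_IO_HIST_BUCKETS - 1)
		index = PGSTAT_IO_HIST_BUCKETS - 1;

	return index;
}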
> @@ -152,6 +189,10 @@ pgstat_count_io_op_time(IOObject io_object, IOContext io_context, IOOp io_op,
> INSTR_TIME_ADD(PendingIOStats.pending_times[io_object][io_context][io_op],
> io_time);
>
> + /* calculate the bucket_index based on latency in us */
> + bucket_index = get_bucket_index(INSTR_TIME_GET_MICROSEC(io_time));
> + PendingIOStats.pending_hist_time_buckets[io_object][io_context][io_op][bucket_index]++;
> +
> /* Add the per-backend count */
> pgstat_count_backend_io_op_time(io_object, io_context, io_op,
It's annoying to have to convert to microseconds here, that's not free :(.
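One way around that might be to bucket on the raw nanosecond count instead:
128us is close to 2^17 ns, so shifting right by 17 yields nearly the same
doubling buckets without the division by 1000, at the cost of the boundaries
landing at 131.072us, 262.144us, etc. (On platforms where instr_time isn't
already nanosecond-based, INSTR_TIME_GET_NANOSEC still does some arithmetic,
of course.) Untested sketch, along the lines of the 64-bit variant above:

	uint64		ns = INSTR_TIME_GET_NANOSEC(io_time);
	uint64		scaled = ns >> 17;
	int			bucket_index = 0;

	if (scaled > 0)
		bucket_index = pg_leftmost_one_pos64(scaled) + 1;
	if (bucket_index > PGSTAT_IO_HIST_BUCKETS - 1)
		bucket_index = PGSTAT_IO_HIST_BUCKETS - 1;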
> @@ -1356,6 +1356,24 @@ typedef enum io_stat_col
> IO_NUM_COLUMNS,
> } io_stat_col;
>
> +/*
> +* When adding a new column to the pg_stat_io view and the
> +* pg_stat_get_backend_io() function, add a new enum value here above
> +* IO_NUM_COLUMNS.
> +*/
> +typedef enum hist_io_stat_col
> +{
> + HIST_IO_COL_INVALID = -1,
> + HIST_IO_COL_BACKEND_TYPE,
> + HIST_IO_COL_OBJECT,
> + HIST_IO_COL_CONTEXT,
> + HIST_IO_COL_IOTYPE,
> + HIST_IO_COL_BUCKET_US,
> + HIST_IO_COL_COUNT,
> + HIST_IO_COL_RESET_TIME,
> + HIST_IO_NUM_COLUMNS
> +} history_get_history_state;
I think the IO_NUM_COLUMNS reference in the comment is a copy-pasto. I also
don't think this should be introduced in the middle of the pg_stat_io
implementation.
> +/*
> + * pg_leading_zero_bits32
> + * Returns the number of leading 0-bits in x, starting at the most significant bit position.
> + * Word must not be 0 (as it is undefined behavior).
> + */
> +static inline int
> +pg_leading_zero_bits32(uint32 word)
Do we really need this in addition to the already existing
pg_leftmost_one_pos32()? Particularly because that already has an msvc
implementation...
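If the clz formulation is needed, it can trivially be expressed on top of the
existing primitive anyway, e.g. (assuming a nonzero input, the same
requirement the existing helper has):

static inline int
clz32_via_leftmost(uint32 word)
{
	/* pg_leftmost_one_pos32() is the 0-based position of the highest
	 * set bit, so this matches __builtin_clz() for nonzero inputs */
	return 31 - pg_leftmost_one_pos32(word);
}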
Greetings,
Andres Freund