From: Sami Imseih <samimseih(at)gmail(dot)com>
To: Karina Litskevich <litskevichkarina(at)gmail(dot)com>
Cc: Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_stat_statements: faster search by queryid
Date: 2025-09-17 15:48:40
Message-ID: CAA5RZ0uvxY6Lp-LHxqG_fJa3+SjX1dSjc8fLL_KmcWpA8=Ed0w@mail.gmail.com
Lists: pgsql-hackers
Hi,
Thanks for raising this.
> This takes a long time because the pg_stat_statements() function forms
> tuples for all statistics it has first, and then they are filtered by
> the WHERE clause.
I am curious about the specific use case where this becomes a concern.
How often are you querying pg_stat_statements for individual entries?
My initial thought is that this patch does not remove the issue of
loading the entire query text (just to return one or a few entries). I
suspect that repeatedly loading query text into memory for lookups
could be a bigger problem. One option would be to pass showtext=false
for such lookups, but that makes the function more cumbersome to use
and understand.
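For reference, a lookup that skips query-text retrieval might look like the following. This is only an illustrative sketch: pg_stat_statements() takes a single showtext boolean argument, and the queryid value here is made up.

```sql
-- With showtext := false, the query column comes back NULL and the
-- query-text file is not read for the returned rows.
SELECT queryid, calls, total_exec_time
  FROM pg_stat_statements(showtext := false)
 WHERE queryid = 1234567890;  -- hypothetical queryid
```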
I do think having the ability to look up a specific entry based on a
key (that is, hash_search instead of hash_seq_search) would be useful.
But we also need to consider how to handle query text lookups, so we
are not forced to load the entire text file into memory.
For what it is worth, I have been thinking about what it would take to
move query texts into shared memory, which could make this type of
filtering more practical.
Just my 2c.
--
Sami Imseih
Amazon Web Services (AWS)