From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: proposal - log_full_scan
Date: 2021-04-17 15:09:31
Message-ID: 20210417150931.GN3315@telsasoft.com
Lists: pgsql-hackers
On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> today I worked on a postgres server used for a critical service. Because the
> application is very specific, we had to do the final tuning on the production
> server. I fixed a lot of queries, but I am not able to detect fast queries
> that do a full scan of middle-sized tables (up to 1M rows). Surely I wouldn't
> want to log all queries. Right now these queries run about 10 times per second.
>
> It would be nice to have the possibility to log queries that do a full scan
> and read more tuples than a specified limit, or that do a full scan of
> specified tables.
>
> What do you think about the proposed feature?
Are you able to use auto_explain with auto_explain.log_min_duration ?
Then you can search the query logs with something like:
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'
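For reference, a minimal auto_explain setup along these lines might look like
the following (the thresholds are illustrative, not recommendations):

```
# postgresql.conf -- requires a server restart for shared_preload_libraries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '100ms'   # log plans of queries slower than this
auto_explain.log_analyze = on             # include actual row counts and timings
```

With log_analyze enabled, the logged plans contain the "actual time=... rows=..."
text that the regex above matches on.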
Or can you use pg_stat_all_tables.seq_scan ?
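A query against pg_stat_all_tables would be along these lines (ordering by
tuples read via sequential scans; the view and columns are standard, the
per-scan average is just one way to slice it):

```
-- Tables that have been sequentially scanned, with rows read per scan
SELECT relname, seq_scan, seq_tup_read,
       seq_tup_read / NULLIF(seq_scan, 0) AS avg_tup_per_scan
FROM pg_stat_all_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC;
```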
But it seems to me that filtering on the duration would be both a more
important criterion and a more general one than "seq scan with number of rows":
(split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;
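Spelled out, assuming the server logs have been loaded into a table with a
message column (the table name "server_logs" and the threshold are
hypothetical), that duration filter could look like:

```
-- Parse "duration: 1234.567 ms ..." log lines; split_part(..., ' ', 2)
-- extracts the millisecond value, which is converted to seconds.
SELECT duration, message
FROM (
    SELECT split_part(message, ' ', 2)::float / 1000 AS duration, message
    FROM server_logs            -- hypothetical table of collected log lines
    WHERE message LIKE 'duration: %'
) AS q
WHERE duration > 2.222;         -- illustrative threshold, in seconds
```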
--
Justin
Next Message: Pavel Stehule | 2021-04-17 15:22:59 | Re: proposal - log_full_scan
Previous Message: Pavel Stehule | 2021-04-17 14:36:52 | proposal - log_full_scan