From: Lukas Fittl <lukas(at)fittl(dot)com>
To: Patrick Hemmer <postgresql(at)stormcloud9(dot)net>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org, Adrien Nayrat <adrien(dot)nayrat(at)anayrat(dot)info>, Justin Pryzby <pryzby(at)telsasoft(dot)com>
Subject: Re: performance statistics monitoring without spamming logs
Date: 2018-07-12 22:25:25
Message-ID: CAP53PkzXLVzD2hwi4iVv-H7TuM-ZdUN52zE0tGcmtj5qf1o2Eg@mail.gmail.com
Lists: pgsql-hackers pgsql-performance
On Tue, Jul 10, 2018 at 11:38 AM, Justin Pryzby <pryzby(at)telsasoft(dot)com>
wrote:
>
> > 2. Make stats available in `pg_stat_statements` (or alternate view that
> > could be joined on). The block stats are already available here, but
> > others like CPU usage, page faults, and context switches are not.
>
> pg_stat_statements is ./contrib/pg_stat_statements/pg_stat_statements.c
> which is 3k LOC.
>
> getrusage stuff and log_*_stat stuff is in src/backend/tcop/postgres.c
Before you start implementing something here, take a look at pg_stat_kcache [0],
which already aims to collect a few more system statistics than what
pg_stat_statements provides today, and might be a good basis to extend from.
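For a rough idea of what that buys you, here is an untested sketch of the kind
of joinable view asked about in point 2 of the quoted mail, matching
pg_stat_statements rows against pg_stat_kcache's per-query counters. The metric
column names differ between pg_stat_kcache versions, so treat the names below
as assumptions to adjust for your installed version:

  SELECT s.query,
         s.calls,
         k.user_time,    -- CPU time spent in user space
         k.system_time,  -- CPU time spent in the kernel
         k.minflts,      -- soft page faults
         k.majflts,      -- hard page faults
         k.nvcsws,       -- voluntary context switches
         k.nivcsws       -- involuntary context switches
    FROM pg_stat_statements s
    JOIN pg_stat_kcache() k USING (queryid, userid, dbid)
   ORDER BY k.user_time + k.system_time DESC
   LIMIT 20;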
It might also be worth looking at pg_stat_activity wait event sampling to
determine where a system spends its time; see e.g. pg_wait_sampling [1] for one
approach to this.
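To get a feel for the wait event approach without installing anything, you can
take ad-hoc snapshots of pg_stat_activity yourself, roughly like this (a single
one-off sample; pg_wait_sampling does the same thing continuously at a high
sampling rate and keeps history for you):

  SELECT wait_event_type,
         wait_event,
         count(*) AS backends
    FROM pg_stat_activity
   WHERE state = 'active'
     AND pid <> pg_backend_pid()  -- ignore the sampling session itself
   GROUP BY wait_event_type, wait_event
   ORDER BY backends DESC;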
[0]: https://github.com/powa-team/pg_stat_kcache
[1]: https://github.com/postgrespro/pg_wait_sampling
Best,
Lukas
--
Lukas Fittl