From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: Decibel! <decibel(at)decibel(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements
Date: 2008-10-17 09:30:50
Message-ID: 1d709ecc0810170230k62418c9aw5e79b119c5e88f01@mail.gmail.com
Lists: pgsql-hackers
>
> Decibel! <decibel(at)decibel(dot)org> wrote:
>
> I had tried to use a normal table to store stats information,
> but several acrobatic hacks are needed to keep performance.
I guess it is not really required to synchronize the stats into a
physical table immediately.
I would suggest keeping all the data in memory and having a job that
periodically dumps snapshots into physical tables (with WAL etc.).
That way one could compute the database workload as the difference
between two given snapshots. From my point of view, taking a snapshot
every 15 minutes does not look like a performance killer, and losing the
last 15 minutes of statistics in a database crash does not look too bad
either.
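The snapshot-difference idea above can be sketched roughly as follows. This is only an illustration of the arithmetic, not code from the actual pg_stat_statements patch; the names `Snapshot` and `workload_between` are hypothetical:

```python
# Sketch of the snapshot/delta idea: workload in an interval is the
# difference of cumulative counters between two snapshots.
# All names here are hypothetical, not part of any PostgreSQL patch.
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    """Per-query cumulative counters captured at a point in time."""
    taken_at: str   # timestamp label of the snapshot
    calls: dict     # query text -> cumulative call count

def workload_between(older: Snapshot, newer: Snapshot) -> dict:
    """Return per-query call counts accumulated between two snapshots."""
    return {
        query: newer.calls[query] - older.calls.get(query, 0)
        for query in newer.calls
    }

# Example: two snapshots taken 15 minutes apart.
s1 = Snapshot("12:00", {"SELECT 1": 100, "UPDATE t": 40})
s2 = Snapshot("12:15", {"SELECT 1": 250, "UPDATE t": 55, "DELETE x": 3})
print(workload_between(s1, s2))
# {'SELECT 1': 150, 'UPDATE t': 15, 'DELETE x': 3}
```

A query that first appears in the newer snapshot (here `DELETE x`) simply counts from zero, which matches the intuition that it did all its work inside the interval.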
Regards,
Vladimir Sitnikov