Re: contrib/pg_stat_statements

From: Decibel! <decibel(at)decibel(dot)org>
To: Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements
Date: 2008-10-21 12:53:48
Message-ID: 9D4A5FAA-F9C0-4785-855E-F407626F1E82@decibel.org
Lists: pgsql-hackers

On Oct 17, 2008, at 4:30 AM, Vladimir Sitnikov wrote:
> Decibel! <decibel(at)decibel(dot)org> wrote:
>
> I had tried to use a normal table to store stats information,
> but several acrobatic hacks are needed to keep performance.
> I guess it is not really required to synchronize the stats into
> some physical table immediately. I would suggest keeping all the
> data in memory and having a job that periodically dumps snapshots
> into physical tables (with WAL etc.). In that case one would be
> able to compute database workload as the difference between two
> given snapshots. From my point of view, it does not look like a
> performance killer to take snapshots every 15 minutes, and it does
> not look too bad to lose the last 15 minutes of statistics in case
> of a database crash either.

Yeah, that's exactly what I had in mind. I agree that trying to write
to a real table for every counter update would be insane.
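
Roughly, that periodic-snapshot scheme could look something like the
standalone C sketch below. All of the names here (stat_entry,
flush_snapshot, SNAPSHOT_INTERVAL_SECS) are invented for illustration,
and the actual table write is just stubbed out with a printf; a real
implementation would INSERT snapshot rows into a normal table under WAL.

/*
 * Minimal standalone sketch of "keep counters in memory, dump
 * periodic snapshots".  Nothing here is the real pg_stat_statements
 * code; it only illustrates the shape of the idea.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_STATEMENTS          1000
#define SNAPSHOT_INTERVAL_SECS  (15 * 60)   /* 15-minute snapshots */

typedef struct stat_entry
{
    char    query[256];        /* normalized statement text */
    long    calls;             /* number of executions */
    double  total_time_ms;     /* accumulated execution time */
} stat_entry;

static stat_entry stats[MAX_STATEMENTS];   /* in-memory counters only */
static int        n_stats = 0;
static time_t     last_snapshot = 0;

/* Update the in-memory counter; nothing touches disk here. */
static void
record_execution(const char *query, double elapsed_ms)
{
    int i;

    for (i = 0; i < n_stats; i++)
    {
        if (strcmp(stats[i].query, query) == 0)
        {
            stats[i].calls++;
            stats[i].total_time_ms += elapsed_ms;
            return;
        }
    }
    if (n_stats < MAX_STATEMENTS)
    {
        snprintf(stats[n_stats].query, sizeof(stats[n_stats].query), "%s", query);
        stats[n_stats].calls = 1;
        stats[n_stats].total_time_ms = elapsed_ms;
        n_stats++;
    }
}

/*
 * Stand-in for the periodic job: emit a timestamped snapshot of every
 * counter.  Workload between two snapshots is then just the difference
 * between the corresponding rows.
 */
static void
flush_snapshot(time_t now)
{
    int i;

    for (i = 0; i < n_stats; i++)
        printf("%ld\t%s\t%ld\t%.2f\n", (long) now,
               stats[i].query, stats[i].calls, stats[i].total_time_ms);
}

/* Called from some main loop or timer: snapshot every 15 minutes. */
static void
maybe_snapshot(void)
{
    time_t now = time(NULL);

    if (now - last_snapshot >= SNAPSHOT_INTERVAL_SECS)
    {
        flush_snapshot(now);
        last_snapshot = now;
    }
}

int
main(void)
{
    record_execution("SELECT * FROM t WHERE id = ?", 1.2);
    record_execution("SELECT * FROM t WHERE id = ?", 0.8);
    maybe_snapshot();       /* last_snapshot starts at 0, so this dumps */
    return 0;
}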

My thought was to treat the shared memory area as a buffer of stats
counters. When you go to increment a counter, if it's not in the
buffer then you'd read it out of the table, stick it in the buffer
and increment it. As items age, they'd get pushed out of the buffer.
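
As a toy standalone sketch of that buffer idea (load_from_table and
write_back_to_table are made-up stand-ins for the real table I/O, and
the aging policy here is plain LRU rather than whatever the real module
would do):

/*
 * Treat a small fixed-size area as a buffer of counters in front of
 * the on-disk stats table.  On a miss, the counter is read from the
 * table and cached; old entries are written back and pushed out.
 */
#include <stdio.h>
#include <string.h>

#define BUFFER_SLOTS 4          /* tiny on purpose, to force eviction */

typedef struct counter_slot
{
    char    key[64];            /* statement identifier */
    long    value;              /* counter value */
    long    last_used;          /* for LRU eviction */
    int     in_use;
} counter_slot;

static counter_slot buffer[BUFFER_SLOTS];
static long clock_tick = 0;

/* Hypothetical table read: pretend every unseen key starts at zero. */
static long
load_from_table(const char *key)
{
    (void) key;
    return 0;
}

/* Hypothetical table write-back, done when an entry is pushed out. */
static void
write_back_to_table(const counter_slot *slot)
{
    printf("writeback: %s = %ld\n", slot->key, slot->value);
}

/*
 * Increment a counter.  If it is not in the buffer, read it out of the
 * table and stick it in the buffer, evicting (and writing back) the
 * least recently used entry when the buffer is full.
 */
static void
increment_counter(const char *key)
{
    int     i;
    int     free_slot = -1;
    int     lru_slot = 0;

    clock_tick++;

    for (i = 0; i < BUFFER_SLOTS; i++)
    {
        if (buffer[i].in_use && strcmp(buffer[i].key, key) == 0)
        {
            buffer[i].value++;
            buffer[i].last_used = clock_tick;
            return;
        }
        if (!buffer[i].in_use)
            free_slot = i;
        else if (buffer[i].last_used < buffer[lru_slot].last_used)
            lru_slot = i;
    }

    /* Not buffered: find a slot, pushing out the oldest entry if full. */
    if (free_slot < 0)
    {
        write_back_to_table(&buffer[lru_slot]);
        free_slot = lru_slot;
    }

    snprintf(buffer[free_slot].key, sizeof(buffer[free_slot].key), "%s", key);
    buffer[free_slot].value = load_from_table(key) + 1;
    buffer[free_slot].last_used = clock_tick;
    buffer[free_slot].in_use = 1;
}

int
main(void)
{
    const char *queries[] = {"q1", "q2", "q3", "q4", "q5", "q1"};
    int         i;

    for (i = 0; i < 6; i++)
        increment_counter(queries[i]);
    return 0;
}

The nice property is that the table is only touched on a buffer miss or
an eviction, so hot counters stay purely in memory between flushes.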
--
Decibel!, aka Jim C. Nasby, Database Architect decibel(at)decibel(dot)org
Give your computer some brain candy! www.distributed.net Team #1828
