Pascal Cohen wrote:
> But to sum up, we would like to collect statistics (write-mostly tables,
> high-volume generation, data not critical) on application usage on a
> read-mostly DB, with the least impact on that DB's performance. And we
> would also like to be able to archive the old collected data outside the DB.
[Just throwing ideas around - there's probably a much better and already
well-established way to do all this]:
Does the stats data need to be handled within the same transactional
scope as the real app data? In other words, do you care if a stats entry
is generated for a transaction that then rolls back?
If you're OK with that, then another possible area to investigate is the
use of IPC messaging to a separate stats collector daemon outside of,
and running asynchronously with, PostgreSQL. Many of the PostgreSQL
procedural languages can communicate outside the server with mechanisms
like UNIX sockets, IP networking, signals, named pipes, and perhaps even
things like shared memory if the Pg backend's use of it doesn't
interfere. Alternatively, you could write your stats collector client as a
C add-in to Pg, which would probably let you minimize its performance
cost in exchange for more development and debugging time, plus a higher
risk to server stability.
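To make the fire-and-forget idea concrete, here's a minimal sketch of the sender side as it might look in something like PL/PythonU. Everything here is an assumption for illustration: the socket path, the `log_stat` name, and the JSON message format are all made up, not an established API.

```python
import json
import socket

STATS_SOCKET = "/tmp/stats.sock"  # hypothetical path; choose your own

def log_stat(event, payload, path=STATS_SOCKET):
    """Fire-and-forget: send one stats record as a JSON datagram to the
    external collector daemon. Errors are deliberately swallowed so that
    stats collection can never break the calling transaction -- the data
    is not critical, per the original requirement."""
    try:
        msg = json.dumps({"event": event, **payload}).encode()
        s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            s.sendto(msg, path)
        finally:
            s.close()
    except OSError:
        pass  # collector down or socket missing: drop the stat
```

Because it uses a connectionless datagram socket and ignores send failures, the backend does almost no work per message and never blocks waiting on the collector.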
Ideally the part of the logging/stats code running inside the Pg backend
would do as little work as possible to record the message for later
processing. The external stats collector/processor would asynchronously
process messages it receives from all Pg backends and record them in your
preferred format (perhaps inserting it into a separate write-optimised
Pg database on another host).
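The collector end of that pipeline could be as simple as a loop that drains the socket and hands each decoded record to whatever sink you prefer (e.g. a batch inserter into that separate write-optimised database). Again a hedged sketch, not a finished daemon: the function name, the socket path handling, and the `max_msgs` testing hook are all invented here.

```python
import json
import os
import socket

def run_collector(path, sink, max_msgs=None):
    """Receive JSON stat datagrams from Pg backends and pass each decoded
    record to `sink`. `max_msgs` bounds the loop for testing; a real
    daemon would run forever and batch inserts in `sink`."""
    if os.path.exists(path):
        os.unlink(path)  # remove a stale socket from a previous run
    rx = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    rx.bind(path)
    try:
        n = 0
        while max_msgs is None or n < max_msgs:
            data, _ = rx.recvfrom(65536)
            try:
                record = json.loads(data)
            except ValueError:
                continue  # skip malformed messages rather than crash
            sink(record)
            n += 1
    finally:
        rx.close()
        os.unlink(path)
```

Because the collector runs in its own process, slow inserts or downtime on the stats side never show up as latency in the application's transactions.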
Come to think of it, having stats recorded for transactions that roll
back is probably desirable, rather than any sort of downside.
In response to: Pascal Cohen, "Statistics Data archiving with Postgres",
pgsql-general, 2008-07-31 08:08:01