Re: four minor proposals for 9.5

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Gregory Smith <gregsmithpgsql(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Josh Berkus <josh(at)agliodbs(dot)com>, Vik Fearing <vik(dot)fearing(at)dalibo(dot)com>
Subject: Re: four minor proposals for 9.5
Date: 2014-04-08 16:59:39
Message-ID: CAFj8pRBag6VXmUFwvfbjywm3OaVedGKect5u3CN1n_OHuqaiBQ@mail.gmail.com
Lists: pgsql-hackers

2014-04-08 18:34 GMT+02:00 Gregory Smith <gregsmithpgsql(at)gmail(dot)com>:

> On 4/6/14 2:46 PM, Pavel Stehule wrote:
>
>>
>> The proposed options are interesting for "enterprise" use, where you have
>> smarter tools for processing log entries and need a complex view of the
>> performance of billions of queries - where cancel time and lock time are
>> important pieces in the mosaic of a server's fitness.
>>
>
> I once sent a design proposal over for something I called "Performance
> Events" that included this. It will be difficult to get everything people
> want to track into log_line_prefix macro form. You're right that this
> particular one can probably be pushed into there, but you're adding four
> macros just for this feature. And that's only a fraction of what people
> expect from database per-query performance metrics.
>
> The problem I got stuck on with the performance event project was where to
> store the data collected. If you want to keep up with read rates, you
> can't use the existing log infrastructure. It has to be something faster,
> lighter. I wanted to push the data into shared memory somewhere instead.
> Then some sort of logging consumer could drain that queue and persist it
> to disk.
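>
> As a rough illustration of that design (not PostgreSQL internals - the
> names and the queue are stand-ins of my own), the idea is a bounded
> in-memory buffer that producers write to cheaply, drained by a separate
> consumer that persists events to disk:
>
> ```python
> import json
> import queue
> import tempfile
> import threading
>
> # Bounded in-memory queue standing in for the shared-memory area:
> # the hot path only enqueues; it never touches disk.
> events = queue.Queue(maxsize=10000)
> SENTINEL = None  # tells the consumer to stop
>
> def consumer(path):
>     # Logging consumer: drain the queue and persist events to disk.
>     with open(path, "w") as f:
>         while True:
>             ev = events.get()
>             if ev is SENTINEL:
>                 break
>             f.write(json.dumps(ev) + "\n")
>
> path = tempfile.NamedTemporaryFile(delete=False).name
> t = threading.Thread(target=consumer, args=(path,))
> t.start()
>
> # Producer side: per-query events are a cheap in-memory enqueue.
> for i in range(3):
>     events.put({"query_id": i, "lock_ms": 1.5 * i})
> events.put(SENTINEL)
> t.join()
>
> print(sum(1 for _ in open(path)))  # → 3 persisted events
> ```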
>
> Since then, we've had a number of advances, particularly these two:
>
> -Dynamic shared memory allocation.
> -Query log data from pg_stat_statements can persist.
>

I know nothing about your proposal, so I cannot comment on it. But I am
sure that any memory-based solution is not practical for us. It can work
well for cumulative values (per database), but we need two views - an
individual one (per query) and a cumulative one (per database, per database
server). We process a billion queries per day, and for us it is more
practical to use external log-processing tools. But I understand that for a
large group of users a memory-based solution can be perfect, and I think
these designs should coexist - we log slow queries (including their plans)
and we use pg_stat_statements - so users can choose the best method for
their environment.
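
As a sketch of that coexistence with the settings that exist today (the
per-query cancel-time and lock-time escapes proposed in this thread do not
exist yet), a postgresql.conf could combine both approaches:

```
shared_preload_libraries = 'pg_stat_statements,auto_explain'

# Per-query view: log slow queries, with their plans, for external
# log-processing tools.
log_min_duration_statement = 1000        # ms; log statements slower than 1s
auto_explain.log_min_duration = 1000     # also log the plans of those queries
log_line_prefix = '%m [%p] %u@%d '       # timestamp, pid, user, database

# Cumulative view: in-memory per-query counters that persist across restarts.
pg_stat_statements.save = on
```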

Some of the API (and some of the data) could probably be shared by both
designs.

Regards

Pavel

>
> With those important fundamentals available, I'm wandering around right
> now trying to get development resources to pick the whole event logging
> idea up again. The hardest parts of the infrastructure I was stuck on in
> the past are in the code today.
>
> --
> Greg Smith greg(dot)smith(at)crunchydatasolutions(dot)com
> Chief PostgreSQL Evangelist - http://crunchydatasolutions.com/
>
