From: Adrien NAYRAT <adrien(dot)nayrat(at)anayrat(dot)info>
To: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Nikolay Samokhvalov <samokhvalov(at)gmail(dot)com>
Subject: Re: Log a sample of transactions
Date: 2019-01-15 17:03:36
Message-ID: c8f6bbe9-f6ca-c2ed-2ff0-6db5679e5a67@anayrat.info
Lists: pgsql-hackers
On 1/15/19 11:42 AM, Masahiko Sawada wrote:
>> When you troubleshoot applicative issues with multi-statements transaction, you may have to log all queries to find all statements of one transaction. With high throughput, it could be hard to log all queries without causing troubles.
> Hm, can we use log_min_duration_statement to find slow queries of a
> transaction instead? Could you please elaborate on the use-case?
Hello,
The goal is not to find slow queries in a transaction, but to
troubleshoot application issues involving short queries.

Sometimes you want to understand what happens inside a transaction:
either you know your application perfectly, or you have to log all
queries and then find the ones sharing the same transaction ID (%x).
That becomes problematic under heavy traffic consisting of fast queries.
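To illustrate the current workaround: reassembling a transaction today means logging every statement and grepping for a common transaction ID. A sketch of the settings involved (standard postgresql.conf parameters; the exact prefix format is just an example):

```ini
# Log every statement -- expensive under high throughput,
# which is exactly the problem this patch aims to avoid.
log_min_duration_statement = 0

# Include the transaction ID (%x) in each log line so that all
# statements of one transaction can be matched up afterwards.
log_line_prefix = '%m [%p] xid=%x '
```

With transaction sampling, only a configurable fraction of transactions would have all their statements logged, keeping the overhead manageable on busy systems.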
Thanks,