Jaime Casanova wrote:
> On Fri, Jul 17, 2009 at 3:38 AM, Mark Kirkwood<markir(at)paradise(dot)net(dot)nz> wrote:
>> With respect to the sum of wait times being not very granular, yes - quite
>> true. I was thinking it is useful to be able to answer the question 'where
>> is my wait time being spent' - but it hides cases like the one you mention.
>> What would you like to see? would max and min wait times be a useful
>> addition, or are you thinking along different lines?
> track number of locks, sum of wait times, max(wait time).
> but actually i started to think that the best is just to make use of
> log_lock_waits: send the logs to csvlog and analyze them there...
Right - I'll look at adding max (at least) early next week.
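The per-lock statistics being discussed (number of waits, sum of wait times, and max wait time) amount to a simple running accumulator. A minimal illustrative sketch in Python, not the actual patch code:

```python
# Hypothetical accumulator for per-lock wait statistics: count,
# sum, and max of wait times, as discussed in the thread.

class LockWaitStats:
    def __init__(self):
        self.count = 0        # number of waits observed
        self.total_ms = 0.0   # sum of wait times
        self.max_ms = 0.0     # longest single wait

    def record(self, wait_ms):
        self.count += 1
        self.total_ms += wait_ms
        if wait_ms > self.max_ms:
            self.max_ms = wait_ms

stats = LockWaitStats()
for w in (12.5, 3.0, 250.0, 7.25):
    stats.record(w)

print(stats.count, stats.total_ms, stats.max_ms)  # 4 272.75 250.0
```

Note that sum and max alone still hide the distribution (many tiny waits versus a few large ones), which is why max was requested in addition to the sum.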
Yeah, enabling log_lock_waits is certainly another approach; however, you
currently miss out on waits shorter than deadlock_timeout, and
potentially they could be the source of your problem (i.e. millions of
waits, each shorter than deadlock_timeout, but taken together rather significant).
This shortcoming could be overcome by decoupling the cutoff wait time
from deadlock_timeout (e.g. a new parameter
log_min_lock_wait_time or similar).
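To illustrate what a lower, decoupled cutoff would make visible: if "still waiting" messages were emitted for waits shorter than deadlock_timeout, they could be picked out of the log like this. The message format and the 1000 ms cutoff here are assumptions for illustration, as is the log_min_lock_wait_time idea itself:

```python
import re

# Hedged sketch: scan log lines of the "process ... still waiting ..."
# form and keep the waits below deadlock_timeout (assumed 1000 ms),
# which log_lock_waits today never reports at all.
WAIT_RE = re.compile(
    r"process (\d+) still waiting for (\w+) on .* after ([\d.]+) ms")

def short_waits(lines, cutoff_ms=1000.0):
    """Return (pid, lockmode, wait_ms) for waits under cutoff_ms."""
    out = []
    for line in lines:
        m = WAIT_RE.search(line)
        if m and float(m.group(3)) < cutoff_ms:
            out.append((int(m.group(1)), m.group(2), float(m.group(3))))
    return out

sample = [
    "LOG: process 4201 still waiting for ShareLock on transaction 9 after 412.337 ms",
    "LOG: process 4202 still waiting for ExclusiveLock on tuple (0,1) after 1500.002 ms",
]
print(short_waits(sample))  # only the 412.337 ms wait is kept
```

Summed over millions of such sub-timeout waits, this is exactly the "rather significant" aggregate cost described above.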
I'm thinking that having the lock waits easily analyzable via SQL may
mean that most people won't need to collect and analyze their
logs for this stuff (they can just examine the lock stats view from pgAdmin).