On Tue, Dec 6, 2011 at 9:58 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> -If you have a system with a working TSC clock source (timing data is pulled
> right from the CPU), timing overhead is reasonable enough that you might
> turn it on even for things that happen frequently, such as the buffer I/O
> timing patch enables.
Even the TSC stuff looks expensive enough that you wouldn't want to pay the
full overhead all the time on a busy system, but of course we probably
wouldn't want to do that anyway. EXPLAIN ANALYZE is extremely
expensive mostly because it's timing entry and exit into every plan
node, and the way our executor works, those are very frequent
operations. But you could probably gather more coarse-grained
statistics, like separating parse, plan, and execute time for each
query, without breaking a sweat. I'm not sure about buffer I/Os - on
a big sequential scan, you might do quite a lot of those in a pretty
tight loop. That's not an argument against adding the option, though,
assuming that the default setting is off. And, certainly, I agree
with you that it's worth trying to document some of this stuff so that
people don't have to try to figure it out themselves (uggh!).
One random thought: I wonder if there's a way for us to just time
every N'th event or something like that, to keep the overhead low.
The problem is that you might not get accurate results if, say, every
2N'th event takes much longer than normal - you'll either hit all the
long ones, or miss them all. You could "fix" that by using a
pseudorandom number generator to decide whether to time each event,
but that's got its own overhead...
The Enterprise PostgreSQL Company