Have you ever done any testing to see if just setting
default_statistics_target to 500 has a negative impact on the system?
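For anyone who wants to try this, a quick way to gauge the cost on a
test box might be the following (the table name "mytable" is just a
placeholder):

    -- Raise the target for this session only, then re-sample and time it:
    SET default_statistics_target = 500;
    \timing on                    -- psql meta-command, prints elapsed time
    ANALYZE VERBOSE mytable;
    -- Compare ANALYZE time and plan quality against the previous setting.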
On Sep 22, 2006, at 4:48 PM, Cosimo Streppone wrote:
> Christian Storm wrote:
>>> At the moment, my rule of thumb is to check out the ANALYZE VERBOSE
>>> messages to see if all table pages are being scanned.
>>> INFO: "mytable": scanned xxx of yyy pages, containing ...
>>> If xxx = yyy, then I keep statistics at the current level.
>>> When xxx is way less than yyy, I increase the numbers a bit
>>> and retry.
>>> It's probably primitive, but it seems to work well.
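The yyy page total is also visible without running ANALYZE, via
pg_class (using the "mytable" name from the INFO line above):

    -- Page and row-count estimates recorded by the last VACUUM/ANALYZE:
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable';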
>> What heuristic do you use to up the statistics for such a table?
> No heuristics, just try and see.
> For tables of ~ 10k pages, I set statistics to 100/200.
> For ~ 100k pages, I set them to 500 or more.
> I don't know the exact relation.
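For completeness, that per-table setting is actually applied
per-column; something like the following, where "mycol" stands in for
whichever column's estimates are off:

    -- Raise the sample target for one column, then re-sample the table:
    ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 500;
    ANALYZE VERBOSE mytable;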
>> Once you've changed it, what metric do you use to
>> see if it helps or was effective?
> I rerun an analyze and see the results... :-)
> If you mean checking the usefulness, I can see it only
> under heavy load, when particular db queries run on the order
> of a few milliseconds.
> If I see normal queries taking longer and longer, or even
> starting to appear in the server's log (> 500 ms), then
> I know an analyze is needed, or statistics should be set higher.
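That "> 500 ms" threshold corresponds to a postgresql.conf setting
along these lines:

    # Log every statement that runs longer than 500 ms:
    log_min_duration_statement = 500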
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)