
Re: Update on high concurrency OLTP application and Postgres

From: Cosimo Streppone <cosimo(at)streppone(dot)it>
To: C Storm <christian(dot)storm(at)gmail(dot)com>
Cc: Postgresql Performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Update on high concurrency OLTP application and Postgres
Date: 2006-09-22 20:48:16
Message-ID: 45144C10.2070501@streppone.it
Lists: pgsql-performance
Christian Storm wrote:

>>At the moment, my rule of thumb is to check out the ANALYZE VERBOSE
>>messages to see if all table pages are being scanned.
>>
>>   INFO: "mytable": scanned xxx of yyy pages, containing ...
>>
>>If xxx = yyy, then I keep statistics at the current level.
>>When xxx is way less than yyy, I increase the numbers a bit
>>and retry.
>>
>>It's probably primitive, but it seems to work well.
>
> What heuristic do you use to up the statistics for such a table?

No heuristics, just try and see.
For tables of ~ 10k pages, I set statistics to 100/200.
For ~ 100k pages, I set them to 500 or more.
I don't know the exact relation.
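
For what it's worth, a rough sketch of what that looks like (the table
and column names here are just placeholders, not from a real schema):

   -- raise the per-column statistics target, then re-analyze
   ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 200;
   ANALYZE VERBOSE mytable;
   -- then check the INFO line again: "scanned xxx of yyy pages"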

> Once you've changed it, what metric do you use to
> see if it helps or was effective?

I rerun ANALYZE and look at the results... :-)
If you mean checking its usefulness, I can only see that
under heavy load, when particular db queries run in the order
of a few milliseconds.

If I see normal queries taking longer and longer, or even
showing up in the server's log (> 500 ms), then I know an
ANALYZE is needed, or that statistics should be set higher.
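
(That 500 ms cutoff would typically come from the slow-query logging
setting in postgresql.conf; the exact value is just an example:)

   # postgresql.conf -- log any statement slower than 500 ms
   log_min_duration_statement = 500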

-- 
Cosimo


