The PGSQL version is 8.2.7. (BTW, which catalog view contains the
back-end version number?)
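For what it's worth, a few equivalent ways to read it back from a
session (as far as I know there is no dedicated catalog view, but the
pg_settings system view exposes it):

    SELECT version();                -- full version string
    SHOW server_version;             -- just the number, e.g. 8.2.7
    SELECT setting FROM pg_settings
     WHERE name = 'server_version';  -- same value via a system view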
On Mon, Sep 29, 2008 at 11:37 AM, Peter Kovacs wrote:
> We have a number of automated performance tests (to test our own code)
> involving PostgreSQL. Test cases are supposed to drop and recreate
> tables each time they run.
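For illustration, the drop-and-recreate pattern under discussion looks
roughly like this (the table name and columns are made up; note that
DROP TABLE ... IF EXISTS is available from 8.2 on):

    DROP TABLE IF EXISTS perf_test_data;
    CREATE TABLE perf_test_data (
        id   integer PRIMARY KEY,
        data text
    );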
> The problem is that some of the tests show a linear performance
> degradation over time. (We have data going back three months.) We
> have established that some element(s) of our test
> environment must be the culprit for the degradation. As rebooting the
> test machine didn't revert speeds to baselines recorded three months
> ago, we have turned our attention to the database as the only element
> of the environment which is persistent across reboots. Recreating the
> entire PGSQL cluster did cause speeds to revert to baselines.
> I understand that vacuuming solves performance problems related to
> "holes" in data files created as a result of tables being updated. Do
> I understand correctly that if tables are dropped and recreated at the
> beginning of each test case, holes in data files are reclaimed, so
> there is no need for vacuuming from a performance perspective?
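My understanding is the same: DROP TABLE removes the relation's
underlying files outright, so any bloat in the dropped tables
themselves cannot survive the drop. An easy way to confirm, using the
hypothetical test table above:

    -- on-disk size of the table; should be back near zero right
    -- after each recreate
    SELECT pg_size_pretty(pg_relation_size('perf_test_data'));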
> I will double check whether the problematic test cases do indeed
> always drop their tables, but assuming they do, are there any factors
> in the database (apart from table updates) that can cause a linear
> slow-down with repetitive tasks?
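One persistent structure that repeated DROP/CREATE does churn is the
system catalogs: each cycle leaves dead rows in pg_class, pg_attribute,
pg_type, pg_depend and friends, and if memory serves autovacuum is off
by default on 8.2, so catalog bloat can grow linearly with the number
of test runs. A quick check of their on-disk size:

    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
      FROM pg_class
     WHERE relname IN ('pg_class', 'pg_attribute',
                       'pg_type', 'pg_depend')
     ORDER BY pg_relation_size(oid) DESC;

If those keep growing from run to run, a periodic VACUUM (or turning
autovacuum on) should stop the slide.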