We are running a PostgreSQL 8.4 database with two tables containing many
(> 1 million) moderately small rows. The tables have some btree indexes,
and one of the two tables also has a GIN full-text index.
We noticed that the autovacuum process tends to use a lot of memory,
pushing the postgres process to nearly 1 GB while it is running.
I looked in the documentation but didn't find the information: do you
know how to estimate the memory required by autovacuum if we increase
the number of rows? Is it linear? Logarithmic?
Also, is there a way to reduce that memory usage? Would running
autovacuum more frequently lower its memory usage?
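For reference, in 8.4 the memory a (auto)vacuum worker uses for its dead-tuple TID list is capped by maintenance_work_mem, so usage grows roughly linearly with the number of dead tuples (about 6 bytes per dead tuple) up to that cap, at which point vacuum falls back to multiple index-scan passes. A sketch of the relevant postgresql.conf settings, with hypothetical values that would need tuning to the machine's RAM:

```
# postgresql.conf -- illustrative values, not a recommendation
maintenance_work_mem = 256MB   # upper bound on the dead-tuple array
                               # each vacuum/autovacuum worker allocates
autovacuum_max_workers = 3     # each worker may use up to
                               # maintenance_work_mem independently
```

Under these assumptions, lowering maintenance_work_mem would bound the per-worker memory at the cost of extra index passes, and vacuuming more frequently would keep the dead-tuple count (and thus the array) smaller per run.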
Gaël Le Mignot - gael(at)pilotsystems(dot)net
Pilot Systems - 9, rue Desargues - 75011 Paris
Tel : +33 1 44 53 05 55 - www.pilotsystems.net
Manage your contacts and newsletters: www.cockpit-mailing.com