"Rob Bamber" wrote:
> Another thought -
> We had a similar issue recently. Our support guys dropped the
> database and then rebuilt it from a dump file. The size of the data
> directory went down from 12GB to less than 2GB. According to the
> sysadmin who did the work, Postgres is not very good at reclaiming
> disk space after large quantities of tuples are deleted over time.
And another thought,
Have you tried clustering your tables on the most frequently used
and/or most time-critical index in your joins? (Remember to run
VACUUM ANALYZE after the CLUSTER has completed.) You might see a
large performance increase. Clustering should also fix the problem
mentioned above, because the table is physically rewritten on disk.
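As a rough sketch (the table and index names here are made up; substitute your own, and note that on the 7.x releases current at the time of writing the syntax is `CLUSTER indexname ON tablename`):

```sql
-- Rewrite the orders table in the physical order of its timestamp
-- index. Because the table is written out fresh, dead-tuple bloat
-- is discarded as a side effect.
CLUSTER orders_created_at_idx ON orders;

-- Refresh the planner's statistics for the rewritten table.
VACUUM ANALYZE orders;
```

Note that CLUSTER takes an exclusive lock on the table while it runs, so schedule it for a quiet period.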
Use EXPLAIN ANALYZE to find out which is the most time-consuming
part of your query and optimise that.
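For example (a hypothetical join; your query will differ):

```sql
-- EXPLAIN ANALYZE actually executes the query and shows, for each
-- plan node, the planner's row estimate alongside the real row count
-- and timing. A large mismatch between the two usually points at
-- stale statistics or a missing index.
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > '2004-01-01';
```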
Also, have you analyzed your database recently? If you've never run
VACUUM ANALYZE, the planner's statistics have never been updated, so
the planner might not be making the best choices.
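A simple database-wide pass looks like this (run it as the database owner or a superuser so every table is covered):

```sql
-- With no table name, VACUUM ANALYZE processes every table in the
-- current database: it reclaims space held by dead tuples and
-- updates the statistics the query planner relies on.
VACUUM ANALYZE;
```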