We have a database with around 20,000 records. Most of
the time we need to update records in two tables, which
together hold around 6,000 records. During an update,
the records in one table are simply updated in place,
while in the other table some records are deleted and
new records are inserted.
We update these two tables repeatedly, and we run
"VACUUM ANALYZE" on them after every update. Checking
our log file, the first update took 8 seconds, the
second 9 seconds, the third 11 seconds, the fourth 15
seconds, the fifth 18 seconds, the sixth 22 seconds,
and so on. The time needed to update the two tables
keeps increasing.
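The cycle described above can be sketched roughly as follows. This is only an illustration, not our actual code: the table names tab_a and tab_b, the column names, and the UPDATE/DELETE/INSERT statements are all placeholders standing in for our real workload against the netman database.

```shell
#!/bin/sh
# Hypothetical sketch of one update-and-vacuum cycle, repeated six
# times with the elapsed time of each cycle appended to a log file.
# tab_a/tab_b and the SQL statements are placeholders.
for i in 1 2 3 4 5 6; do
    start=$(date +%s)
    psql -d netman -c "BEGIN;
        UPDATE tab_a SET val = val + 1;            -- rows updated in place
        DELETE FROM tab_b WHERE stale;             -- some rows deleted...
        INSERT INTO tab_b SELECT * FROM new_rows;  -- ...and new ones added
        COMMIT;"
    psql -d netman -c "VACUUM ANALYZE tab_a; VACUUM ANALYZE tab_b;"
    end=$(date +%s)
    echo "cycle $i took $((end - start)) seconds" >> update.log
done
```

Even with the VACUUM ANALYZE step in every cycle, the logged times grow as shown above.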
We checked the database file directory and found that
the database files had grown.
Then we ran "VACUUM FULL", or used a script like

su -l postgres -c "pg_dump -Fc -f bigNetManDB netman;
sleep 2; pg_restore -v --clean -d netman bigNetManDB"

to try to bring the database size back down.
We still found that even after a "VACUUM FULL" of the
database, the first update of those two tables took 10
seconds, the second 13 seconds, and so on, which means
we can no longer get back to the original 8-second
performance.
Can anyone give me any ideas for dealing with this
database-growth problem?
Any help will be greatly appreciated.
pgsql-interfaces by date