Re: Performance degradation after successive UPDATE's

From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Assaf Yaari <assafy(at)mobixell(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance degradation after successive UPDATE's
Date: 2005-12-06 20:44:33
Message-ID: 20051206204433.GA22168@wolff.to
Lists: pgsql-performance

On Tue, Dec 06, 2005 at 11:08:07 +0200,
Assaf Yaari <assafy(at)mobixell(dot)com> wrote:
> Thanks Bruno,
>
> Issuing VACUUM FULL seems not to have influence on the time.
That was just to get the table size back down to something reasonable.
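
For example (using "mytable" as a stand-in for your table's name), you can
watch the bloat by checking the page count that VACUUM/ANALYZE record in
pg_class, and compare it before and after a VACUUM FULL:

    ANALYZE mytable;
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname = 'mytable';
    VACUUM FULL mytable;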

> I've added to my script VACUUM ANALYZE every 100 UPDATE's and run the
> test again (on a different record) and the time still increases.

Vacuuming every 100 updates should put an upper bound on how slow things
get. I doubt you need to analyze every 100 updates, but that doesn't cost
much more on top of a vacuum. However, if there is another transaction open
while you are doing the updates, that will prevent the dead rows from being
cleared out, since they are still potentially visible to it. That's
something you want to rule out.
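
One quick way to check for that, assuming stats_command_string is turned on,
is to look for backends sitting "<IDLE> in transaction" while your test runs:

    SELECT procpid, usename, query_start, current_query
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction';

If anything shows up there and stays there, the dead row versions from your
updates can't be reclaimed until that transaction ends.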

> Any other ideas?

Do you have any triggers on this table? Are you updating any other tables
at the same time? In particular, any that are referenced by the problem table.
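
In psql, \d on the table should show its triggers and foreign keys. If you'd
rather query the catalogs directly (again with "mytable" as a placeholder),
something along these lines should work:

    -- triggers defined on the table (FK constraint triggers show up here too)
    SELECT tgname FROM pg_trigger
     WHERE tgrelid = 'mytable'::regclass;

    -- foreign keys on the table, showing which other tables it references
    SELECT conname, confrelid::regclass AS referenced_table
      FROM pg_constraint
     WHERE conrelid = 'mytable'::regclass AND contype = 'f';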
