Re: Performance degradation after successive UPDATE's

From: "Assaf Yaari" <assafy(at)mobixell(dot)com>
To: "Bruno Wolff III" <bruno(at)wolff(dot)to>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance degradation after successive UPDATE's
Date: 2005-12-06 09:08:07
Message-ID: A3F53DEA945DA44386457F03BA78465F9D12AC@mobiexc.mobixell.com
Lists: pgsql-performance

Thanks Bruno,

Issuing VACUUM FULL seems to have no influence on the time.
I've added a VACUUM ANALYZE to my script after every 100 UPDATEs and ran
the test again (on a different record), and the time still increases.
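For reference, the test loop now looks roughly like this (the table and
column names here are placeholders, not my actual schema):

```sql
-- Sketch of the test loop: increment one counter row repeatedly,
-- running VACUUM ANALYZE after every 100 UPDATEs.
UPDATE counters SET value = value + 1 WHERE id = 42;
-- ... the UPDATE above is repeated 100 times ...
VACUUM ANALYZE counters;
-- ... then the whole cycle repeats for the duration of the test.
```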

Any other ideas?

Thanks,
Assaf.

> -----Original Message-----
> From: Bruno Wolff III [mailto:bruno(at)wolff(dot)to]
> Sent: Monday, December 05, 2005 10:36 PM
> To: Assaf Yaari
> Cc: pgsql-performance(at)postgresql(dot)org
> Subject: Re: Performance degradation after successive UPDATE's
>
> On Mon, Dec 05, 2005 at 19:05:01 +0200,
> Assaf Yaari <assafy(at)mobixell(dot)com> wrote:
> > Hi,
> >
> > I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
> >
> > My application updates counters in the DB. I left a test running
> > overnight that repeatedly incremented the counter of a specific
> > record. After the night's run (several hundred thousand updates),
> > I found that the time spent on each UPDATE had increased to more
> > than 1.5 seconds (at the beginning it was less than 10ms)! Issuing
> > VACUUM ANALYZE and even rebooting didn't seem to solve the problem.
>
> You need to run vacuum more often to get rid of the dead rows (an
> update is essentially an insert + delete). Once there are too many, a
> plain vacuum won't be able to clean them up without raising your FSM
> settings. By now the table is probably badly bloated, and you will
> want to run VACUUM FULL on it.
>
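As a rough illustration of Bruno's point (the table name below is a
placeholder), the dead-row buildup can be inspected and cleaned up like
this on 8.0:

```sql
-- VACUUM VERBOSE reports how many dead row versions were removed and
-- warns if the free space map is too small to track all the free pages.
-- FSM size is set by max_fsm_pages / max_fsm_relations in postgresql.conf.
VACUUM VERBOSE counters;

-- If the table is already badly bloated, reclaim the space outright.
-- Note: VACUUM FULL takes an exclusive lock on the table while it runs.
VACUUM FULL counters;
```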
