Re: Updates on large tables are extremely slow

From: Jacques Caron <jc(at)directinfos(dot)com>
To: Yves Vindevogel <yves(dot)vindevogel(at)implements(dot)be>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Updates on large tables are extremely slow
Date: 2005-06-12 17:57:54
Message-ID: 6.2.0.14.0.20050612195157.039cd170@pop.interactivemediafactory.net
Lists: pgsql-performance

Hi,

At 19:40 12/06/2005, Yves Vindevogel wrote:
>Hi,
>
>I'm trying to update a table that has about 600,000 records.
>The update query is very simple: update mytable set pagesdesc = -pages;
>
>(I use pagesdesc to avoid problems with sort that have one field in
>ascending order and one in descending order. That was a problem I had a
>week ago)

An index on (-pages) would probably do exactly what you want without having
to add another column.
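
As a sketch of that suggestion (table and column names taken from the original query; the extra ordering column `somecol` is hypothetical), an expression index needs double parentheses around the expression:

```sql
-- Index on the expression (-pages); the inner parentheses are required
-- by CREATE INDEX syntax for expressions.
CREATE INDEX mytable_pages_desc_idx ON mytable ((-pages));

-- A mixed-direction sort can then order by (-pages) ascending,
-- which is equivalent to pages descending, and may use the index:
SELECT * FROM mytable ORDER BY somecol ASC, (-pages) ASC;
```

This avoids storing and maintaining a redundant pagesdesc column entirely.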

>The query takes about half an hour to an hour to execute.

Depending on the total size of the table and associated indexes and on your
exact setup (especially your hardware), this could be quite normal: the
executor goes through all rows in the table, and for each, creates a copy
with the additional column, updates indexes, and logs to WAL. You might
want to look into moving your WAL files (pg_xlog) to a separate disk,
increasing WAL and checkpoint buffers, adding more RAM, adding more disks...
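
As a rough sketch, the WAL and checkpoint tuning mentioned above maps onto postgresql.conf parameters like these (the values are illustrative only, not recommendations; sensible settings depend on your hardware and workload):

```
# postgresql.conf -- illustrative values, tune for your system
wal_buffers = 64             # shared memory WAL buffers (in 8 kB pages)
checkpoint_segments = 16     # more segments = less frequent checkpoints
checkpoint_timeout = 600     # max seconds between forced checkpoints
```

Moving pg_xlog to a separate disk is typically done by stopping the server, moving the directory, and leaving a symlink in its place.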

But as I said, you might not even need to do that, just use an index on an
expression...

Jacques.
