
Re: slow full table update

From: tv(at)fuzzy(dot)cz
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: slow full table update
Date: 2008-11-12 16:47:59
Message-ID:
Lists: pgsql-performance

So the table occupies about 50 MB, i.e. each row takes about 1 kB, right?
Updating 1000 rows should mean about 1 MB of data to be updated.
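
Just to show where those numbers come from - you can get them straight from
pg_class (assuming the default 8 kB page size; 'table' is of course a
placeholder for your table name):

  SELECT relpages * 8192 AS table_bytes,
         (relpages * 8192) / reltuples AS avg_row_bytes
    FROM pg_class
   WHERE relname = 'table';

  -- 6213 pages * 8 kB = ~50 MB, 50 MB / 54743 rows = ~0.9 kB per row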

There might be a problem with the execution plan of the updates - I guess the
100-row update uses an index scan while the 1000-row update might use a
sequential scan instead.
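
You can check that with EXPLAIN ANALYZE - a quick sketch (here 'table',
'column' and the id condition are just placeholders, use whatever you
actually update by):

  BEGIN;
  EXPLAIN ANALYZE UPDATE table SET column = 0 WHERE id < 1000;
  ROLLBACK;
  -- note: EXPLAIN ANALYZE really executes the UPDATE, hence the ROLLBACK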

Anyway, the table is not too big, so I wouldn't expect such an I/O bottleneck
on a properly tuned system. Have you checked the postgresql.conf settings?
What are the values for the following (there's a sample snippet right after
the list)?

1) shared_buffers - 8kB pages used as a buffer (try to increase this a
little, for example to 1000, i.e. 8MB, or even more)

2) checkpoint_segments - number of 16MB checkpoint segments, aka
transaction logs; increasing this usually improves write / update performance
a lot, so try to raise the default value (3) to at least 8

3) wal_buffers - 8kB pages used to store WAL (minimal effect usually, but
try to increase it to 16 - 64, just to be sure)
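
As a starting point that might look roughly like this in postgresql.conf
(just a sketch for a small 8.1 box, not values tuned to your hardware;
note that shared_buffers and wal_buffers only take effect after a restart):

  shared_buffers = 1000        # 8kB pages, i.e. 8MB - raise further if RAM allows
  checkpoint_segments = 8      # 16MB WAL segments, default is 3
  wal_buffers = 16             # 8kB WAL pages, usually a minor effect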

There is a nicely annotated config, with recommendations on how to set the
values based on usage etc. See this:


> Hi,
> thank you for your reply.
> Here is some additional information:
> the problem occurs on every table, with both small and large rows.
> autovacuum is running.
> relpages	reltuples
> 6213              54743
> tables are almost write-only
> Munin graphs show that the problem is an I/O bottleneck.
> I found out that
> Updating 100 rows takes 0.3s
> but updating 1000 rows takes 50s
> Is this better information?
> Thanks for any help.
> best regards
> Marek Fiala
> ______________________________________________________________
>> From: tv(at)fuzzy(dot)cz
>> To: firerox(at)centrum(dot)cz
>> CC: pgsql-performance(at)postgresql(dot)org
>> Date: 10.11.2008 17:42
>> Subject: Re: [PERFORM] slow full table update
>>Sorry, but you have to provide much more information about the table. The
>>information you've provided is really not sufficient - the rows might be
>>large or small. I guess it's the second option, with a lot of dead rows.
>>Try this:
>>ANALYZE table;
>>SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';
>>Anyway, is the autovacuum running? What are the parameters? Try to execute
>>VACUUM table;
>>and then run the two commands above. That might 'clean' the table and
>>improve the update performance. Don't forget each such UPDATE will
>>actually create a copy of all the modified rows (that's how PostgreSQL
>>works), so if you don't run VACUUM periodically or the autovacuum daemon, then
>>the table will bloat (occupy much more disk space than it should).
>>If it does not help, try to determine whether the UPDATE is CPU or disk bound.
>>I'd guess there's an I/O bottleneck (due to the bloat).
>>> Hi,
>>> I have a table with about 60,000 rows and
>>> when I run query as:
>>>  Update table SET column=0;
>>> after 10 minutes I had to stop the query - it was still running :(
>>> I have Postgres 8.1 with all default settings in postgresql.conf
>>> Where is the problem?
>>> Thank you for any tips.
>>> best regards.
>>> Marek Fiala
