Re: slow full table update

From: <firerox(at)centrum(dot)cz>
To: <tv(at)fuzzy(dot)cz>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: slow full table update
Date: 2008-11-12 17:25:59
Message-ID: 200811121825.26487@centrum.cz
Lists: pgsql-performance

Hi,

I've changed the settings, but with no effect on speed.

I tried EXPLAIN ANALYZE on the query; here is the result
for 10,000 rows: update songs set views = 0 where sid > 20000 and sid < 30000

Bitmap Heap Scan on songs  (cost=151.59..6814.29 rows=8931 width=526) (actual time=4.848..167.855 rows=8945 loops=1)
  Recheck Cond: ((sid > 20000) AND (sid < 30000))
  ->  Bitmap Index Scan on pk_songs2  (cost=0.00..151.59 rows=8931 width=0) (actual time=4.071..4.071 rows=9579 loops=1)
        Index Cond: ((sid > 20000) AND (sid < 30000))

Is there a way to run this query in a single pass, with no Recheck Cond?
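
(For example, would forcing a plain index scan help? I guess I could try
something like this, just as an experiment:

  SET enable_bitmapscan = off;
  -- note: EXPLAIN ANALYZE actually executes the UPDATE
  EXPLAIN ANALYZE update songs set views = 0 where sid > 20000 and sid < 30000;
  SET enable_bitmapscan = on;

but I'm not sure whether that is the right approach.)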

Thank you.

best regards
Marek Fiala

______________________________________________________________
> From: tv(at)fuzzy(dot)cz
> To: pgsql-performance(at)postgresql(dot)org
> Date: 12.11.2008 17:48
> Subject: Re: [PERFORM] slow full table update
>
>Hi,
>
>so the table occupies about 50 MB, i.e. each row has about 1 kB, right?
>Updating 1000 rows should mean about 1 MB of data to be updated.
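>
>(Rough arithmetic from the relpages/reltuples numbers you posted earlier:
>
>   6213 pages * 8 kB/page  = ~48.5 MB on disk
>   48.5 MB / 54,743 rows   = ~0.9 kB per row
>   1,000 rows * ~1 kB/row  = ~1 MB of row data rewritten
>
>so the data volume alone shouldn't take anywhere near 50 seconds.)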
>
>There might be a problem with the execution plan of the updates - I guess
>the 100-row update uses an index scan and the 1000-row update might use a
>seq scan.
>
>Anyway the table is not too big, so I wouldn't expect such an I/O
>bottleneck on a properly tuned system. Have you checked the
>postgresql.conf settings? What are the values for
>What are the values for
>
>1) shared_buffers - 8kB pages used as a buffer (try to increase this a
>little, for example to 1000, i.e. 8MB, or even more)
>
>2) checkpoint_segments - number of 16MB checkpoint segments, aka
>transaction logs; increasing this usually improves the write / update
>performance a lot, so try to increase the default value (3) to at least 8
>
>3) wal_buffers - 8kB pages used to store WAL (minimal effect usually, but
>try to increase it to 16 - 64, just to be sure)
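>
>For example, in postgresql.conf that might look roughly like this (the
>exact numbers depend on your RAM, so treat them only as starting points):
>
>  shared_buffers = 1000        # 8kB pages, i.e. about 8 MB of buffer cache
>  checkpoint_segments = 8      # 16MB WAL segments kept between checkpoints
>  wal_buffers = 32             # 8kB pages for WAL, anything in the 16-64 range
>
>(shared_buffers and wal_buffers need a server restart to take effect)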
>
>There is a nicely annotated config, with recommendations on how to set the
>values based on usage etc. See this:
>
>http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
>http://www.powerpostgresql.com/PerfList
>
>regards
>Tomas
>
>> Hi,
>>
>> thank you for your reply.
>>
>> Here is some additional information:
>>
>> the problem occurs on every table, with both small and large rows.
>> autovacuum is running.
>>
>> relpages  reltuples
>>     6213      54743
>>
>> tables are almost write-only
>> The Munin graphs show that the problem is an I/O bottleneck.
>>
>> I found out that
>> updating 100 rows takes 0.3 s,
>> but updating 1000 rows takes 50 s.
>>
>> Is this better information?
>>
>> Thanks for any help.
>>
>> best regards
>> Marek Fiala
>> ______________________________________________________________
>>> From: tv(at)fuzzy(dot)cz
>>> To: firerox(at)centrum(dot)cz
>>> CC: pgsql-performance(at)postgresql(dot)org
>>> Date: 10.11.2008 17:42
>>> Subject: Re: [PERFORM] slow full table update
>>>
>>>Sorry, but you have to provide much more information about the table. The
>>>information you've provided is really not sufficient - the rows might be
>>>large or small. I guess it's the latter, with lots of dead rows.
>>>
>>>Try this:
>>>
>>>ANALYZE table;
>>>SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';
>>>
>>>Anyway, is the autovacuum running? What are the parameters? Try to
>>> execute
>>>
>>>VACUUM table;
>>>
>>>and then run the two commands above. That might 'clean' the table and
>>>improve the update performance. Don't forget each such UPDATE will
>>>actually create a copy of all the modified rows (that's how PostgreSQL
>>>works), so if you don't run VACUUM periodically or the autovacuum daemon, then
>>>the table will bloat (occupy much more disk space than it should).
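>>>
>>>Just to illustrate the mechanism (hypothetical numbers, substitute your
>>>real table / column names):
>>>
>>>  -- before: say relpages = 6000 for ~60,000 live rows
>>>  UPDATE table SET column = 0;   -- creates ~60,000 new row versions
>>>  ANALYZE table;                 -- refresh relpages / reltuples statistics
>>>  -- relpages is now roughly double, because the old row versions
>>>  -- stay in the table as dead rows until VACUUM removes them
>>>  SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';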
>>>
>>>If it does not help, try to determine if the UPDATE is CPU or disk bound.
>>>I'd guess there are problems with I/O bottleneck (due to the bloating).
>>>
>>>regards
>>>Tomas
>>>
>>>> Hi,
>>>>
>>>> I have a table with about 60,000 rows, and
>>>> when I run a query such as:
>>>> Update table SET column=0;
>>>> after 10 minutes I had to stop the query - it was still running :(
>>>>
>>>> I'm running Postgres 8.1 with all default settings in postgresql.conf
>>>>
>>>> Where is the problem?
>>>>
>>>> Thank you for any tips.
>>>>
>>>> best regards.
>>>> Marek Fiala
