Re: pessimal trivial-update performance

From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pessimal trivial-update performance
Date: 2010-07-05 09:56:19
Message-ID: 4C31AC43.80409@krogh.cc
Lists: pgsql-hackers
On 2010-07-04 06:11, Tom Lane wrote:
> Robert Haas<robertmhaas(at)gmail(dot)com>  writes:
>    
>> CREATE OR REPLACE FUNCTION update_tab() RETURNS void AS $$
>> BEGIN
>> 	INSERT INTO tab VALUES (0);
>> 	FOR i IN 1..100000 LOOP
>> 		UPDATE tab SET x = x + 1;
>> 	END LOOP;
>> END
>> $$ LANGUAGE plpgsql;
>>      
> I believe that none of the dead row versions can be vacuumed during this
> test.  So yes, it sucks, but is it representative of real-world cases?
>
>    
The problem can be stated more generally as "tuples receiving multiple
updates within the same transaction"?

I think that whenever PostgreSQL is used with an ORM, a certain
amount of multiple updates takes place. I have actually been
reworking the client side to get around multiple updates, since they
popped up in one of my profiling runs. Although the time I optimized
away turned out to be both "round-trip time" and "update time",
having the database transparently eliminate half of it might have
been enough to move my bigger problem elsewhere..
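The client-side rework described above could look something like the
following. This is a hypothetical Python sketch (not code from the
thread, and not any particular ORM's API): a small "unit of work" that
buffers updates per row and flushes only the final state, so N
in-transaction updates to the same tuple collapse into a single UPDATE
statement. Note this only works when the final column values are known
client-side; relative updates like `x = x + 1` would first have to be
folded into one absolute value, as done here.

```python
# Hypothetical unit-of-work sketch: coalesce repeated updates to the
# same row so only one UPDATE per row reaches the database at flush.

class UnitOfWork:
    def __init__(self):
        self.pending = {}      # row key -> latest column values
        self.statements = []   # SQL that would be sent on flush

    def update(self, key, **values):
        # Later updates to the same key overwrite earlier ones,
        # so N logical updates collapse into one physical one.
        self.pending.setdefault(key, {}).update(values)

    def flush(self):
        for key, values in self.pending.items():
            cols = ", ".join(f"{c} = {v!r}" for c, v in values.items())
            self.statements.append(
                f"UPDATE tab SET {cols} WHERE id = {key}")
        self.pending.clear()
        return self.statements

uow = UnitOfWork()
for i in range(1, 100001):
    uow.update(0, x=i)   # 100000 logical updates to the same row
stmts = uow.flush()
print(len(stmts))        # a single UPDATE is emitted
```

Doing this in the application avoids creating the 100000 dead row
versions that the quoted test case produces, at the cost of the
bookkeeping shown here.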

To sum up: yes, I think it is indeed a real-world case.

Jesper
