On Sun, Jul 4, 2010 at 12:11 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> CREATE OR REPLACE FUNCTION update_tab() RETURNS void AS $$
>> BEGIN
>> INSERT INTO tab VALUES (0);
>> FOR i IN 1..100000 LOOP
>> UPDATE tab SET x = x + 1;
>> END LOOP;
>> END
>> $$ LANGUAGE plpgsql;
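[A minimal sketch of how the bloat above can be observed; the table definition is an assumption, since the thread does not show it:

```sql
-- Assumed setup matching the function under discussion
CREATE TABLE tab (x int);

SELECT update_tab();

-- Report the table's size in pages (8 kB blocks by default);
-- the thread reports growth to 802 pages after one call.
SELECT pg_relation_size('tab') / current_setting('block_size')::int AS pages;
```
]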
> I believe that none of the dead row versions can be vacuumed during this
> test.
Yep, you seem to be right. The table grows to 802 pages. But why is
it that we can't vacuum them as we go along?
> So yes, it sucks, but is it representative of real-world cases?
Hard to say, but I think it probably is to some degree. I stumbled on
it more or less by accident, but it wouldn't surprise me to find out
that there are people doing such things in real applications. It's
not uncommon to want to store an updatable counter somewhere.
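[A hypothetical sketch of the counter pattern being referred to; table and column names are illustrative, not from the thread. Each UPDATE leaves behind a dead row version that must later be reclaimed, which is why a tight update loop inside one transaction bloats the table:

```sql
-- Illustrative single-row counter: every increment creates a new
-- row version and leaves the old one dead until VACUUM (or HOT
-- pruning) can remove it.
CREATE TABLE counter (id int PRIMARY KEY, value bigint NOT NULL);
INSERT INTO counter VALUES (1, 0);

UPDATE counter SET value = value + 1 WHERE id = 1;
```
]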