Re: Commands per transaction

From: Rod Taylor <pg(at)rbt(dot)ca>
To: Tino Wildenhain <tino(at)wildenhain(dot)de>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Commands per transaction
Date: 2006-01-21 21:44:51
Message-ID: 1137879891.53224.49.camel@home
Lists: pgsql-hackers

On Sat, 2006-01-21 at 22:30 +0100, Tino Wildenhain wrote:
> Rod Taylor schrieb:
> > On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
> >
> >>Rod Taylor wrote:
> >>
> >>>Is there any way of bumping this limit or am I stuck breaking up the
> >>>transaction?
> >>
> >>Wow, I never heard of anyone reaching the limit :-( Sorry, you are
> >>stuck (short of changing CommandId to 64 bits, which would bloat your
> >>tables considerably ...)

> As a quick fix (as is quickly fixed, quick running ;)
> You could load your update data to a temp table via
> COPY and then update a large table in one sweep.
> I guess you don't have 2^31 tables to update? ;)

I wouldn't have written a plpgsql function if I were doing the same thing
with every tuple. Running multiple plain SQL statements across the 40GB
heap would take much longer to complete.
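(For anyone following along: the ceiling comes from CommandId being a
32-bit counter, bumped once for every SQL statement executed within a
single transaction. A per-tuple loop like the hypothetical one below,
where the table, column, and transform names are invented stand-ins
rather than the actual function, exhausts it once the loop body has run
on the order of 2^32 times:

    CREATE OR REPLACE FUNCTION rewrite_all() RETURNS void AS $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT pk FROM big_table LOOP
            -- each UPDATE consumes one CommandId, and the entire
            -- function call runs as a single transaction
            UPDATE big_table SET val = my_transform(val)
             WHERE pk = r.pk;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

My real function does different work depending on the tuple, which is
why it can't be collapsed into a handful of set-based statements.)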

I've resorted to a client-side script that COPYs the data out of the
various partitions in the main structure, applies the logic, then COPYs
it back in to a new partition. Once that completes, the original
partition is dropped so the datasets swap.
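Per partition, the flow looks roughly like this from psql (the
partition names are invented for illustration; the real per-tuple logic
runs in the client script between the two COPYs):

    -- dump one partition of the main structure to the client
    \copy part_2005q4 TO 'part_2005q4.out'

    -- ... client-side script rewrites part_2005q4.out here ...

    -- load the transformed rows into a fresh partition
    CREATE TABLE part_2005q4_new (LIKE part_2005q4);
    \copy part_2005q4_new FROM 'part_2005q4.fixed'

    -- swap the datasets: drop the original, rename the replacement
    BEGIN;
    DROP TABLE part_2005q4;
    ALTER TABLE part_2005q4_new RENAME TO part_2005q4;
    COMMIT;

Since each COPY and the final DDL swap run as their own short
transactions, nothing comes anywhere near the CommandId ceiling.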

This is somewhat slower than my original method of doing it all on the
server side, but not by much, since the reduced CPU load seems to
offset the increased network IO.

--
