From: wespvp(at)syntegra(dot)com
To: Jan Wieck <JanWieck(at)yahoo(dot)com>, Eric Ridge <ebr(at)tcdi(dot)com>
Cc: Pgsql-General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Cursors and Transactions, why?
Date: 2004-04-06 17:33:20
Message-ID: BC985410.C2AC%wespvp@syntegra.com
Lists: pgsql-general
On 4/6/04 10:54 AM, "Jan Wieck" <JanWieck(at)yahoo(dot)com> wrote:
>> Cursors seem as if they have some nice performance benefits (esp. if
>> you're not using all rows found), but their usefulness drops
>> considerably since you must leave a transaction open.
>
> And now you know why they are so good if you don't use all rows. This
> benefit I think goes away if you use Joe Conway's suggestion of WITH HOLD.
I tried using WITH HOLD in the following case (using an ecpg C program):

    foreach row in table A
        update table B with value from table A
        commit once every 10,000 updates
    forend
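The loop above, sketched in SQL as it might look from psql (cursor, table, and column names here are illustrative assumptions, not from the original program):

```sql
BEGIN;
-- WITH HOLD keeps the cursor usable after COMMIT; without it,
-- the first COMMIT closes the cursor and later FETCHes fail.
DECLARE cur_a CURSOR WITH HOLD FOR
    SELECT a_id, a_val FROM table_a;
COMMIT;

-- Then, repeatedly:
FETCH NEXT FROM cur_a;
UPDATE table_b SET b_val = 42 WHERE a_id = 1;  -- values taken from the fetched row
-- ...COMMIT once every 10,000 updates, then finally:
CLOSE cur_a;
```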
I created a cursor on table A. Without WITH HOLD, I got an error on the next
table A fetch, as expected, because the COMMIT closed the cursor. I added
'WITH HOLD' to the cursor. On the first COMMIT, the application hung. I
assume the COMMIT would have completed after some period of time, but I
didn't wait that long.
There are 20 million rows in table A and 60 million in table B (one to many
relationship).
Is this hang on COMMIT when using WITH HOLD to be expected? Is there a way
around it? I don't think it's reasonable to put the entire 60 million updates
in a single transaction. The kludge solution I implemented was to write out
all the data I needed from table A to a file, then read that file and update
table B.
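That workaround could also be sketched in SQL with COPY (the file path and column names are assumptions for illustration):

```sql
-- Dump the needed columns from table A to a flat file in one pass:
COPY table_a (a_id, a_val) TO '/tmp/table_a.dat';
-- A separate program then reads the file and issues the UPDATEs
-- against table B, committing in batches.
```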
Wes