Re: PREPARE TRANSACTION and webapps

From: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: PREPARE TRANSACTION and webapps
Date: 2005-11-11 09:45:28
Message-ID: 5.2.1.1.1.20051111171703.0280c950@localhost
Lists: pgsql-general

At 04:11 PM 11/10/2005 -0500, Tom Lane wrote:

>Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my> writes:
> > Is it OK to use PREPARE TRANSACTION and COMMIT PREPARED in order to have
> > transactions that last longer than just a single web request?
>
> > Previously it was usually a bad idea to keep database connections alive
> > just to keep a transaction pending.
>
>A prepared transaction eats just about the same resources (other than an
>active connection) as a live one. In particular it still holds its
>locks, which makes leaving it around for a long time just as evil as
>simply sitting on it in an un-prepared state.

Assuming the transactions don't explicitly take any locks (no LOCK TABLE or
SELECT FOR UPDATE; just plain SELECTs, INSERTs and normal UPDATEs), would it
be possible to have, say, 10,000 pending prepared transactions? What would
the main limiters be?
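
For concreteness, the flow I have in mind is roughly the one below (the gid
'web-req-42' and the orders table are just made-up examples, and presumably
max_prepared_transactions would have to be raised to allow that many to be
pending at once):

  BEGIN;
  UPDATE orders SET status = 'reserved' WHERE order_id = 42;
  PREPARE TRANSACTION 'web-req-42';   -- state is persisted; the connection
                                      -- can go back to the pool

  -- later, possibly from a different connection:
  COMMIT PREPARED 'web-req-42';       -- or ROLLBACK PREPARED 'web-req-42'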

It would be very nice if that sort of thing were viable. Previously, if you
wanted to do transactional stuff with webapps, you had to simulate it at the
application layer (or leave db connections open[1]). Doing transaction stuff
at the application level seems rather MySQL-ish (OK, MySQL 3-ish ;) ):
having tables with transaction-id columns, a separate transaction table, and
so on.
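
By that I mean something roughly like this (purely illustrative; the table
and column names are invented):

  CREATE TABLE app_transactions (
      txn_id  serial PRIMARY KEY,
      status  text NOT NULL DEFAULT 'pending'  -- 'pending', 'committed', 'aborted'
  );

  CREATE TABLE orders (
      order_id  serial PRIMARY KEY,
      txn_id    integer NOT NULL REFERENCES app_transactions,
      item      text NOT NULL
  );

  -- The application only treats rows as "real" once their owning
  -- pseudo-transaction has been marked committed:
  SELECT o.*
    FROM orders o
    JOIN app_transactions t USING (txn_id)
   WHERE t.status = 'committed';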

Leaving transactions pending will affect vacuuming, but perhaps the web
transaction stuff could just be put in a separate database so it doesn't
hold back vacuuming for the normal, short-lived transactions. If you do such
stuff at the application layer, you still have to keep those rows around
anyway.
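
At least it should be easy to keep an eye on how old the oldest pending
transaction is, with something along these lines against the
pg_prepared_xacts view:

  SELECT gid, now() - prepared AS age
    FROM pg_prepared_xacts
   ORDER BY prepared
   LIMIT 1;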

Last but not least, is this a silly thing to do? Are people already doing
such stuff on other databases, or do they do such things in other ways for
good reasons (and if so, what are they)?

Best regards,
Link.

[1] Which I'd consider viable only in a controlled environment: an internal
web app for internal users.

Hmm. I wonder if it would be possible to simulate 20K concurrent database
connections using many db proxies (e.g. pgpool) and prepared transactions
(prepare all the transactions, but only actively process a manageable number
at a time).
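
As a sketch of what I mean by "a manageable number at a time" (the LIMIT of
100 is an arbitrary batch size):

  -- pick a bounded batch of the oldest pending prepared transactions
  SELECT gid
    FROM pg_prepared_xacts
   ORDER BY prepared
   LIMIT 100;

  -- then, for each gid returned, finish it off:
  -- COMMIT PREPARED '<gid>';   (or ROLLBACK PREPARED '<gid>')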
