> > That is what transactions are for. If any errors occur, then the
> > transaction is aborted. You are supposed to use transactions when you
> > want either everything to occur (the whole transaction), or nothing,
> > if an error occurs.
> Yes. There are certainly times when a transaction needs to be
> ABORTed. However, there are many reasons why the database should not
> abort a transaction if it does not need to.
I disagree. You shouldn't be using transactions in the first place if you
don't want the sequence to abort when an error occurs.
> There is obviously no
> reason why a transaction needs to be aborted for syntax errors.
I beg to differ. For a start, invalid SQL is a GREAT way to irreversibly
corrupt your data. And you should test your SQL to make sure it doesn't
produce syntax errors before you get as far as putting it into a
transaction.
Here's an example:
A bank is transferring money from one account to another. Say the money
leaves the first account (first update query), and then an error occurs
when adding the money to the second account (second update query). If
you have debited the first account and committed the change despite the
second error, the money would have left the first account, but it wouldn't
have appeared in the second account. This would be irreversible, and would
take lots of man-hours of following the paper trail (if there is one) to
find where things went wrong, if it could be found at all.
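The all-or-nothing transfer described above can be sketched as follows. This is a minimal illustration using Python's sqlite3 module as a stand-in for PostgreSQL; the table, column names, and the CHECK constraint (which plays the role of "an error on the second update") are all invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Invented schema; the CHECK stands in for "an error on the second update".
conn.execute(
    "CREATE TABLE accounts ("
    " id INTEGER PRIMARY KEY,"
    " balance INTEGER NOT NULL CHECK (balance BETWEEN 0 AND 100))"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(src, dst, amount):
    """Debit src, credit dst; commit both or neither."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()
    except sqlite3.Error:
        conn.rollback()  # the debit is undone too: no money disappears

transfer(1, 2, 60)  # the credit fails the CHECK, so the debit is rolled back
balances = [b for (b,) in conn.execute(
    "SELECT balance FROM accounts ORDER BY id")]
```

Because the rollback undoes the debit along with the failed credit, both balances are left exactly as they were.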
That is the whole point of transactions - they are used for an
all-or-nothing set of operations.
> There is obviously no reason why a transaction needs to be aborted for,
> say, trying to insert a duplicate primary key.
It is not obvious at all. In fact, I can see why it obviously shouldn't be
done. Say you want a phone installed. Your phone number should be unique,
and it is conceivable that it could be the primary key for the database that
stores phone numbers. The engineer types in the wrong number by accident.
If that duplicate-key error did not stop anything, the transaction would
succeed, and you would end up with two phones with the same number. A BAD
thing.
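In practice the primary key constraint itself refuses the duplicate row; the question being argued in this thread is whether that error should also abort the surrounding transaction. A small illustration of the constraint side (sqlite3 as a stand-in, with invented table and values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phones (number TEXT PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO phones VALUES (?, ?)", ("555-0100", "existing line"))

try:
    # The engineer's typo: a number that is already taken.
    conn.execute("INSERT INTO phones VALUES (?, ?)", ("555-0100", "new line"))
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True  # the key constraint refuses the duplicate row

count = conn.execute("SELECT count(*) FROM phones").fetchone()[0]
```

The duplicate never makes it into the table; what differs between databases is whether the rest of the transaction may continue after this error.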
I could sit here and list examples endlessly, but let's not clog up the
list with this too much.
> The -insert- can fail, report
> it as such, and the application can determine if a rollback is
> necessary. If you don't believe me, here are two fully SQL-92 compliant
> databases, Oracle and Interbase, which do not exhibit this behavior:
So, what would you like the criteria to be for aborting or proceeding with
the transaction?
> > If you don't like this behaviour, then use auto-commit, and make every
> > separate statement a transaction in itself. That way if any of the
> > statements fails, the next one won't be aborted. This, however,
> > depending on the error you get, could cause massive irreversible data
> > corruption. Then again, if this is a risk, you should be using
> > transactions which abort the whole block on any error.
> Auto-commit is not the same thing though. That would make each
> statement a transaction. I don't want that, I want the ability of
> grouping a set of statements and committing them or rolling them back as
> a whole. I do not, however, want the transaction aborted by the server
> when it does not need to be. Clearly in the above case, neither
> interbase nor oracle decided that the transaction had to be aborted.
> This has to be an option no?
Hmm... Fair point. There might be an option for this. I don't know, as I
have never used transactions this way (or tried to, for that matter). I
agree that it could be useful to have some sort of an "evaluation" stage
before committing the transaction, where the application would see what
(if any) errors have occurred in the transaction stage, and upon that
decide whether it really wants to commit or roll back.
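The "evaluation" stage suggested above could look roughly like this: run the whole batch, collect any per-statement errors, and only then let the application choose between commit and rollback. A sketch with sqlite3 (which, unlike PostgreSQL 7.x, does let the transaction continue after a failed statement); the table and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT PRIMARY KEY, qty INTEGER)")
conn.commit()

batch = [("A-1", 5), ("B-2", 3), ("A-1", 7)]  # third row duplicates a key
failed = []
for sku, qty in batch:
    try:
        conn.execute("INSERT INTO items VALUES (?, ?)", (sku, qty))
    except sqlite3.IntegrityError:
        failed.append(sku)  # the statement fails; the transaction continues

# "Evaluation" stage: the application inspects the errors and decides.
if failed:
    conn.rollback()  # treat the whole batch as all-or-nothing
else:
    conn.commit()

remaining = conn.execute("SELECT count(*) FROM items").fetchone()[0]
```

Here the application, not the server, makes the final call; since one insert failed, it rolls everything back and the table is left empty.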
> > Several ways. You can set up a shared network area, sort out a unique
> > file-naming system (which shouldn't be too hard), and send some sort of
> > "URL" as a pointer to the file.
> Ahhh. Unfortunately that is an unacceptable solution :(. So that means
> there is no large binary storage available in postgre for me.
I am not sure if there is another way. There might be, but I am not aware
of it at the moment.
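The pointer-to-a-file workaround described above can be sketched like this: the bytes live in a shared directory, and the database stores only the path. A minimal illustration (sqlite3 as a stand-in for PostgreSQL; the table, helper names, and use of a temp directory for the "shared area" are all invented):

```python
import os
import sqlite3
import tempfile
import uuid

# Stand-in for the shared network area where the actual bytes live.
shared_dir = tempfile.mkdtemp()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, pointer TEXT)")

def store(data: bytes) -> int:
    """Write the bytes to the shared area; keep only a pointer in the DB."""
    name = uuid.uuid4().hex  # a unique file name, as suggested above
    path = os.path.join(shared_dir, name)
    with open(path, "wb") as f:
        f.write(data)
    cur = conn.execute("INSERT INTO documents (pointer) VALUES (?)", (path,))
    conn.commit()
    return cur.lastrowid

def fetch(doc_id: int) -> bytes:
    """Follow the stored pointer back to the bytes."""
    (path,) = conn.execute(
        "SELECT pointer FROM documents WHERE id = ?", (doc_id,)).fetchone()
    with open(path, "rb") as f:
        return f.read()

doc = store(b"\x00\x01binary payload")
```

The obvious drawback, as noted in the reply, is that the data no longer travels over the database connection, and the files fall outside transactional control.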
> > Alternatively, wait for v7.1 (development tree available), which will
> > support big record sizes (unlimited, AFAIK). Depending on what you are
> > trying to do, BLOBs may or may not be the ideal thing, but sometimes
> > they are the only way to store large chunks of data. 7.1 will hopefully
> > sort that out, as I have bounced my head off the record size limit a
> > few times myself.
> Hmm, I really hope the 7.1 implementation of blobs is a true blob, and
> not just a really long varchar or something. I need to store
> arbitrarily large binary data, and be able to retrieve it over the
> database connection. I'm really surprised there isn't a facility for
> this already... Our application couldn't run on postgre without it!
That's fair enough. I myself got the feeling that BLOBs in 7.0 were a quick
kludge rather than a permanent solution (not criticising anyone here!).
But then again - what is the difference between an encoded varchar and a
big binary type? If you get the data you want in a data object, what
difference does it make how it happens? It's all just numbers to a computer
(or am I wrong here?)
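That "it's all just numbers" point can be made concrete: what matters to the application is that arbitrary bytes survive the round trip through the database connection, whatever the column type is called underneath. A minimal round-trip sketch (sqlite3 standing in for PostgreSQL; invented table and file name):

```python
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")

payload = os.urandom(1 << 16)  # 64 KiB of arbitrary binary data
conn.execute("INSERT INTO files VALUES (?, ?)", ("blob.bin", payload))
conn.commit()

(restored,) = conn.execute(
    "SELECT data FROM files WHERE name = ?", ("blob.bin",)).fetchone()
```

The practical differences between an encoded varchar and a true binary type are size limits and escaping/encoding overhead, not what the bytes mean; if the round trip is byte-exact, the application is none the wiser.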
pgsql-general by date