Sorry for bothering you with my question for the second time, but I have
not received an answer within two days and the problem appears fundamental,
at least to me.
I have a C application that deals with meteorological data such as
temperature, precipitation, wind speed, wind direction, ...
And I mean loads of data: several thousand sets at a time.
From time to time it happens that the transmitters have delivered wrong
data, so they send the sets again, to be taken as corrections.
The idea is to create a unique index on the timestamp, the location id,
and the measurement id, and then, on receiving a duplicate key error,
move on to an update command on that specific row.
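To illustrate, here is a rough sketch of the intended flow using libpq;
the table, columns, and function are only placeholders (not the real
application code), and the check on the error text is just a crude stand-in:

    /* Rough sketch of the intended flow.  The table
     * measurement(obs_time, location_id, measure_id, value) and its
     * unique index on the first three columns are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    static void store_set(PGconn *conn, const char *obs_time,
                          int location_id, int measure_id, double value)
    {
        char sql[512];
        PGresult *res;

        snprintf(sql, sizeof(sql),
                 "INSERT INTO measurement (obs_time, location_id, measure_id, value)"
                 " VALUES ('%s', %d, %d, %f)",
                 obs_time, location_id, measure_id, value);
        res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_COMMAND_OK &&
            strstr(PQresultErrorMessage(res), "duplicate key") != NULL)
        {
            /* The set already exists, so treat the new one as a correction.
             * Note: this only works when the INSERT above was not part of a
             * larger transaction; otherwise that transaction is now aborted. */
            PQclear(res);
            snprintf(sql, sizeof(sql),
                     "UPDATE measurement SET value = %f"
                     " WHERE obs_time = '%s' AND location_id = %d AND measure_id = %d",
                     value, obs_time, location_id, measure_id);
            res = PQexec(conn, sql);
        }
        PQclear(res);
    }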
But in PostgreSQL this strategy no longer works within a chained
transaction, because the duplicate key error aborts the whole transaction.
What I could do is change from chained to unchained transactions, but from
what I have read on the mailing list so far, the commit operation costs a
lot of CPU time, and I cannot afford that when processing thousands of sets.
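For clarity, the chained-transaction loop I mean looks roughly like this
(a sketch only; the function name and the way the statements are built
are placeholders):

    /* Rough sketch of the chained-transaction loop; 'sets' would hold
     * one INSERT statement per incoming data set. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static void insert_batch(PGconn *conn, const char **sets, int nsets)
    {
        PGresult *res;
        int i;

        res = PQexec(conn, "BEGIN");
        PQclear(res);

        for (i = 0; i < nsets; i++)
        {
            res = PQexec(conn, sets[i]);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                /* e.g. the duplicate key error: the transaction is now in
                 * the aborted state, every later command is rejected, and
                 * the COMMIT below effectively rolls back all inserts
                 * since BEGIN. */
                fprintf(stderr, "%s", PQresultErrorMessage(res));
            PQclear(res);
        }

        res = PQexec(conn, "COMMIT");   /* rolls back if an error occurred */
        PQclear(res);
    }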
I am wondering now whether there is a fundamental design error in my
approach.
Any ideas or suggestions are highly appreciated, and thanks for reading
this far.
My first message:
In a C application I want to run several
insert commands within a chained transaction
(for faster execution).
From time to time an insert command will fail with
ERROR: Cannot insert a duplicate key into a unique index
As a result, the whole transaction is aborted and all
the previous inserts are lost.
Is there any way to preserve the data
except working with "autocommit" ?
What I have in mind particularly is something like
"Do not abort on duplicate key error".