Re: Query on exception handling in PL/pgSQL

From: Johan Wehtje <joweht(at)tpgi(dot)com(dot)au>
To: Victor Ciurus <vikcious(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general(at)postgresql(dot)org
Subject: Re: Query on exception handling in PL/pgSQL
Date: 2004-11-27 21:17:38
Message-ID: 41A8EEF2.6090607@tpgi.com.au
Lists: pgsql-general

I would second this motion as it relates to bulk inserts. I also love
PG, but its bulk insert/copy-from-file routines are definitely one area
where it cannot claim to be "most advanced".

My own wish list includes:
* Copy to a new table - so that a table is created in the db. The
default would be for all fields to be a relaxed varchar, but there
should be an option of supplying a list of field names paired with data
types in the COPY command.
* Specify a start row, a la MS SQL - to handle a file that might have 2
rows of header/column information, or, in the case of many terminal
captures, an arbitrary number of blank lines.
* Support for using the first row's values as field names when copying
to a new table, or for using them to map columns in the file to the
destination table.
* Specify a row delimiter - so the row delimiter can be something other
than a newline character, or can be a character combination. This would
help when dumping multi-line text field values from one DB and loading
them into another without having to use the backslash-escaped newline
method.
* Commit after - specify the number of rows to read before committing,
and the behavior on error. Ideally, if I have 100,000 rows in a bulk
copy and there is one error at line 45,000, the server would write a
file with the same name as the input file plus an extension like .err
containing just that row, and complete the load with that row omitted.
It would also load 100 rows at a time and perform any logging.
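
Some of these items (notably the relaxed-varchar target and the type
mapping) can already be approximated with a staging table. A minimal
sketch - the table and column names here are illustrative, not from any
real schema:

```sql
-- Staging-table workaround sketch; staging_raw, target, and the
-- column names/types are hypothetical.
CREATE TABLE staging_raw (
    col1 text,
    col2 text,
    col3 text
);

-- COPY everything in as text first ("relaxed varchar"), so the COPY
-- itself never fails on a type mismatch.
COPY staging_raw FROM '/tmp/file_y.txt' WITH DELIMITER ',';

-- Then cast and clean in SQL while moving rows into the real table.
INSERT INTO target (id, name, amount)
SELECT col1::integer, col2, col3::numeric
FROM staging_raw;
```

It is more ceremony than a single COPY option would be, but it keeps
the parsing step and the typing step separate, which is often what you
want when the input file is messy.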

Johan Wehtje

Victor Ciurus wrote:

>And how does exactly PG8.0 help us with this?
>
>Tom, could you be so kind and tell me/us when PG will incorporate
>such a hugely requested feature like "copy table_x from 'file_y.txt'
>(using delimiters) IGNORE ON DUPLICATES"?
>
>I do LOVE PG and I'm very pleased with its overall feature level,
>still the lack of "ignore" on unique key constraint inserts (especially
>for large bulk inserts!) makes me feel quite helpless!
>
>Or, is there another way, like in Oracle's exception handling, to
>handle this thing (COPY/INSERTs with ignore)? I can't imagine how such
>a 'simple' (no offense!) db like MySQL can handle INSERT IGNOREs (not
>to mention the 'so serious' ORACLE, INFORMIX, MSSQL and probably DB2)
>and PG not being able to do this!
>
>Please enlighten me! I saw a huge debate over this feature, dated
>2001-2002, which ended with a 'promise' that such a feature would be
>made available starting with PG 7.1 or 7.2 (can't remember exactly!),
>yet still no sign of improvement on it!
>
>Best regards,
>Victor
>
>
>On Fri, 26 Nov 2004 14:06:52 -0500, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>
>>diya das <diyadas(at)yahoo(dot)com> writes:
>>
>>
>>
>>
>>>My pgsql function
>>>does an insert to a table and I have opened a
>>>transaction block before calling my function from the
>>>client. When the insert operation fails due to unique
>>>key violation the whole transaction aborts. Is there a
>>>way I can handle this error in my pgsql funtion rather
>>>that aborting and not executing the rest of the
>>>operations?
>>>
>>>
>>Not before PG 8.0. In earlier versions you might be able to work around
>>the problem by testing for existence of the target key before you
>>insert; though this has obvious race-condition problems if you expect
>>multiple clients to be doing it simultaneously.
>>
>> regards, tom lane
>>
>>---------------------------(end of broadcast)---------------------------
>>TIP 7: don't forget to increase your free space map settings
>>
>>
>>
>

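For the "ignore on duplicates" part specifically, the PG 8.0 exception
handling Tom mentions can be used to swallow the unique-key error one
row at a time. A minimal sketch, assuming a hypothetical target table
and function name (the $$ dollar-quoting is also new in 8.0):

```sql
-- Sketch only: target, its columns, and insert_ignore_dup are
-- illustrative names, not from the thread.
CREATE OR REPLACE FUNCTION insert_ignore_dup(p_id integer, p_val text)
RETURNS void AS $$
BEGIN
    INSERT INTO target (id, val) VALUES (p_id, p_val);
EXCEPTION
    WHEN unique_violation THEN
        NULL;  -- duplicate key: skip this row, emulating INSERT ... IGNORE
END;
$$ LANGUAGE plpgsql;

-- Usage: call once per row instead of a bare INSERT.
-- SELECT insert_ignore_dup(1, 'first');
-- SELECT insert_ignore_dup(1, 'dup');  -- silently ignored
```

Calling a function per row is of course far slower than COPY, so this
is a workaround for the missing IGNORE option, not a replacement for it.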