From: | "Adam Lang" <aalang(at)rutgersinsurance(dot)com> |
---|---|
To: | "Matthew Kennedy" <mkennedy(at)hssinc(dot)com>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Ignore when using COPY FROM |
Date: | 2000-08-29 15:24:34 |
Message-ID: | 00d701c011cd$3ce0b0a0$330a0a0a@Adam |
Lists: pgsql-general
Well, you could always make a table that has no constraints on duplicates
and COPY the file into that one. Then write a query that inserts the data
from that table into your production table, handling the duplicates as it goes.
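For example, something along these lines might work (just a rough sketch; the
staging table name "stuff_load", the columns "id" and "name", and the choice
of "id" as the key are placeholders, not taken from the original messages):

    -- Staging table with the same columns as "stuff" but no key constraints.
    CREATE TABLE stuff_load (id integer, name text);

    -- Load the raw file; nothing here can fail on duplicate keys.
    COPY stuff_load FROM 'stuff.txt' USING DELIMITERS '|';

    -- Rows that would collide with keys already in the production table;
    -- run this before the INSERT to keep a record of what gets skipped.
    SELECT * FROM stuff_load sl
    WHERE EXISTS (SELECT 1 FROM stuff s WHERE s.id = sl.id);

    -- Move over only the rows whose key is not already in "stuff",
    -- collapsing any duplicates within the file itself.
    INSERT INTO stuff (id, name)
    SELECT id, min(name)
    FROM stuff_load
    WHERE NOT EXISTS (SELECT 1 FROM stuff s WHERE s.id = stuff_load.id)
    GROUP BY id;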
Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
----- Original Message -----
From: "Matthew Kennedy" <mkennedy(at)hssinc(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Sent: Tuesday, August 29, 2000 10:15 AM
Subject: [GENERAL] Ignore when using COPY FROM
> I have a ton of data in a text delimited file from an old legacy system.
> When uploading it into postgres, I'd do something like this:
>
> COPY stuff FROM 'stuff.txt' USING DELIMITERS = '|';
>
> The problem is some of the rows in stuff.txt may not conform to the
> stuff table attributes (duplicate keys, for example). The command above
> terminates with no change to the table stuff when it encounters an error
> in stuff.txt. Is there a way to have postgres ignore erroneous fields
> but keep information about which fields weren't processed? I believe
> Oracle has some support for this through an IGNORE clause.