Re: psql copy errors

From: Vladimir Yevdokimov <vladimir(at)givex(dot)com>
To: David(dot)Bear(at)asu(dot)edu
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: psql copy errors
Date: 2005-06-23 21:48:19
Message-ID: 200506231748.19051.vladimir@givex.com
Lists: pgsql-admin

On June 23, 2005 03:27 pm, David Bear wrote:
> I'm finding the \copy is very brittle. It seems to stop for every
> little reason. Is there a way to tell it to be more forgiving -- for
> example, to ignore extra data fields that might exist on a line?
>
> Or, to have it just skip that offending record but continue on to the
> next.
>
> I've got a tab delimited file, but if \copy sees any extra tabs in the
> file it just stops at that record. I want to be able to control what
> pg does when it hits an exception.
>
> I'm curious what others do for bulk data migration. Since copy seems
> so brittle, there must be a better way...
>

You may use the '-d' option of pg_dump, in which case it dumps the data as INSERT statements.
When you load the dumped data, tabs are handled properly; any invalid record fails on its own, but the load itself runs to completion.
If you redirect the output into a separate file, you can analyze later how many records failed.
Maybe that's what you need in your case.
The only problem with this method that I know of is that it takes longer to load the data, since each record goes through full parsing and validation.
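A rough sketch of what I mean (the table, database, and file names here are made up; adjust to your setup):

```shell
# Dump one table as INSERT statements instead of COPY data
# ("mytable", "sourcedb", and "targetdb" are placeholder names)
pg_dump -d -t mytable sourcedb > mytable_inserts.sql

# Load into the target database. Each bad row fails individually
# while the remaining INSERTs keep going; capture errors for review.
psql targetdb -f mytable_inserts.sql 2> load_errors.log

# See how many records were rejected
grep -c ERROR load_errors.log
```

Note that psql continues past errors by default; if you have ON_ERROR_STOP set, unset it for this kind of load.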
--
Vladimir Yevdokimov <vladimir(at)givex(dot)com>
