
Re: psql copy errors

From: Vladimir Yevdokimov <vladimir(at)givex(dot)com>
To: David(dot)Bear(at)asu(dot)edu
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: psql copy errors
Date: 2005-06-23 21:48:19
Message-ID: 200506231748.19051.vladimir@givex.com
Lists: pgsql-admin
On June 23, 2005 03:27 pm, David Bear wrote:
> I'm finding that \copy is very brittle. It seems to stop for every
> little reason. Is there a way to tell it to be more forgiving -- for
> example, to ignore extra data fields that might exist on a line?
> 
> Or, to have it just skip that offending record but continue on to the
> next.
> 
> I've got a tab delimited file, but if \copy sees any extra tabs in the
> file it just stops at that record. I want to be able to control what
> pg does when it hits an exception.
> 
> I'm curious what others do for bulk data migration. Since copy seems
> so brittle, there must be a better way...
> 

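One way to handle the extra-tab problem described above is to pre-filter the file before handing it to \copy, keeping only rows with the expected field count and setting the rest aside for inspection. A minimal sketch (the field count of 3 and the sample rows are made up for illustration):

```python
def filter_rows(lines, expected):
    """Split tab-delimited lines; keep rows with the expected field
    count, collect the rest (with their line numbers) for later review."""
    good, bad = [], []
    for lineno, line in enumerate(lines, start=1):
        fields = line.rstrip("\n").split("\t")
        if len(fields) == expected:
            good.append(line)
        else:
            bad.append((lineno, line))
    return good, bad

# Hypothetical input: the second row has an extra tab-separated field,
# which is exactly the kind of row \copy would choke on.
rows = ["a\tb\tc\n", "a\tb\tc\td\n", "x\ty\tz\n"]
good, bad = filter_rows(rows, expected=3)
# good holds 2 clean rows; bad holds [(2, "a\tb\tc\td\n")]
```

The cleaned rows can then be written to a new file and loaded with \copy, while the rejects file tells you which input lines need fixing.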
You can use the '-d' option of pg_dump, which dumps the data as INSERT statements.
When you load the dumped data, tabs are handled properly; each invalid record fails on its own, but the load itself runs to completion.
If you redirect the output into a separate file, you can analyze afterwards how many records failed.
Maybe that's what you need in your case.
The only problem I know of with this method is that loading takes longer, since each record goes through full validation.
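The redirected output mentioned above can be sifted afterwards to count the failed records. A minimal sketch, assuming psql's usual per-statement error format (the sample log lines here are invented for illustration):

```python
# Hypothetical excerpt of psql output redirected to a file while
# loading an INSERT-style dump; the ERROR lines mark failed records.
SAMPLE_LOG = """\
INSERT 0 1
psql:load.sql:42: ERROR:  invalid input syntax for integer: "abc"
INSERT 0 1
psql:load.sql:57: ERROR:  null value in column "id" violates not-null constraint
"""

def count_failures(log_text):
    """Count failed statements: each failure appears as a line
    containing 'ERROR:' in psql's output."""
    return sum(1 for line in log_text.splitlines() if "ERROR:" in line)

count_failures(SAMPLE_LOG)
```

Grepping the same file for the ERROR lines also tells you which input lines of the dump to fix before re-running just those records.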
-- 
Vladimir Yevdokimov <vladimir(at)givex(dot)com>

