Re: Dump/restore with bad data and large objects

From: Joshua Drake <jd(at)commandprompt(dot)com>
To: "John T(dot) Dow" <john(at)johntdow(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Dump/restore with bad data and large objects
Date: 2008-08-25 17:04:13
Message-ID: 20080825100413.1aca1bcf@jd-laptop
Lists: pgsql-general

On Mon, 25 Aug 2008 10:21:54 -0400
"John T. Dow" <john(at)johntdow(dot)com> wrote:

> By "bad data", I mean a character that's not UTF8, such as hex 98.
>
> As far as I can tell, pg_dump is the tool to use. But it has
> serious drawbacks.
>
> If you dump in the custom format, the data is compressed (nice) and
> includes large objects (very nice). But, from my tests and the
> postings of others, if there is invalid data in a table, although
> PostgreSQL won't complain and pg_dump won't complain, pg_restore will
> strenuously object, rejecting all rows for that particular table (not
> nice at all).

You can use the TOC feature of -Fc to exclude that single table from
the restore. Then dump that one table as plain text, clean the data,
and restore it separately.

If you have foreign keys and indexes on the bad-data table, don't
restore them until *after* you have done the above.
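A rough sketch of that workflow (database and table names here are
hypothetical, and the TOC entry shown is only illustrative):

```shell
# 1. List the archive's table of contents:
pg_restore -l backup.dump > backup.toc

# 2. Edit backup.toc and comment out (prefix with ';') the TABLE DATA
#    entry for the problem table, e.g. a line resembling:
#    ;123; 0 45678 TABLE DATA public bad_table owner

# 3. Restore everything else, driven by the edited TOC:
pg_restore -L backup.toc -d newdb backup.dump

# 4. Dump just the problem table as plain text from the source database,
#    clean the invalid bytes in the file, then load it:
pg_dump -t bad_table --format=plain olddb > bad_table.sql
psql -d newdb -f bad_table.sql
```

This is a server-dependent sequence, so treat it as an outline rather
than something to paste verbatim.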

Sincerely,

Joshua D. Drake

--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
