Dump/restore with bad data and large objects

From: "John T(dot) Dow" <john(at)johntdow(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Dump/restore with bad data and large objects
Date: 2008-08-25 14:21:54
Message-ID: 200808251422.m7PEMRHh038311@web2.nidhog.com
Lists: pgsql-general

By "bad data", I mean a character that's not UTF8, such as hex 98.

As far as I can tell, pg_dump is the tool to use. But it has
serious drawbacks.

If you dump in the custom format, the data is compressed (nice) and
includes large objects (very nice). But, from my tests and the postings
of others, if a table contains invalid data, neither PostgreSQL nor
pg_dump complains, yet pg_restore objects strenuously and rejects every
row for that table (not nice at all).
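
For reference, the commands I have in mind are roughly these (database
and file names are just placeholders):

    pg_dump -Fc -f mydb.dump mydb
    pg_restore -d newdb mydb.dump

The error only surfaces at the pg_restore step, and all the rows for the
affected table are lost.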

If you dump in plain text format, you can at least inspect the dumped
data and fix it manually or with iconv. But the plain text format doesn't
support large objects (again, not nice). Byte arrays are supported, but
they result in very large dump files.
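
In that case the cleanup can be something like this (the -c switch makes
iconv drop bytes that aren't valid in the target encoding; names are
placeholders and I haven't tested much beyond simple cases):

    pg_dump -Fp mydb > mydb.sql
    iconv -f UTF-8 -t UTF-8 -c mydb.sql > mydb.clean.sql
    psql -d newdb -f mydb.clean.sql

Of course that silently throws the bad bytes away rather than fixing them.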

Also, neither of these methods captures information such as roles, so
that has to be obtained some other way if the database has to be rebuilt
from scratch.
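
The only way I know of to get those is a separate globals-only dump,
along the lines of:

    pg_dumpall -g > globals.sql

which covers roles and tablespaces but no databases.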

Is my understanding incomplete or wrong? Is there no good solution?

Why isn't there a dumpall that writes in compressed format and allows recovery from bad data?
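
The closest thing I can see is piping the plain-text output through gzip,
something like:

    pg_dumpall | gzip > cluster.sql.gz

but that's still the plain text format, with the large object and bad
data issues described above.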

John
