Re: how robust are custom dumps?

From: Willy-Bas Loos <willybas(at)gmail(dot)com>
To: Thom Brown <thom(at)linux(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: how robust are custom dumps?
Date: 2012-04-25 07:42:31
Message-ID: CAHnozTigZRScnRs_B8wVM5pFrzq7N==OksLNh7Lqu2XTYeL+nw@mail.gmail.com
Lists: pgsql-general

On Tue, Apr 24, 2012 at 10:04 PM, Thom Brown <thom(at)linux(dot)com> wrote:

> What was the experience? Is it possible you had specified a
> compression level without the format set to custom? That would result
> in a plain text output within a gzip file, which would then error out
> if you tried to restore it with pg_restore, but would be perfectly
> valid if you passed the uncompressed output directly into psql.
>

Yes, probably. I remember that it was a binary file, but I didn't know
about the possibility of gzip compression in pg_dump.
Possibly the 2 GB file size limit on a FAT partition was exceeded, but that
would have resulted in an error, so I would have known.
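
For anyone skimming the thread, a minimal sketch of the scenario Thom
describes (the database and file names are just placeholders):

  # a compression level without -Fc yields gzipped plain SQL, not a custom archive
  pg_dump -Z 9 -f mydb.dump mydb

  # pg_restore rejects it:
  pg_restore -d mydb mydb.dump
  # -> pg_restore: [archiver] input file does not appear to be a valid archive

  # but the decompressed output is ordinary SQL and restores fine through psql:
  gunzip -c mydb.dump | psql mydb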

I think it's time to restore my trust in custom dumps. :)

I do have one suggestion.
When a user makes this mistake, pg_restore only gives this feedback:
"pg_restore: [archiver] input file does not appear to be a valid archive".

Would it be feasible for pg_restore to detect that the input is a different
pg_dump format and inform the user about it?
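
Until then, the two can be told apart by hand. A quick check (file name
again a placeholder), relying on the fact that custom-format archives start
with the magic string "PGDMP" while gzip output starts with the bytes 1f 8b:

  # custom-format archive: prints "PGDMP"
  head -c 5 mydb.dump

  # gzip-compressed plain dump: prints "1f 8b"
  head -c 2 mydb.dump | od -An -tx1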

Cheers,

WB

--
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth
