Improving pg_dump performance

From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Improving pg_dump performance
Date: 2018-07-23 07:23:45
Message-ID: fecafc40-663d-cbd8-f1de-2201d3f84105@gmail.com
Lists: pgsql-general

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that
needs to be migrated to a new data center and then restored to v9.6.9.

The database has many large tables full of bytea columns containing PDF
images, so the dump file is going to be more than 2x larger than the
existing data/base directory...

The command is:
$ pg_dump -v -Z0 -Fc $DB --file=${TARGET}/${DATE}_${DB}.dump \
      2> ${DATE}_${DB}.log

I'm using -Z0 because the PDF files are already compressed.

Because of an intricate web of FK constraints and partitioned tables, the
customer doesn't trust a set of "partitioned" backups using --table= and
regular expressions (the names of those big tables all have the year in
them; roughly the sketch below), so I'm stuck with a single-threaded backup.
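
For reference, what I have in mind for the "partitioned" dumps is roughly
the following (the year list and the *_YYYY naming pattern are only my
shorthand for the real table names, which would have to be checked against
the schema):

  # one background pg_dump per year's worth of big tables...
  for Y in 2014 2015 2016 2017 2018; do
      pg_dump -v -Z0 -Fc -t "*_${Y}*" $DB \
          --file=${TARGET}/${DATE}_${DB}_${Y}.dump 2> ${DATE}_${DB}_${Y}.log &
  done
  # ...plus one dump that excludes those tables and picks up everything else
  pg_dump -v -Z0 -Fc -T "*_20[0-9][0-9]*" $DB \
      --file=${TARGET}/${DATE}_${DB}_rest.dump 2> ${DATE}_${DB}_rest.log &
  wait

(The separate dumps wouldn't share one consistent snapshot, which I assume
is at the heart of the distrust.)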

Are there any config file elements I can tweak to make it run faster (extra
points for not having to restart postgres)? Or is there deeper knowledge of
how pg_restore works that I could use to convince them to let me do the
partitioned backups?
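
On the restore side, the point I'd lean on is that a custom-format (-Fc)
archive can be restored with parallel jobs even though the dump itself was
single-threaded, e.g. on the 9.6.9 target (the -j 8 is just a placeholder
for however many cores the new box has):

  pg_restore -v -j 8 -d $DB ${TARGET}/${DATE}_${DB}.dump 2> ${DATE}_${DB}_restore.log

That doesn't make the dump any faster, but it means the single-threaded part
only costs me on the source side.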

Lastly, is there any way to make the backups not so large (maybe by using
the --binary-upgrade option, even though the man page says it is for
"in-place upgrades only")?

--
Angular momentum makes the world go 'round.
