pg_dump to a remote server

From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: pg_dump to a remote server
Date: 2018-04-16 23:58:47
Message-ID: 53c302b2-a634-96c2-b1f5-328437eb37fd@gmail.com
Lists: pgsql-general

We're upgrading from v8.4 to v9.6 on a new VM in a different data center.  The
dump file will be more than 1 TB, and there isn't enough disk space on the
current system to hold it.

Thus, how can I stream the pg_dump output directly to the new server while the
pg_dump command is running?  NFS is one method, but are there others
(netcat, rsync)?  Since it's all within the same company, encryption is not
required.
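A couple of ways to do this without the dump ever touching the old server's disk (a sketch only; the hostnames, database name, and target path below are hypothetical examples, not anything from this thread):

```shell
#!/bin/sh
# Sketch only -- OLD_DB, NEW_HOST, and TARGET are hypothetical
# example names.
OLD_DB=${OLD_DB:-mydb}
NEW_HOST=${NEW_HOST:-new-vm.example.com}
TARGET=${TARGET:-/pgdata/dumps/mydb.dump}

# Option 1: run on the old server, piping the custom-format dump
# straight into a file on the new server over ssh (add -C to the
# ssh invocation for in-flight compression if the link is slow).
PUSH="pg_dump -Fc $OLD_DB | ssh $NEW_HOST 'cat > $TARGET'"

# Option 2: run the 9.6 pg_dump FROM the new server against the
# old one, so nothing is ever written on the old box at all.
PULL="pg_dump -h old-server.example.com -Fc $OLD_DB -f $TARGET"

if [ "${DRY_RUN:-1}" = 1 ]; then
    # Dry run by default: just print the pipelines.
    echo "$PUSH"
    echo "$PULL"
else
    eval "$PULL"
fi
```

Option 2 tends to be the safer shape for a version upgrade, since a newer pg_dump knows how to dump an older server but not the other way around.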

Or would it be better to install both 8.4 and 9.6 on the new server (can I
even install 8.4 on RHEL 6.9?), rsync the live database across, set up log
shipping, and then, when it's time to cut over, do an in-place pg_upgrade?

(Because this is a batch system, we can apply the data input files to bring
the new database up to "equality" with the 8.4 production system.)

Thanks

--
Angular momentum makes the world go 'round.
