From: dennis jenkins <dennis(dot)jenkins(dot)75(at)gmail(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?
Date: 2012-03-17 15:46:00
Message-ID: CAAEzAp-HxQEsO-JKUP=ceEAkw1556rOWi1CGgTnORsJfw5HF0A@mail.gmail.com
Lists: pgsql-general

On Fri, Mar 16, 2012 at 2:20 PM, Aleksey Tsalolikhin
<atsaloli(dot)tech(at)gmail(dot)com> wrote:
> On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
> <atsaloli(dot)tech(at)gmail(dot)com> wrote:

> Our database is about 200 GB. Over a WAN link, the last full sync took
> 8 hours; I expect it'll be more like 9 or 10 hours this time.
>

Aleksey, a suggestion: the vast majority of PostgreSQL wire-protocol
traffic compresses well. If your WAN link is not already compressed,
run the PostgreSQL TCP port through a compressed SSH tunnel over the
WAN link. I've done this when rebuilding a 300 GB database (via Slony)
over a bandwidth-limited (2 MB/s) VPN link, and it cut the replication
resync time down significantly.
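
For example, a minimal sketch of such a tunnel (the hostnames, ports,
user name, and database name below are placeholders, not from the
original thread):

    # On the subscriber, open a compressed (-C) tunnel to the origin's
    # postgres port. -N runs no remote command; -f backgrounds ssh
    # after authentication. origin-host/dbuser/5432 are assumptions.
    ssh -C -N -f -L 5433:localhost:5432 dbuser@origin-host

    # Point the slon process (or psql, to test) at the local end:
    psql -h localhost -p 5433 -U dbuser yourdb

ssh -C uses zlib compression; on a link that slow, the CPU cost of
compressing is easily repaid in transfer time.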
