From: David Kensiski <David(at)Kensiski(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: XID wraparound with huge pg_largeobject
Date: 2015-11-30 17:58:14
Message-ID: CAGTbF5WDU8JJAut0JhBkoh1yQbY7xXF+VYJqAKLG7-drT32YUQ@mail.gmail.com
Lists: pgsql-general
I am working with a client whose 9.1 database is rapidly approaching XID
wraparound. They also have an exceedingly large pg_largeobject table (4217
GB) that has never been vacuumed. An attempt to vacuum it on a replica
ran for days without succeeding -- or more accurately, was never allowed
to succeed, because we needed to get the replica back on-line.
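For reference, here is roughly what I've been running to gauge the urgency and to throttle the vacuum's I/O impact (a sketch using the standard catalog columns and cost-based vacuum settings; the specific values are just illustrative, not tuned for this system):

```sql
-- How close each database is to the wraparound horizon
-- (the hard limit is about 2^31 transactions):
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;

-- Throttled, freezing vacuum of just pg_largeobject, as superuser.
-- vacuum_cost_delay > 0 makes VACUUM sleep periodically to limit I/O;
-- a large maintenance_work_mem reduces index-scan passes over the table.
SET vacuum_cost_delay = 20;          -- milliseconds
SET vacuum_cost_limit = 200;
SET maintenance_work_mem = '1GB';
VACUUM (FREEZE, VERBOSE) pg_largeobject;
```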
Are there creative ways to do such a vacuum with minimal impact on
production? Even if I let the vacuum complete on the replica, I don't
think I can then replay the accrued WAL from the master, can I? Or is
there some trick to doing so?
I explored using Slony and was all excited until I discovered it won't
replicate pg_largeobject, because it cannot create triggers on a system
catalog.
I started looking into the pg_rewind contrib in 9.5, but it replays
xlogs to revert, so it would suffer the same problem as the replica.
Any other ideas about how we can do this?
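In case it's relevant: I can also confirm which relations are actually holding back the frozen-XID horizon with a query like this (standard pg_class columns; if pg_largeobject dominates, a targeted vacuum of just that table should buy us time):

```sql
-- The ten relations with the oldest relfrozenxid, i.e. the ones
-- most in need of a freezing vacuum:
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```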
Thanks!
--Dave