| From: | Bala M <krishna(dot)pgdba(at)gmail(dot)com> |
|---|---|
| To: | Greg Sabino Mullane <htamfids(at)gmail(dot)com>, Francisco Olarte <folarte(at)peoplecall(dot)com> |
| Cc: | "adrian(dot)klaver(at)aklaver(dot)com" <adrian(dot)klaver(at)aklaver(dot)com>, chris+google(at)qwirx(dot)com, pgsql-general(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Index corruption issue after migration from RHEL 7 to RHEL 9 (PostgreSQL 11 streaming replication) |
| Date: | 2025-11-05 06:27:22 |
| Message-ID: | CAJ4rSwstZoVgVjbHeDNVq+7eBWCVZSXjNMRpzB4QFjArZT0Hcg@mail.gmail.com |
| Lists: | pgsql-general |
Thank you all for your suggestions and for the quick, detailed responses.
After reviewing the options, the logical replication approach seems to be
the most feasible one with minimal downtime.
However, we currently have 7 streaming replication setups running from
production, with a total database size of around 15 TB. Of this, about 10
large tables range from 50 GB to 1 TB each, along with more than 150
sequences.
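One caveat that may matter here: PostgreSQL's built-in logical replication
does not replicate sequence state, so the 150+ sequences would need their
values carried over manually at cutover. A minimal sketch (PostgreSQL 10+;
run on the source and feed the generated statements to the target):

```sql
-- Generate one setval() statement per sequence from the catalog.
-- last_value is NULL for sequences that were never used; default those to 1.
SELECT format('SELECT setval(%L, %s);',
              schemaname || '.' || sequencename,
              coalesce(last_value, 1))
FROM pg_sequences;
```

The output can be captured with \o and replayed on the subscriber via psql
just before switching the application over.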
Could you please confirm whether there are any successful case studies or
benchmarks available for a similar setup?
Additionally, please share any recommended parameter tuning or best
practices for handling logical replication at this scale.
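For reference, the native logical replication setup being considered looks
roughly like this (connection details and object names below are
placeholders; the schema must first be copied with pg_dump --schema-only,
since logical replication only moves data):

```sql
-- On the RHEL 7 source (publisher), PostgreSQL 11:
CREATE PUBLICATION migration_pub FOR ALL TABLES;

-- On the RHEL 9 target (subscriber), after restoring the schema:
CREATE SUBSCRIPTION migration_sub
    CONNECTION 'host=source-host dbname=proddb user=replicator'
    PUBLICATION migration_pub;
```

For the initial copy of the ten large tables, the parameters that usually
matter are max_sync_workers_per_subscription and
max_logical_replication_workers on the subscriber, and max_wal_senders /
max_replication_slots on the publisher; whether the defaults suffice at
15 TB is something only a test run can tell.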
Current server configuration:
CPU: 144 cores
RAM: 512 GB
Thanks & Regards
Krishna.
On Fri, 24 Oct 2025 at 21:55, Francisco Olarte <folarte(at)peoplecall(dot)com>
wrote:
>
> On Thu, 23 Oct 2025 at 17:21, Greg Sabino Mullane <htamfids(at)gmail(dot)com>
> wrote:
>
>> pg_dump is the most reliable, and the slowest. Keep in mind that only the
>> actual data needs to move over (not the indexes, which get rebuilt after
>> the data is loaded). You could also mix-n-match pg_logical and pg_dump if
>> you have a few tables that are super large. Whether either approach fits in
>> your 24 hour window is hard to say without you running some tests.
>>
>
> A long time ago I had a similar problem and did a "running with scissors"
> restore. That means:
>
> 1.- Prepare normal configuration, test, etc for the new version.
> 2.- Prepare a restore configuration, with fsync=off, wal_level=minimal,
> and whatever other options give you a speed advantage.
>
> As the target was empty, if restore failed we could just clean and restart.
>
> 3.- Dump, boot with the restore configuration, restore, clean shutdown,
> switch to production configuration, boot again and follow on.
>
> Time has passed and I have lost my notes, but I remember the restore was
> much faster than doing it with the normal production configuration. Given
> current machine speeds, it may be doable.
>
>
> Francisco Olarte.
>
>
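For what it is worth, the "running with scissors" restore described above
could be sketched like this today (all paths, database names, and job
counts are placeholders to be tuned for the hardware):

```shell
# Parallel directory-format dump from the source.
pg_dump -Fd -j 8 -f /backup/proddb.dump proddb

# Temporary restore-only settings in postgresql.conf on the target
# (safe only because a failed restore can simply be cleaned and redone):
#   fsync = off
#   full_page_writes = off
#   synchronous_commit = off
#   wal_level = minimal
#   max_wal_senders = 0        # required when wal_level = minimal

# Start with the restore configuration, load in parallel, shut down
# cleanly, then switch back to the production configuration before go-live.
pg_ctl -D /pgdata/11 start
pg_restore -j 16 -d proddb /backup/proddb.dump
pg_ctl -D /pgdata/11 stop -m fast
```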