| From: | Ron Johnson <ronljohnsonjr(at)gmail(dot)com> |
|---|---|
| To: | Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: rebuild big tables with pgrepack |
| Date: | 2025-11-14 19:47:27 |
| Message-ID: | CANzqJaAbcfmmHH4mJygjK4fZYFGmzPPiFn5+RxvTeMYibbjxVw@mail.gmail.com |
| Lists: | pgsql-admin |
On Fri, Nov 14, 2025 at 2:14 PM ek ek <livadidrive(at)gmail(dot)com> wrote:
> Hello everyone,
> I’m going to rebuild a 900GB table using pg_repack. I’m hesitant to do
> such a large operation in one go.
> Is there an ideal or recommended way to repack very large tables?
>
Everything in database maintenance is circumstantial.
The basics that I'd do are:
* Verify that you have enough free disk space for the new table, the new
indexes, and the WAL generated during the rebuild (a size-check query
follows the list).
* Do it during a low-activity window.
* Don't run a database backup at the same time.
* First execute with --dry-run.
* Consider the --no-order option (an online VACUUM FULL-style rebuild
instead of an ordered one). That'll speed things up.
* And --no-analyze, though you'll have to manually ANALYZE immediately
afterwards.
* (I'd probably disable autoanalyze on that table before the repack and
then enable it after the manual ANALYZE; SQL for that follows the list.)
* The --jobs option speeds up index rebuilds. (A sample invocation
combining these options follows the list.)
* Run it from cron, and redirect both stdout and stderr to the same log
file (an example crontab line follows the list).
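
For the disk-space check, a quick query like this (assuming the table is
literally named big_table; substitute your own) shows how much new space
the copy and its indexes will need, which you can compare against what df
reports:

  SELECT pg_size_pretty(pg_table_size('big_table'))          AS table_size,
         pg_size_pretty(pg_indexes_size('big_table'))        AS index_size,
         pg_size_pretty(pg_total_relation_size('big_table')) AS total;

On top of that, leave headroom for the WAL the rebuild will generate.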
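A sketch of the invocation itself, dry run first and then the real thing,
with made-up database and table names (mydb, big_table) and a --jobs count
you'd tune to your hardware:

  pg_repack --dry-run --no-order --no-analyze --jobs 4 \
            --table big_table --dbname mydb
  pg_repack --no-order --no-analyze --jobs 4 \
            --table big_table --dbname mydb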
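For the autoanalyze part, one way to do it (again assuming big_table) is
the table-level autovacuum storage parameter; note that this switches off
autovacuum's VACUUMs for the table too, so only keep it off for the
duration of the repack:

  ALTER TABLE big_table SET (autovacuum_enabled = false);
  -- run pg_repack here
  ANALYZE VERBOSE big_table;
  ALTER TABLE big_table RESET (autovacuum_enabled);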
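And an example crontab line for the logging bullet, with a hypothetical
schedule, path, and names (authentication, e.g. via ~/.pgpass, is up to
you):

  30 1 * * 6  /usr/bin/pg_repack --no-order --no-analyze --jobs 4 --table big_table --dbname mydb >> /var/log/pg_repack_big_table.log 2>&1

The 2>&1 after the >> redirect is what sends stderr into the same file as
stdout.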
--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!