| From: | Gus Spier <gus(dot)spier(at)gmail(dot)com> |
|---|---|
| To: | Ron Johnson <ronljohnsonjr(at)gmail(dot)com> |
| Cc: | Olivier Gautherot <ogautherot(at)gautherot(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Attempting to delete excess rows from table with BATCH DELETE |
| Date: | 2026-01-28 15:39:00 |
| Message-ID: | CAG8xniexsPtem2+5KpsAv_J-qO2dtD1tkhgcsD_f7H6xnrbPmw@mail.gmail.com |
| Views: | Whole Thread | Raw Message | Download mbox | Resend email |
| Thread: | |
| Lists: | pgsql-general |
Good thoughts. Thanks, Ron!
On Wed, Jan 28, 2026 at 10:02 AM Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:
>
> More thoughts:
> 1. In cases where records are huge (bytea storing images) I added an inner hourly loop.
> 2. Disable autovaccum on the table you're purging, then run pg_repack on it and re-enable autovacuum.
> 3. pg_repack --no-order is a lot faster than having it order by the PK. (You might want it ordered by an indexed date field, though.)
>
> On Wed, Jan 28, 2026 at 5:57 AM Gus Spier <gus(dot)spier(at)gmail(dot)com> wrote:
>>
>> Thanks to all.
>>
>> I'll give the bash loop method a try and let you know how it works out.
>>
>> Regards to all,
>> Gus
>>
>>
>> On Wed, Jan 28, 2026 at 2:32 AM Olivier Gautherot
>> <ogautherot(at)gautherot(dot)net> wrote:
>> >
>> > Hi Gus!
>> >
>> > This reminds me of a costly mistake I made and you want to avoid: it was a mission critical database (say physical safety, real people) and the vacuum froze the DB for 24 hours, until I finally took it offline.
>> >
>> > If you can take it offline (and you have a couple of hours)
>> > - disconnect the DB
>> > - drop indexes (that's the killer)
>> > - remove unnecessary data
>> > - vaccuum manually (or better, copy the relevant data to a new table and rename it - this will save the DELETE above and will defragment the table)
>> > - rebuild indexes
>> > - connect the DB
>> >
>> > The better solution would be partitioning:
>> > - choose a metrics (for instance a timestamp)
>> > - create partition tables for the period you want to keep
>> > - copy the relevant data to the partitions and create partial indexes
>> > - take the DB off line
>> > - update the last partition with the latest data (should be a fast update)
>> > - truncate the original table
>> > - connect partitions
>> > - connect the DB
>> >
>> > In the future, deleting historic data will be a simple DROP TABLE.
>> >
>> > Hope it helps
>> > --
>> > Olivier Gautherot
>> > Tel: +33 6 02 71 92 23
>> >
>> >
>> > El mié, 28 de ene de 2026, 5:06 a.m., Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> escribió:
>> >>
>> >> Ron Johnson <ronljohnsonjr(at)gmail(dot)com> writes:
>> >> > Hmm. Must have been START TRANSACTION which I remember causing issues in DO
>> >> > blocks.
>> >>
>> >> Too lazy to test, but I think we might reject that. The normal rule
>> >> in a procedure is that the next command after a COMMIT automatically
>> >> starts a new transaction, so you don't need an explicit START.
>> >>
>> >> regards, tom lane
>> >>
>> >>
>
>
>
> --
> Death to <Redacted>, and butter sauce.
> Don't boil me, I'm still alive.
> <Redacted> lobster!
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Ron Johnson | 2026-01-28 15:57:22 | Re: Attempting to delete excess rows from table with BATCH DELETE |
| Previous Message | Greg Sabino Mullane | 2026-01-28 15:38:53 | Re: Attempting to delete excess rows from table with BATCH DELETE |