| From: | Gus Spier <gus(dot)spier(at)gmail(dot)com> |
|---|---|
| To: | Olivier Gautherot <ogautherot(at)gautherot(dot)net> |
| Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Ron Johnson <ronljohnsonjr(at)gmail(dot)com>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Attempting to delete excess rows from table with BATCH DELETE |
| Date: | 2026-01-28 10:57:09 |
| Message-ID: | CAG8xnie28cnR7220M8G3iQ0G-L_c3WOp6ZrSOiP58M3-B=6-Zw@mail.gmail.com |
| Lists: | pgsql-general |
Thanks to all.
I'll give the bash loop method a try and let you know how it works out.
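The per-batch statement I have in mind for each pass of the loop is something like this (untested; the table, column, and batch size are just placeholders):

    DELETE FROM big_table
    WHERE ctid IN (
        SELECT ctid FROM big_table
        WHERE created_at < now() - interval '1 year'
        LIMIT 10000
    );

The bash side would then just keep re-running that through psql until it reports DELETE 0.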
Regards to all,
Gus
On Wed, Jan 28, 2026 at 2:32 AM Olivier Gautherot
<ogautherot(at)gautherot(dot)net> wrote:
>
> Hi Gus!
>
> This reminds me of a costly mistake I made and that you want to avoid: it was a mission-critical database (think physical safety, real people) and a vacuum froze the DB for 24 hours, until I finally took it offline.
>
> If you can take it offline (and you have a couple of hours):
> - disconnect the DB
> - drop indexes (that's the killer)
> - remove unnecessary data
> - vacuum manually (or better, copy the relevant data to a new table and rename it; this saves the DELETE above and defragments the table; see the sketch after this list)
> - rebuild indexes
> - connect the DB
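>
> A rough sketch of the copy-and-rename variant above (table, column, and index names are placeholders):
>
>     BEGIN;
>     CREATE TABLE big_table_new AS
>         SELECT * FROM big_table
>         WHERE created_at >= now() - interval '1 year';
>     CREATE INDEX ON big_table_new (created_at);
>     ALTER TABLE big_table RENAME TO big_table_old;
>     ALTER TABLE big_table_new RENAME TO big_table;
>     COMMIT;
>     -- once everything checks out:
>     DROP TABLE big_table_old;
>
> Keep in mind that CREATE TABLE AS carries over neither constraints nor defaults, so re-create whatever the application relies on before the rename.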
>
> The better solution would be partitioning:
> - choose a metric (for instance a timestamp; see the sketch below)
> - create partition tables for the period you want to keep
> - copy the relevant data to the partitions and create partial indexes
> - take the DB offline
> - update the last partition with the latest data (should be a fast update)
> - truncate the original table
> - connect partitions
> - connect the DB
>
> In the future, deleting historic data will be a simple DROP TABLE.
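>
> Roughly, with a timestamp as the partition key (all names here are placeholders):
>
>     CREATE TABLE big_table_part (
>         id         bigint,
>         created_at timestamptz NOT NULL,
>         payload    text
>     ) PARTITION BY RANGE (created_at);
>
>     CREATE TABLE big_table_2026_01 PARTITION OF big_table_part
>         FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
>
>     -- expiring a month of history later on is then just:
>     DROP TABLE big_table_2026_01;
>     -- or, to keep the data around:
>     ALTER TABLE big_table_part DETACH PARTITION big_table_2026_01;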
>
> Hope it helps
> --
> Olivier Gautherot
> Tel: +33 6 02 71 92 23
>
>
> On Wed, Jan 28, 2026, 5:06 a.m., Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>
>> Ron Johnson <ronljohnsonjr(at)gmail(dot)com> writes:
>> > Hmm. Must have been START TRANSACTION which I remember causing issues in DO
>> > blocks.
>>
>> Too lazy to test, but I think we might reject that. The normal rule
>> in a procedure is that the next command after a COMMIT automatically
>> starts a new transaction, so you don't need an explicit START.
>>
>> regards, tom lane
>>
>>
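PS: If the bash loop turns out to be too slow, my reading of Tom's note is that the same batching can be done server-side in a procedure, with a COMMIT between batches and no explicit START. An untested sketch (table, column, and batch size are placeholders):

    CREATE PROCEDURE purge_old_rows()
    LANGUAGE plpgsql
    AS $$
    DECLARE
        n bigint;
    BEGIN
        LOOP
            DELETE FROM big_table
            WHERE ctid IN (
                SELECT ctid FROM big_table
                WHERE created_at < now() - interval '1 year'
                LIMIT 10000
            );
            GET DIAGNOSTICS n = ROW_COUNT;
            EXIT WHEN n = 0;
            COMMIT;  -- the next command starts a new transaction automatically
        END LOOP;
    END;
    $$;

    CALL purge_old_rows();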