| From: | Kristjan Mustkivi <sonicmonkey(at)gmail(dot)com> |
|---|---|
| To: | Rick Otten <rottenwindfish(at)gmail(dot)com> |
| Cc: | James Pang <jamespang886(at)gmail(dot)com>, pgsql-performance(at)lists(dot)postgresql(dot)org |
| Subject: | Re: table bloat very fast and free space can not be reused |
| Date: | 2026-04-21 08:33:53 |
| Message-ID: | CAOQPKatLGcDMJ+tFpRqO-NWfs8xgN3v692CmxrV9MMG=qZn1Hg@mail.gmail.com |
| Lists: | pgsql-performance |
On Mon, Apr 20, 2026 at 5:39 PM Rick Otten <rottenwindfish(at)gmail(dot)com> wrote:
>
> Which design is an antipattern? Using json for volatile data sets or unlogging the table?
For us, the antipattern is a large JSON blob (i.e. one spilling over
to TOAST) in a table where that blob (or a text column, it does not
matter) receives hundreds to thousands of updates per minute.
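A quick way to see this kind of bloat building up is to watch the
dead-tuple counters, since every update of a TOASTed blob writes a
whole new row version. A minimal sketch against the standard
`pg_stat_user_tables` view (the table name here is hypothetical):

```sql
-- Each UPDATE of a row with a TOASTed json blob creates a new row
-- version plus new TOAST chunks; autovacuum marks the old space
-- reusable but never shrinks the underlying files.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM   pg_stat_user_tables
WHERE  relname = 'volatile_json_table';  -- hypothetical table name
```

If `n_dead_tup` climbs faster than autovacuum can process the table,
the free space map fragments and the relation keeps growing.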
> Does `pg_repack` help? I know it probably isn't practical to run it every couple of days. It also can sometimes causes headaches when repacking a table with a ton of logical replication activity, but it might be a tool to consider if you haven't already. If you can partition the table with the crazy amount of json changes, you don't have to repack all the partitions, you might be able to repack just the older ones with the worst bloat.
This feels like complexity I would personally rather avoid. Far
preferable is to get rid of the JSON altogether: split the large JSON
blob into a normalized table design based on the most frequent access
and update patterns.
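As a sketch of what that normalization might look like (table and
column names are hypothetical, assuming the frequently-updated fields
can be separated from the stable ones):

```sql
-- Instead of one wide row with a frequently-rewritten jsonb blob,
--   CREATE TABLE orders (id bigint PRIMARY KEY, payload jsonb);
-- keep the stable data in the parent table:
CREATE TABLE orders (
    id      bigint PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now()
);

-- ...and move the volatile fields into narrow rows of their own:
CREATE TABLE order_status (
    order_id   bigint PRIMARY KEY REFERENCES orders(id),
    status     text NOT NULL,
    updated_at timestamptz NOT NULL
);
-- Small rows with no TOASTed, non-indexed hot columns let updates
-- qualify for HOT (heap-only tuples), so freed space is reused
-- within the same page instead of bloating the table and its indexes.
```

The frequent updates then touch only the small `order_status` rows,
while the rarely-changing data stops being rewritten on every update.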
Br,
--
Kristjan Mustkivi
Email: kristjan(dot)mustkivi(at)gmail(dot)com