| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: vacuumlo issue |
| Date: | 2012-03-20 14:53:07 |
| Message-ID: | 25231.1332255187@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com> writes:
> We have noticed the following issue with vacuumlo on databases that have millions of records in pg_largeobject, i.e.:
> WARNING: out of shared memory
> Failed to remove lo 155987: ERROR: out of shared memory
> HINT: You might need to increase max_locks_per_transaction.
> Why do we need to increase max_locks_per_transaction/shared memory for
> a cleanup operation?
This seems to be a consequence of the 9.0-era decision to fold large
objects into the standard dependency-deletion algorithm and hence
take out locks on them individually.
I'm not entirely convinced that that was a good idea. However, so far
as vacuumlo is concerned, the only reason this is a problem is that
vacuumlo goes out of its way to do all the large-object deletions in a
single transaction. What's the point of that? It'd be useful to batch
them, probably, rather than commit each deletion individually. But the
objects being deleted are by assumption unreferenced, so I see no
correctness argument why they should need to go away all at once.
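For illustration only, here is a minimal sketch of what batched deletion might look like with libpq; this is not vacuumlo's actual code. The connection string, the candidate table name vacuum_lo_candidates, and the batch size of 1000 are all assumptions made up for the example. The point is simply that committing every N unlinks bounds the number of large-object locks any one transaction needs.

```c
/*
 * Sketch: remove orphaned large objects in batches rather than in one
 * huge transaction, so no transaction needs more than BATCH_SIZE locks.
 * Names and sizes here are illustrative assumptions, not vacuumlo code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define BATCH_SIZE 1000

static void die(PGconn *conn, const char *msg)
{
	fprintf(stderr, "%s: %s\n", msg, PQerrorMessage(conn));
	PQfinish(conn);
	exit(1);
}

int main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=mydb");	/* assumption */

	if (PQstatus(conn) != CONNECTION_OK)
		die(conn, "connection failed");

	/*
	 * Assume the orphaned large-object OIDs were already collected into a
	 * table named vacuum_lo_candidates (hypothetical name).
	 */
	PGresult   *res = PQexec(conn,
				"SELECT lo FROM vacuum_lo_candidates ORDER BY lo");

	if (PQresultStatus(res) != PGRES_TUPLES_OK)
		die(conn, "could not fetch candidate OIDs");

	int			ntuples = PQntuples(res);
	int			in_batch = 0;

	for (int i = 0; i < ntuples; i++)
	{
		if (in_batch == 0)
			PQclear(PQexec(conn, "BEGIN"));

		const char *oid = PQgetvalue(res, i, 0);
		const char *params[1] = {oid};
		PGresult   *r = PQexecParams(conn,
					"SELECT lo_unlink($1::oid)",
					1, NULL, params, NULL, NULL, 0);

		/*
		 * NOTE: a failed unlink aborts the current transaction; a real
		 * implementation would ROLLBACK and start a fresh batch here.
		 */
		if (PQresultStatus(r) != PGRES_TUPLES_OK)
			fprintf(stderr, "failed to remove lo %s: %s",
					oid, PQerrorMessage(conn));
		PQclear(r);

		/* Commit every BATCH_SIZE deletions to bound lock usage. */
		if (++in_batch >= BATCH_SIZE)
		{
			PQclear(PQexec(conn, "COMMIT"));
			in_batch = 0;
		}
	}

	if (in_batch > 0)
		PQclear(PQexec(conn, "COMMIT"));

	PQclear(res);
	PQfinish(conn);
	return 0;
}
```

With batching like this, each transaction takes at most BATCH_SIZE locks on large objects, so the default max_locks_per_transaction is never an issue regardless of how many orphans there are.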
regards, tom lane