Re: vacuumlo issue

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Josh Kupershmidt <schmiddy(at)gmail(dot)com>
Cc: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 15:50:53
Message-ID: 26230.1332258653@sss.pgh.pa.us
Lists: pgsql-hackers

Josh Kupershmidt <schmiddy(at)gmail(dot)com> writes:
> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I'm not entirely convinced that that was a good idea. However, so far
>> as vacuumlo is concerned, the only reason this is a problem is that
>> vacuumlo goes out of its way to do all the large-object deletions in a
>> single transaction. What's the point of that? It'd be useful to batch
>> them, probably, rather than commit each deletion individually. But the
>> objects being deleted are by assumption unreferenced, so I see no
>> correctness argument why they should need to go away all at once.

> I think you are asking for this option:
> -l LIMIT stop after removing LIMIT large objects
> which was added in b69f2e36402aaa.

Uh, no, actually that flag seems utterly brain-dead. Who'd want to
abandon the run after removing some arbitrary subset of the
known-unreferenced large objects? You'd just have to do all the search
work over again. What I'm thinking about is doing a COMMIT after every
N large objects.

I see that patch has not made it to any released versions yet.
Is it too late to rethink the design? I propose (a) redefining it
as committing after every N objects, and (b) having a limit of 1000
or so objects by default.
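
For concreteness, here is a rough sketch of that batching scheme using libpq.
This is not the actual vacuumlo code; the function name, the TRANSACTION_LIMIT
constant, and the caller-supplied conn/lo_oids/num_los inputs are assumptions
made for illustration only.

    /*
     * Sketch: remove the already-identified unreferenced large objects,
     * committing after every TRANSACTION_LIMIT deletions instead of doing
     * the whole run in a single transaction.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    #define TRANSACTION_LIMIT 1000	/* commit after this many deletions */

    static int
    delete_unreferenced_los(PGconn *conn, const Oid *lo_oids, int num_los)
    {
    	PGresult   *res;
    	int			ndeleted = 0;
    	int			i;

    	for (i = 0; i < num_los; i++)
    	{
    		/* open a new transaction at the start of each batch */
    		if (ndeleted % TRANSACTION_LIMIT == 0)
    		{
    			res = PQexec(conn, "BEGIN");
    			if (PQresultStatus(res) != PGRES_COMMAND_OK)
    			{
    				fprintf(stderr, "BEGIN failed: %s", PQerrorMessage(conn));
    				PQclear(res);
    				return -1;
    			}
    			PQclear(res);
    		}

    		if (lo_unlink(conn, lo_oids[i]) < 0)
    		{
    			fprintf(stderr, "failed to remove lo %u: %s",
    					lo_oids[i], PQerrorMessage(conn));
    			res = PQexec(conn, "ROLLBACK");
    			PQclear(res);
    			return -1;
    		}
    		ndeleted++;

    		/* commit once the batch reaches TRANSACTION_LIMIT deletions */
    		if (ndeleted % TRANSACTION_LIMIT == 0)
    		{
    			res = PQexec(conn, "COMMIT");
    			if (PQresultStatus(res) != PGRES_COMMAND_OK)
    			{
    				fprintf(stderr, "COMMIT failed: %s", PQerrorMessage(conn));
    				PQclear(res);
    				return -1;
    			}
    			PQclear(res);
    		}
    	}

    	/* commit whatever is left in the final, partial batch */
    	if (ndeleted % TRANSACTION_LIMIT != 0)
    	{
    		res = PQexec(conn, "COMMIT");
    		PQclear(res);
    	}
    	return ndeleted;
    }

Each COMMIT ends that batch's transaction, so the run never has to keep more
than N deletions' worth of work open at once.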

regards, tom lane
