Re: reloption to prevent VACUUM from truncating empty pages at the end of relation

From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: reloption to prevent VACUUM from truncating empty pages at the end of relation
Date: 2018-04-18 17:20:15
Message-ID: CAHGQGwHq1uYRoxMtr84eKrARSA_G7YcOSE9TuSLNnxp2LbjmFQ@mail.gmail.com
Lists: pgsql-hackers

On Wed, Apr 18, 2018 at 11:29 PM, Pavan Deolasee
<pavan(dot)deolasee(at)gmail(dot)com> wrote:
>
>
> On Tue, Apr 17, 2018 at 11:05 PM, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
>>
>> Hi,
>>
>> I'd like to propose to add $SUBJECT for performance improvement.
>>
>> When VACUUM tries to truncate the trailing empty pages, it scans
>> shared_buffers to invalidate the pages-to-be-truncated while holding an
>> AccessExclusive lock on the relation. So if shared_buffers is huge, other
>> transactions may need to wait a very long time before they can access the
>> relation, which causes response-time spikes. For example, I observed such
>> spikes several times on a server with shared_buffers = 300GB while running
>> a benchmark. Therefore, I'm proposing $SUBJECT so that it can be enabled
>> to avoid such spikes for that relation.
>
>
> Alvaro reminded me that we already have a mechanism in place which forces
> VACUUM to give up the exclusive lock if another backend has been waiting on
> the lock for more than a certain pre-defined duration. AFAICS we give up the
> lock, but then retry the truncation from where we previously left off. What
> if we make that lock-wait duration configurable on a per-table basis? And
> maybe a special value to never truncate (though that seems quite excessive
> to me and a possible footgun).
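
Yes, that mechanism lives in count_nondeletable_pages(), the backward scan
that finds the new end of the relation. As far as I understand the current
code, it works roughly like this (a simplified sketch, not the verbatim
source; elapsed_ms() here stands in for the INSTR_TIME arithmetic the real
code does against its start time):

/*
 * Simplified sketch of the give-up check in count_nondeletable_pages()
 * (vacuumlazy.c) -- not the verbatim source.
 */
#define VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL    20      /* ms */

    /* inside the backward scan over the candidate-to-truncate pages */
    if ((blkno % 32) == 0)
    {
        /* roughly every 20ms, check whether someone is queued behind our lock */
        if (elapsed_ms(starttime) >= VACUUM_TRUNCATE_LOCK_CHECK_INTERVAL &&
            LockHasWaitersRelation(onerel, AccessExclusiveLock))
        {
            /*
             * Give up, remembering where we stopped so that
             * lazy_truncate_heap() can retry the truncation later.
             */
            vacrelstats->lock_waiter_detected = true;
            return blkno;
        }
    }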

I'm not sure it's safe to forcibly cancel VACUUM's truncation while it is
scanning shared_buffers, though. That scan happens after the truncation has
been WAL-logged but before the underlying files are actually truncated.
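
To be concrete, the ordering I'm worried about is roughly the following (a
loose paraphrase of RelationTruncate() in catalog/storage.c plus
smgrtruncate(), simplified and not verbatim):

/* Loose paraphrase of RelationTruncate() / smgrtruncate(), simplified */
void
RelationTruncate(Relation rel, BlockNumber nblocks)
{
    /* shrink the FSM and the visibility map first (if they exist) */
    FreeSpaceMapTruncateRel(rel, nblocks);
    visibilitymap_truncate(rel, nblocks);

    /* WAL-log the truncation and flush that record */
    if (RelationNeedsWAL(rel))
    {
        /* XLogInsert(RM_SMGR_ID, XLOG_SMGR_TRUNCATE ...); XLogFlush(lsn); */
    }

    /*
     * Only now does the expensive part happen: smgrtruncate() scans all of
     * shared_buffers to drop buffers of the doomed pages
     * (DropRelFileNodeBuffers()) and then truncates the physical files.
     * Bailing out between the WAL record above and the file truncation
     * below is the window I'm not sure we can make safe.
     */
    smgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);
}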

> I was actually thinking in the other direction. Between the time VACUUM
> figures out it can possibly truncate the last K pages and the time it
> actually does so, some backend may insert a tuple into one of those pages
> and make the truncation impossible. What if we truncate the FSM before
> starting the backward scan, so that new inserts go into pages before the
> truncation point, if possible? That would increase the chances of VACUUM
> being able to truncate all the empty pages. Though I think in some cases it
> might lead to unnecessary further extension of the relation. Maybe we could
> use some heuristic based on the free space available in the table before
> the truncation point?

Isn't this too complicated? I wonder what heuristic we can use here.
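
Just to make that concrete, the simplest heuristic I can imagine would be
something like the sketch below -- entirely hypothetical, nothing like this
exists in the code -- and it already needs an arbitrary threshold:

/*
 * Entirely hypothetical sketch: truncate the FSM early only when the pages
 * before the truncation point already have enough free space to absorb
 * concurrent inserts, so that we don't just force the relation to be
 * extended again.
 */
static bool
should_pretruncate_fsm(BlockNumber new_rel_pages, BlockNumber old_rel_pages,
                       Size free_space_before_cutoff)
{
    BlockNumber pages_to_cut = old_rel_pages - new_rel_pages;

    /*
     * "Enough" needs a magic number, e.g. at least as much free space below
     * the cutoff as we are about to cut off -- which is exactly the kind of
     * arbitrary threshold I'd prefer to avoid.
     */
    return free_space_before_cutoff >= (Size) pages_to_cut * BLCKSZ;
}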

Regards,

--
Fujii Masao
