Re: drop/truncate table sucks for large values of shared buffers

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: drop/truncate table sucks for large values of shared buffers
Date: 2015-06-30 04:02:05
Message-ID: CAA4eK1JjpLyiae9dmwPwJG2kPDLEp7_nJb4-HstrsSSjNCDhYw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 29, 2015 at 7:18 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>
> On 28 June 2015 at 17:17, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>
>> I'm not sure what you consider "dire", but missing a dirty buffer
>> belonging to the to-be-destroyed table would result in the system
>> being permanently unable to checkpoint, because attempts to write out
>> the buffer to the no-longer-extant file would fail. You could only
>> get out of the situation via a forced database crash (immediate
>> shutdown), followed by replaying all the WAL since the time of the
>> problem. In production contexts that could be pretty dire.
>
>
> Yes, its bad, but we do notice that has happened. We can also put in
> code to specifically avoid this error at checkpoint time.
>
> If lseek fails badly then SeqScans would give *silent* data loss,
> which in my view is worse. Just added pages aren't the only thing we
> might miss if lseek is badly wrong.
>

So for the purpose of this patch, do we need to assume that lseek can
give us the wrong file size, and should we add preventive checks and
other handling for that?
I am okay with changing it that way if we are going to make that
assumption everywhere in our code where we use lseek now or in the
future; otherwise we will end up with preventive checks that are not
actually required.

Another idea is to use something other than lseek to find out the size
of the file. Do you think we can use stat() for this purpose? We are
already using it in fd.c.
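
To make the comparison concrete, here is a minimal sketch of the two
ways of getting the size, lseek(SEEK_END) versus fstat(). This is only
an illustration, not the actual md.c/fd.c code; the helper names and
the bare-bones error handling are placeholders:

#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch only: two ways a file's size could be obtained. */
static off_t
filesize_via_lseek(int fd)
{
    /* current style: seek to the end and take the offset; -1 on failure */
    return lseek(fd, 0, SEEK_END);
}

static off_t
filesize_via_fstat(int fd)
{
    /* alternative: ask the kernel for st_size via fstat()/stat() */
    struct stat st;

    if (fstat(fd, &st) < 0)
        return -1;
    return st.st_size;
}

/* Either value would then be divided by BLCKSZ to get the block count. */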

> So, I think this patch still has legs. We can check that the clean up
> has been 100% when we do the buffer scan at the start of the checkpoint
>

One way to ensure that is to verify that each buffer header tag is
valid (which essentially means checking whether the object it refers
to still exists). Do you have something else in mind to accomplish
this part if we decide to go this route?
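
To make that concrete, a rough sketch of such a check could look like
the following. This is only an illustration, not patch code: the
GetBufferDescriptor() accessor, the absence of buffer-header locking,
and the WARNING-level report are all assumptions/simplifications on my
part.

#include "postgres.h"

#include "storage/buf_internals.h"
#include "storage/bufmgr.h"
#include "storage/smgr.h"

/*
 * Illustrative sketch: walk the buffer pool at checkpoint time and
 * complain about any valid buffer whose tag no longer maps to an
 * existing relation fork on disk.
 */
static void
sanity_check_buffer_tags(void)
{
    int         i;

    for (i = 0; i < NBuffers; i++)
    {
        BufferDesc *buf = GetBufferDescriptor(i);
        SMgrRelation reln;

        if (!(buf->flags & BM_TAG_VALID))
            continue;           /* buffer holds no page, nothing to check */

        /* Does the relation fork this buffer claims to belong to still exist? */
        reln = smgropen(buf->tag.rnode, InvalidBackendId);
        if (!smgrexists(reln, buf->tag.forkNum))
            elog(WARNING, "buffer %d references a non-existent relation fork", i);
    }
}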

> - that way we do just one scan of the buffer pool and move a
> time-consuming operation into a background process.
>

Agreed, and if that turns out to be cheap, we might want to optimize
Drop Database and other similar operations in the same way.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
