From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Subject: Re: drop/truncate table sucks for large values of shared buffers
Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> writes:
> I have looked into it and found that the main reason for such
> a behaviour is that for those operations it traverses whole
> shared_buffers and it seems to me that we don't need that
> especially for not-so-big tables. We can optimize that path
> by probing the buffer mapping table for the pages that exist in
> shared_buffers for the case when table size is less than some
> threshold (say 25%) of shared buffers.
I don't like this too much because it will fail badly if the caller
is wrong about the maximum possible page number for the table, which
seems not exactly far-fetched. (For instance, remember those kernel bugs
we've seen that cause lseek to lie about the EOF position?) It also
offers no hope of a fix for the other operations that scan the whole
buffer pool, such as DROP TABLESPACE and DROP DATABASE.
In the past we've speculated about fixing the performance of these things
by complicating the buffer lookup mechanism enough so that it could do
"find any page for this table/tablespace/database" efficiently.
Nobody's had ideas that seemed sane performance-wise though.
regards, tom lane