From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: our buffer replacement strategy is kind of lame
Date: 2011-08-12 08:36:32
Message-ID: CA+U5nM+G5_P8Xw548Tuma=XHA00Hpr=SYWDKeY5kO3Z810xbrQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, Aug 12, 2011 at 5:05 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On
> the other hand, the buffer manager has *no problem at all* trashing
> the buffer arena if we're faulting in pages for an index scan rather
> than a sequential scan. If you manage to get all of sample_data into
> memory (by running many copies of the above query in parallel, you can
> get each one to allocate its own ring buffer, and eventually pull in
> all the pages), and then run some query that probes an index which is
> too large to fit in shared_buffers, it cheerfully blows the whole
> sample_data table out without a second thought. Had you sequentially
> scanned a big table, of course, there would be some protection, but an
> index scan can stomp all over everything with complete impunity.
That's a good observation, and I think we should do this:

* Make an IndexScan use a ring buffer once it has used 32 blocks. The
vast majority won't do that, so we avoid overhead on the common path.

* Make a BitmapIndexScan use a ring buffer when we know that the
index is larger than 32 blocks. (Ignore the upper parts of the tree for
that calc.)
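To make the first bullet concrete, here is a minimal sketch of the switch-over heuristic in C. The names (ScanState, scan_note_block, RING_THRESHOLD) are illustrative only and are not PostgreSQL's actual BufferAccessStrategy API; the point is just that the first 32 blocks take the normal allocation path, so the common case of a small index scan pays no extra cost.

```c
#include <assert.h>
#include <stdbool.h>

/* Threshold from the proposal: after a scan has faulted in this many
 * blocks, switch to a private ring buffer instead of continuing to
 * compete for the whole shared buffer arena. */
#define RING_THRESHOLD 32

/* Hypothetical per-scan state; not PostgreSQL's real structures. */
typedef struct ScanState
{
    int  blocks_used;   /* blocks faulted in so far by this scan */
    bool use_ring;      /* true once we have switched strategies */
} ScanState;

/* Called once per block the index scan reads.  Returns true when the
 * scan should allocate from its private ring buffer rather than the
 * shared arena. */
static bool
scan_note_block(ScanState *s)
{
    if (!s->use_ring && ++s->blocks_used > RING_THRESHOLD)
        s->use_ring = true;     /* 33rd block triggers the switch */
    return s->use_ring;
}
```

The appeal of this shape is that the hot path is a single branch on a per-scan flag, so scans that stay under the threshold never touch the ring-buffer machinery at all.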
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services