Possible performance improvement: buffer replacement policy

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Possible performance improvement: buffer replacement policy
Date: 2000-08-28 00:05:29
Message-ID: 1601.967421129@sss.pgh.pa.us
Lists: pgsql-hackers

Those of you with long memories may recall a benchmark that Edmund Mergl
drew our attention to back in May '99. That test showed extremely slow
performance for updating a table with many indexes (about 20). At the
time, it seemed the problem was due to bad performance of btree with
many equal keys, so I thought I'd go back and retry the benchmark after
this latest round of btree hackery.

The good news is that btree itself seems to be pretty well fixed; the
bad news is that the benchmark is still slow for large numbers of rows.
The problem is I/O: the CPU mostly sits idle waiting for the disk.
As best I can tell, the difficulty is that the working set of pages
needed to update this many indexes is too large compared to the number
of disk buffers Postgres is using. (I was running with -B 1000 and
looking at behavior for a 100000-row test table. This gave me a table
size of 3876 pages, plus 11526 pages in 20 indexes: 15402 pages in
all, more than fifteen times the 1000 buffers available.)

Of course, there's only so much we can do when the number of buffers
is too small, but I still started to wonder if we are using the buffers
as effectively as we can. Some tracing showed that most of the pages
of the indexes were being read and written multiple times within a
single UPDATE query, while most of the pages of the table proper were
fetched and written only once. That says we're not using the buffers
as well as we could; the index pages are not being kept in memory when
they should be. In a query like this, we should displace main-table
pages sooner to allow keeping more index pages in cache --- but with
the simple LRU replacement method we use, once a page has been loaded
it will stay in cache for at least the next NBuffers (-B) page
references, no matter what. With a large NBuffers that's a long time.
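
To see why, consider a toy sketch of strict-LRU victim selection
(illustrative C with invented names --- not the actual bufmgr code):

    #define NBUFFERS 4

    typedef struct
    {
        int     page_id;    /* disk page in this slot, -1 if empty */
        long    last_ref;   /* "time" of most recent reference */
    } BufDesc;

    static BufDesc buffers[NBUFFERS];
    static long refclock = 0;

    static void
    lru_init(void)
    {
        int     i;

        for (i = 0; i < NBUFFERS; i++)
        {
            buffers[i].page_id = -1;
            buffers[i].last_ref = 0;
        }
    }

    /* Return the buffer holding page_id, "reading it in" on a miss. */
    static int
    lru_fetch(int page_id)
    {
        int     i,
                victim = 0;

        for (i = 0; i < NBUFFERS; i++)
        {
            if (buffers[i].page_id == page_id)
            {
                buffers[i].last_ref = ++refclock;   /* hit: bump recency */
                return i;
            }
        }

        /* Miss: evict the slot whose last reference is oldest. */
        for (i = 1; i < NBUFFERS; i++)
        {
            if (buffers[i].last_ref < buffers[victim].last_ref)
                victim = i;
        }
        buffers[victim].page_id = page_id;
        buffers[victim].last_ref = ++refclock;
        return victim;
    }

The thing to notice is that a freshly-touched page always has the
newest last_ref, so it cannot become the victim until every other
buffer has been referenced after it, no matter how unimportant the
page actually is.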

I've come across an interesting article:
The LRU-K Page Replacement Algorithm For Database Disk Buffering
Elizabeth J. O'Neil, Patrick E. O'Neil, Gerhard Weikum
Proceedings of the 1993 ACM SIGMOD International Conference
on Management of Data, May 1993
(If you subscribe to the ACM digital library, you can get a PDF of this
from there.) This article argues that standard LRU buffer management is
inherently not great for database caches, and that it's much better to
replace pages on the basis of time since the K'th most recent reference,
not just time since the most recent one. K=2 is enough to get most of
the benefit. The big win is that you are measuring an actual page
interreference time (between the last two references) and not just
dealing with a lower-bound guess on the interreference time. Frequently
used pages are thus much more likely to stay in cache.
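
For concreteness, here's a rough sketch of how LRU-2 victim selection
might look (again illustrative C with invented names, not meant as
patch-ready code):

    /*
     * Each buffer remembers its two most recent reference times; we
     * evict the buffer whose second-most-recent reference is oldest.
     * A buffer referenced only once has prev_ref == 0, which the
     * paper treats as infinitely far in the past, so such buffers
     * are preferred victims.
     */
    typedef struct
    {
        int     page_id;    /* disk page in this slot */
        long    last_ref;   /* most recent reference time */
        long    prev_ref;   /* second-most-recent, 0 = none yet */
    } BufDesc;

    /* Record a reference: shift the history, stamp the new time. */
    static void
    lru2_touch(BufDesc *b, long now)
    {
        b->prev_ref = b->last_ref;
        b->last_ref = now;
    }

    /* Pick a victim: oldest prev_ref wins; break ties on last_ref. */
    static int
    lru2_choose_victim(BufDesc *buf, int nbuffers)
    {
        int     i,
                victim = 0;

        for (i = 1; i < nbuffers; i++)
        {
            if (buf[i].prev_ref < buf[victim].prev_ref ||
                (buf[i].prev_ref == buf[victim].prev_ref &&
                 buf[i].last_ref < buf[victim].last_ref))
                victim = i;
        }
        return victim;
    }

A real implementation would also want the paper's Correlated Reference
Period, so that several touches of a page within a single operation
don't masquerade as a short interreference interval.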

It looks like it wouldn't take too much work to replace shared buffers
on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.

Has anyone looked into this area? Is there a better method to try?

regards, tom lane
