
RE: Performance TODO items

From: "Darren King" <darrenk(at)insightdist(dot)com>
To: "PostgreSQL-development" <pgsql-hackers(at)postgreSQL(dot)org>
Subject: RE: Performance TODO items
Date: 2001-07-30 19:32:50
Message-ID: NDBBJNEIGLIPLCHCMANLEEBJCPAA.darrenk@insightdist.com
Lists: pgsql-hackers
> 3)  I am reading the Solaris Internals book and there is mention of a
> "free behind" capability with large sequential scans.  When a large
> sequential scan happens that would wipe out all the old cache entries,
> the kernel detects this and places its previous pages first
> on the free list.  For our code, if we do a sequential scan of a table
> that is larger than our buffer cache size, I think we should detect
> this and do the same.  See http://techdocs.postgresql.org for my
> performance paper for an example.
>
> New TODO entries are:
>
> 	* Order duplicate index entries by tid
> 	* Add queue of backends waiting for spinlock
> 	* Add free-behind capability for large sequential scans
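
The free-behind idea in the quoted TODO can be sketched with a toy model.  The Python below is a hypothetical illustration only (the class and method names are invented; PostgreSQL's real buffer manager is C and works differently): in free-behind mode, a page touched by a large sequential scan is placed at the head of the eviction order instead of the MRU end, so the scan recycles its own buffer rather than flushing everything else.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache with an optional free-behind mode.

    Hypothetical sketch only; names and structure are invented and do
    not correspond to PostgreSQL's actual buffer manager.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        # Insertion order doubles as recency order: front = LRU, back = MRU.
        self.pages = OrderedDict()

    def read(self, page, free_behind=False):
        if page in self.pages:
            self.pages.move_to_end(page)          # hit: promote to MRU
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)    # miss: evict the LRU victim
            self.pages[page] = True
        if free_behind:
            # A sequential scan marks its own page as the *first* eviction
            # candidate, so the scan recycles its own buffer instead of
            # flushing the rest of the cache.
            self.pages.move_to_end(page, last=False)


# Ten-buffer cache holding five "hot" pages, then a 20-page sequential scan.
hot = [f"hot{i}" for i in range(5)]

plain = BufferCache(10)
for p in hot:
    plain.read(p)
for p in range(20):
    plain.read(p)                        # normal scan wipes the hot pages

fb = BufferCache(10)
for p in hot:
    fb.read(p)
for p in range(20):
    fb.read(p, free_behind=True)         # free-behind scan leaves them alone

print(sum(p in plain.pages for p in hot))   # 0: all hot pages flushed
print(sum(p in fb.pages for p in hot))      # 5: all hot pages survive
```

In the plain run the 20-page scan evicts every hot page; with free-behind the scan only ever occupies one buffer at a time, so the rest of the cache is untouched.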

So why do we cache sequentially-read pages?  Or at least, why not have
an option to control it?

Oracle (to the best of my knowledge) does NOT cache pages read by a
full sequential scan, for at least two reasons/assumptions (the two
that I can recall):

1. Caching pages for sequential scans over sufficiently large tables
will just cycle the cache.  The pages cached at the end of the query
will be the last N pages of the table, so when the same sequential
query is run again, the scan from the beginning of the table will start
flushing the oldest cached pages, which are more than likely the very
pages that will be needed at the end of the scan, and so on.  In a
multi-user environment, the effect is worse.
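
The cycling effect is easy to quantify with a toy model.  The function below is a hypothetical sketch (not PostgreSQL code): it replays repeated sequential scans through a strict LRU cache and reports the hit rate.  Once the table exceeds the cache, every pass misses on every page.

```python
from collections import OrderedDict

def scan_hit_rate(table_pages, cache_pages, passes=2):
    """Hit rate of `passes` sequential scans through a strict LRU cache.

    Hypothetical model for illustration; not PostgreSQL's buffer code.
    """
    cache = OrderedDict()
    hits = total = 0
    for _ in range(passes):
        for page in range(table_pages):
            total += 1
            if page in cache:
                hits += 1
                cache.move_to_end(page)           # promote to MRU
            else:
                if len(cache) >= cache_pages:
                    cache.popitem(last=False)     # evict the LRU page
                cache[page] = True
    return hits / total

# Table larger than the cache: the second pass finds nothing still cached.
print(scan_hit_rate(1000, 100))   # 0.0
# Table that fits in the cache: the second pass hits on every page.
print(scan_hit_rate(50, 100))     # 0.5
```

With a 1000-page table and a 100-page cache, the first pass leaves pages 900-999 resident, and the second pass evicts each of them just before it could be reused, so the hit rate stays at zero no matter how many passes run.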

2. Concurrent or consecutive queries in a dynamic database will not
generate plans that use the same sequential scans, so they will tend to
thrash the cache.

Now, there are some databases where the same general queries are run
time after time, and there caching the pages from sequential scans does
make sense.  But in larger, enterprise-type systems, indices are
created to speed up the most-used queries, and the sequential-scan
cache entries only serve to clutter the cache and flush the useful
pages.

Is there any way that caching pages read in by a sequential scan could
be made a configurable option?

Any chance someone could run pgbench on a test system set up to not
cache sequential reads?

Darren


