
Re: Sequential Scan Read-Ahead

From: Curt Sampson <cjs(at)cynic(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Sequential Scan Read-Ahead
Date: 2002-04-26 02:27:17
Lists: pgsql-hackers
On Thu, 25 Apr 2002, Tom Lane wrote:

> Curt Sampson <cjs(at)cynic(dot)net> writes:
> > 1. Theoretical proof: two components of the delay in retrieving a
> > block from disk are the disk arm movement and the wait for the
> > right block to rotate under the head.
> > When retrieving, say, eight adjacent blocks, these will be spread
> > across no more than two cylinders (with luck, only one).
> Weren't you contending earlier that with modern disk mechs you really
> have no idea where the data is?

No, that was someone else. I contend that with pretty much any
large-scale storage mechanism (i.e., anything beyond ramdisks),
you will find that accessing two adjacent blocks is almost always
1) close to as fast as accessing just the one, and 2) much, much
faster than accessing two blocks that are relatively far apart.

There will be the odd case where the two adjacent blocks are
physically far apart, but this is rare.
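(A rough way to check that claim directly is to time pread() calls at adjacent versus widely spaced offsets in a large file. This is a minimal sketch, not anything from the thread; the file path, block size, and spacing factor are assumptions.)

```python
import os
import time

BLOCK = 8192  # an 8 KB block, matching PostgreSQL's page size (assumption)

def time_reads(path, offsets):
    """Time a series of pread() calls at the given byte offsets.

    Returns elapsed wall-clock seconds for the whole series.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, BLOCK, off)
        return time.perf_counter() - start
    finally:
        os.close(fd)

# Eight adjacent blocks vs. eight blocks spread far apart in the same file:
# adjacent  = time_reads("/some/large/file", [i * BLOCK for i in range(8)])
# scattered = time_reads("/some/large/file", [i * BLOCK * 1000 for i in range(8)])
```

If the argument above holds, the scattered series should take substantially longer on any rotating disk, provided the file is large and the page cache is cold.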

If this idea doesn't hold true, the whole idea that sequential
reads are faster than random reads falls apart, and the optimizer
shouldn't even have the option to make random reads cost more, much
less have it set to four rather than one (or whatever it's set to).
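(To make the dependence concrete: the planner charges random page fetches a multiple of the sequential per-page cost. The arithmetic below is a simplification with hypothetical page counts; the parameter names follow PostgreSQL's settings, but the real cost model has more terms.)

```python
# Hypothetical illustration of how random_page_cost enters a plan estimate.
seq_page_cost = 1.0     # cost charged per page read sequentially
random_page_cost = 4.0  # the "four rather than one" discussed above

pages = 1000  # assumed table size in pages

# If every fetch is random, the same pages cost four times as much:
seq_scan_cost = pages * seq_page_cost
random_fetch_cost = pages * random_page_cost
```

If adjacent blocks were no cheaper to read than distant ones, that 4:1 ratio would have no physical basis and the two estimates should be equal.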

> You're asserting as an article of
> faith that the OS has been able to place the file's data blocks
> optimally --- or at least well enough to avoid unnecessary seeks.

So are you, in the optimizer. But that's all right; the OS often
can and does do this placement; FFS (the Berkeley Fast File System)
is explicitly designed to do this sort of thing. If the filesystem
isn't empty and the files grow a lot, they'll be split into large
fragments, but the fragments will be contiguous.

> But just a few days ago I was getting told that random_page_cost
> was BS because there could be no such placement.

I've been arguing against that point as well.

> And also ensure that you aren't testing the point at issue.
> The point at issue is that *in the presence of kernel read-ahead*
> it's quite unclear that there's any benefit to a larger request size.

I will test this.
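(One way such a test could be run: read a large file front to back with different request sizes and compare elapsed times. A minimal sketch; the file path and sizes are assumptions, and the page cache would need to be cold before each run for the comparison to mean anything.)

```python
import os
import time

def sequential_read_time(path, request_size):
    """Read the whole file front to back with reads of request_size bytes.

    Returns elapsed wall-clock seconds.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        while os.read(fd, request_size):
            pass
        return time.perf_counter() - start
    finally:
        os.close(fd)

# Compare request sizes; if kernel read-ahead is doing its job, the
# larger requests may gain little or nothing over the 8 KB baseline:
# for size in (8 * 1024, 64 * 1024, 1024 * 1024):
#     print(size, sequential_read_time("/some/large/file", size))
```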

Curt Sampson  <cjs(at)cynic(dot)net>   +81 90 7737 2974
    Don't you know, in this new Dark Age, we're all light.  --XTC
