From: Michael Loftis <mloftis(at)wgops(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Curt Sampson <cjs(at)cynic(dot)net>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Sequential Scan Read-Ahead
Date: 2002-04-25 05:43:11
Message-ID: 3CC7976F.7070407@wgops.com
Lists: pgsql-hackers
Tom Lane wrote:
>Curt Sampson <cjs(at)cynic(dot)net> writes:
>
>>Grabbing bigger chunks is always optimal, AFICT, if they're not
>>*too* big and you use the data. A single 64K read takes very little
>>longer than a single 8K read.
>>
>
>Proof?
>
I contend this statement.
It's optimal to a point. I know that my system settles into its best
read speeds at 32K or 64K chunks. 8K chunks are far below optimal for my
system. Most systems I work on do far better at 16K than at 8K, and
most don't see any degradation when going to 32K chunks. (This is
across numerous OSes and configs -- the results are interpretations of
bonnie disk I/O benchmarks.)
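A rough way to reproduce this kind of measurement without bonnie is to time sequential reads of the same file at different block sizes (a minimal sketch; the 16 MB sample size is arbitrary, and on a real system you'd want a file larger than RAM so the page cache doesn't hide the disk):

```python
import os
import tempfile
import time

def read_throughput(path, block_size):
    """Sequentially read `path` in block_size chunks; return MB/s."""
    total = 0
    start = time.perf_counter()
    # buffering=0 so each read() goes to the OS rather than
    # Python's userspace buffer.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

if __name__ == "__main__":
    # Hypothetical test file for illustration only.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(16 * 1024 * 1024))  # 16 MB sample
        path = f.name
    try:
        for bs in (8 * 1024, 16 * 1024, 32 * 1024, 64 * 1024):
            print(f"{bs // 1024:3d}K blocks: {read_throughput(path, bs):8.1f} MB/s")
    finally:
        os.unlink(path)
```

On a cached file the numbers mostly reflect syscall overhead, which is exactly why fewer, larger reads come out ahead; against a cold disk the read-ahead and seek behavior dominate instead.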
Depending on what you're doing, it is more efficient to read bigger
blocks, up to a point. If you're multi-threaded or reading in non-blocking
mode, take as big a chunk as you can handle or are ready to process in
quick order. If you're picking up a bunch of little chunks here and
there and know you're not using them again, then choose a size that will
hopefully cause some of the reads to overlap; failing that, pick the
smallest usable read size.
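For the sequential case, one way to put that advice into practice is to read in large chunks and, where the platform supports it, tell the kernel up front that the access pattern is sequential (a sketch; `posix_fadvise` is a Linux/POSIX facility and not available everywhere, hence the `hasattr` guard):

```python
import os

def sequential_read(path, block_size=64 * 1024):
    """Yield a file's contents in large sequential chunks."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Hint the kernel that we will read this file sequentially,
        # so it can grow its read-ahead window (where supported).
        if hasattr(os, "posix_fadvise"):
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        while True:
            chunk = os.read(fd, block_size)
            if not chunk:
                break
            yield chunk
    finally:
        os.close(fd)
```

The hint only nudges the kernel's read-ahead; the choice of a sensible application-side block size is still up to the caller, which is the point being made here.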
The OS can never do that stuff for you.