Re: PostgreSQL reads each 8k block - no larger blocks are used - even on sequential scans

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gerhard Wiesinger <lists(at)wiesinger(dot)com>
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: PostgreSQL reads each 8k block - no larger blocks are used - even on sequential scans
Date: 2009-10-10 02:11:39
Message-ID: 26780.1255140699@sss.pgh.pa.us
Lists: pgsql-general

Gerhard Wiesinger <lists(at)wiesinger(dot)com> writes:
> I have one idea, which is not ideal, but may work and shouldn't be much
> effort to implement:
> As in the example above, we read B1-B5 and B7-B10 at a higher level,
> outside of the normal buffer management, with large request sizes (e.g.
> where hash index scans and sequential scans are done). Since the blocks
> are then in cache, normal buffer management is very fast:
> 1.) B1-B5: 5*8k=40k
> 2.) B7-B10: 4*8k=32k

> So for 1.) we read:
> B1-B5 in one 40k request (typically from disk); afterwards we read B1,
> B2, B3, B4, B5 again in 8k chunks, now from cache.

Is this really different from, or better than, telling the OS we'll need
those blocks soon via posix_fadvise?

regards, tom lane
