
[Fwd: Re: 8192 BLCKSZ ?]

From: mlw <markw(at)mohawksoft(dot)com>
To: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: [Fwd: Re: 8192 BLCKSZ ?]
Date: 2000-11-28 18:20:34
Lists: pgsql-hackers
Tom Samplonius wrote:

> On Tue, 28 Nov 2000, mlw wrote:
> > Tom Samplonius wrote:
> > >
> > > On Mon, 27 Nov 2000, mlw wrote:
> > >
> > > > This is just a curiosity.
> > > >
> > > > Why is the default postgres block size 8192? These days, with caching
> > > > file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe
> > > > even gigabytes. Surely, 8K is inefficient.
> > >
> > >   I think it is a pretty wild assumption to say that 32k is more efficient
> > > than 8k.  Considering how blocks are used, 32k may be in fact quite a bit
> > > slower than 8k blocks.
> >
> > I'm not so sure I agree. Perhaps I am off base here, but I did a bit of
> > OS profiling a while back when I was doing a DICOM server. I
> > experimented with block sizes and found that the best throughput on
> > Linux and Windows NT was at 32K. The graph I created showed a steady
> > increase in performance and a drop just after 32K, then steady from
> > there. In Windows NT it was more pronounced than it was in Linux, but
> > Linux still exhibited a similar trait.
>   You are a bit off base here.  The typical access pattern is random IO,
> not sequential.  If you use a large block size in Postgres, Postgres
> will read and write more data than necessary.  Which is faster? 1000 x 8K
> IOs?  Or 1000 x 32K IOs?

I can sort of see your point, but the 8K vs 32K comparison is not
linear. The big hit is the disk I/O operation itself, more so than the
data size. It may be almost as efficient to write 32K as it is to write
8K. While I do not know the exact numbers, and it varies by OS and disk
subsystem, I am sure that 32K is not even close to 4x more expensive
than 8K. Think about seek times: writing anything to the disk is
expensive regardless of the amount of data. Most disks today have many
heads and are RLL encoded. The extra transfer may only add a fraction
of a millisecond (1-2 sectors of a 64 sector track spinning at 7200
rpm) to a disk operation that takes an order of magnitude longer
positioning the heads.
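A rough back-of-the-envelope calculation (my own figures, not from the
thread) illustrates the point that per-operation overhead dominates
transfer size:

```python
# Sketch: compare the fixed cost of one random disk operation with the
# extra transfer time of a larger block. All figures are assumptions
# chosen for illustration, not measurements from the original post.

AVG_SEEK_S = 0.008              # assumed ~8 ms average seek
ROTATION_S = 60 / 7200          # one revolution at 7200 rpm: ~8.33 ms
AVG_LATENCY_S = ROTATION_S / 2  # average rotational latency: ~4.17 ms
TRANSFER_BPS = 20e6             # assumed ~20 MB/s sustained media rate

def op_time(block_bytes):
    """Rough time for one random read/write of block_bytes."""
    return AVG_SEEK_S + AVG_LATENCY_S + block_bytes / TRANSFER_BPS

t8 = op_time(8 * 1024)
t32 = op_time(32 * 1024)
print(f"8K op:  {t8 * 1e3:.2f} ms")
print(f"32K op: {t32 * 1e3:.2f} ms")
print(f"32K costs about {t32 / t8:.2f}x an 8K op, nowhere near 4x")
```

Under these assumptions the 32K operation comes out only about 10%
slower than the 8K one, because seek and rotational latency dwarf the
extra 24K of transfer time.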

The overhead of an additional 24K is minute compared to the cost of a
disk operation. So if any measurable benefit can come from having
bigger buffers, i.e. having more data available per disk operation, it
will probably be worth it.
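The throughput experiment described earlier in the thread could be
re-created along these lines (the file name, data volume, and block
sizes below are my own assumptions, not details from the original
post):

```python
# Sketch of a block-size throughput probe: write a fixed amount of data
# using different block sizes, forcing it to disk, and report MB/s.

import os
import time

TOTAL = 8 * 1024 * 1024          # 8 MB per run (assumption)
PATH = "blocksize_probe.bin"     # hypothetical scratch file

def throughput(block_size):
    """Sequential write throughput in MB/s for the given block size."""
    buf = os.urandom(block_size)
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL // block_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # force data to media, not just cache
    elapsed = time.perf_counter() - start
    os.remove(PATH)
    return TOTAL / elapsed / 1e6

for size in (4096, 8192, 16384, 32768, 65536):
    print(f"{size // 1024:3d}K blocks: {throughput(size):7.1f} MB/s")
```

Note this measures sequential writes, like the DICOM test described
above; a random-I/O variant would favor smaller blocks, which is the
counterargument made in the thread.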


