Decreasing BLKSZ

From: "Marc Morin" <marc(at)sandvine(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Decreasing BLKSZ
Date: 2006-09-25 20:09:33
Message-ID: 2BCEB9A37A4D354AA276774EE13FB8C201130680@mailserver.sandvine.com
Lists: pgsql-performance

Our application has a number of inserters posting rows of network
statistics into a database. This is occurring continuously. The
following is an example of a stats table (simplified, but it maintains
the key concepts).


CREATE TABLE stats
(
    logtime timestamptz,
    key     int,
    stat1   bigint,
    stat2   bigint,
    stat3   bigint,
    PRIMARY KEY (key, logtime)
);
CREATE INDEX x ON stats(logtime);

There are on the order of 1M unique values for "key", and a new row for
each key value will be inserted, say, every 15 minutes. These rows are
divided up among a number of different inserting elements, but that
isn't relevant.

The problem is, the insert pattern has low correlation with the
(key,logtime) index. In this case, we would need >1M blocks in our
shared_buffers space to prevent a read-modify-write type of pattern
from happening during the inserts (given a large enough database).

We're wondering about lowering the BLKSZ value so that the total working
set of blocks required can be maintained in our shared buffers. Our
database server only has 8G of memory, and we would likely need to
reduce BLKSZ to 512....
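For concreteness, here is the back-of-envelope arithmetic behind that
estimate (my own sketch, assuming roughly one hot block per key between
insert periods, which is the worst case when inserts have low correlation
with the index order):

```python
# Rough working-set estimate: ~1M distinct keys, and with low
# insert/index correlation, roughly one hot block per key.
hot_blocks = 1_000_000

for blksz in (8192, 512):  # default BLKSZ vs. the proposed 512 bytes
    working_set_gb = hot_blocks * blksz / 1024**3
    print(f"BLKSZ={blksz}: ~{working_set_gb:.2f} GB to cache the working set")
```

With the default 8192-byte blocks the working set is on the order of
7.6 GB, i.e. essentially all of the 8G of RAM; at 512 bytes it drops to
roughly 0.5 GB, which fits comfortably. The real numbers depend on how
many keys actually share a block, so treat this as an upper bound.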

Any comments on other effects or gotchas from lowering BLKSZ?
Currently, our database is thrashing its cache of blocks: we're
getting only ~100 inserts/second, and every insert results in an
evict-read-modify operation.
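To put that ~100 inserts/second in context (my arithmetic, not from
measurement): with ~1M keys each producing a row every 15 minutes, the
sustained rate the system needs is about an order of magnitude higher
than what we're seeing.

```python
keys = 1_000_000
period_s = 15 * 60            # one new row per key every 15 minutes

required_rate = keys / period_s          # rows/second needed to keep up
observed_rate = 100                      # what we currently achieve
shortfall = required_rate / observed_rate

print(f"required: ~{required_rate:.0f} inserts/s, "
      f"shortfall: ~{shortfall:.0f}x")
```

That ~1111 inserts/second requirement versus ~100 achieved is roughly an
11x gap, consistent with every insert being bounded by one random disk
read rather than by CPU or WAL throughput.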


Ideally, we'd like to keep the entire working set of blocks in memory
across insert periods so that the I/O looks more like full-block
writes....

Thanks
Marc

