
Re: Decreasing BLKSZ

From: Markus Schaber <schabi(at)logix-tt(dot)com>
To: Marc Morin <marc(at)sandvine(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Decreasing BLKSZ
Date: 2006-09-25 21:11:08
Lists: pgsql-performance
Hi, Marc,

Marc Morin wrote:

> The problem is, the insert pattern has low correlation with the
> (key,logtime) index.   In this case, would need >1M blocks in my
> shared_buffer space to prevent a read-modify-write type of pattern
> happening during the inserts (given a large enough database).

Would it be possible to change the primary key to (logtime, key)? This
could help keep the "working window" small.
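For illustration, assuming a hypothetical log table named "logdata" with
those two columns, swapping the key order would look roughly like this:

```sql
-- Hypothetical table and constraint names; adjust to your schema.
-- Drop the old primary key and re-add it with the columns swapped:
ALTER TABLE logdata DROP CONSTRAINT logdata_pkey;
ALTER TABLE logdata ADD PRIMARY KEY (logtime, key);
```

With (logtime, key) leading on the timestamp, new entries always land at
the right-hand edge of the index, so only a few index pages are hot at
any given moment.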

Secondly, the real working set is smaller, as the rows are all inserted
at the end of the table, filling each page until it's full, so only the
last pages are accessed. There's no relation between the index order and
the order of data on disk, unless you CLUSTER.

> Any comment on other effects or gotchas with lowering the size of
> BLKSZ?  Currently, our database is thrashing its cache of blocks and
> we're getting only ~100 inserts/second; every insert results in an
> evict-read-modify operation.

I'm not sure that's the correct diagnosis.

Do you have one transaction per insert? Every transaction means a forced
sync to the disk, so you won't get more than about 100-200 commits per
second, depending on your actual disk rotation speed.

To improve throughput when a number of inserters run in parallel, try
tweaking the config variables commit_delay and commit_siblings, so you
get higher overall throughput at the cost of an increased delay per
connection, and increase the number of inserters. With sensible
tweaking, the throughput should scale nearly linearly with the number of
backends. :-)
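As a rough sketch, both settings live in postgresql.conf; the values
below are illustrative starting points, not recommendations:

```
# postgresql.conf -- illustrative values only
commit_delay = 1000     # microseconds to wait so other commits can share the fsync
commit_siblings = 5     # only delay if at least this many other transactions are active
```

commit_delay only kicks in when at least commit_siblings other
transactions are open, so it mostly helps under real concurrency.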

If feasible for your application, you can also bundle several log
entries into a single transaction. If you're CPU bound, you can use COPY
instead of INSERT or (if you can wait for 8.2) the new multi-row INSERT
to further improve performance, but I doubt that you're CPU bound.
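Sketches of the three variants, again using the hypothetical "logdata"
table and made-up column names:

```sql
-- 1. Bundle several log entries into one transaction (one fsync per batch):
BEGIN;
INSERT INTO logdata (key, logtime, payload) VALUES (1, now(), 'a');
INSERT INTO logdata (key, logtime, payload) VALUES (2, now(), 'b');
COMMIT;

-- 2. COPY, the fastest bulk-loading path:
COPY logdata (key, logtime, payload) FROM STDIN;

-- 3. Multi-row INSERT (new in 8.2):
INSERT INTO logdata (key, logtime, payload)
VALUES (1, now(), 'a'),
       (2, now(), 'b');
```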

The only way to "really" get over the sync limit is to put (at least)
the WAL on battery-backed RAM or SSD media that has no "spinning disk"
physical limit, or to abandon crash safety by turning fsync off.
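Turning fsync off is a one-line change, but it trades away crash safety
for the whole cluster (illustrative postgresql.conf fragment):

```
# postgresql.conf -- DANGEROUS: an OS crash or power loss can corrupt the database
fsync = off
```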

Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in Europe!

