Re: Migration study, step 1: bulk write performance optimization

From: PFC <lists(at)peufeu(dot)com>
To: "Mikael Carneholm" <Mikael(dot)Carneholm(at)wirelesscar(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Migration study, step 1: bulk write performance optimization
Date: 2006-03-20 16:35:00
Message-ID: op.s6p0cm0icigqcu@apollo13
Lists: pgsql-performance

> using a 16kb block size (for read performance) will probably be
> considered as well.

Hm, this means that when Postgres wants to write just one 8k page, the OS
has to read 16k, replace half of it with the new page, and write 16k back
out... it's probably better to stick with the usual block size. It also
means reading 16k every time Postgres really only wants one page, which
happens quite often except in a seq scan.
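(If you want to check what you actually have, something along these lines
works -- the device name is only a placeholder:)

    # PostgreSQL page size (8192 bytes by default)
    pg_controldata $PGDATA | grep 'block size'
    # filesystem block size, ext2/3 example on a hypothetical device
    tune2fs -l /dev/sdb1 | grep 'Block size'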

> NOTE: this machine/configuration is NOT what we will be using in
> production if the study turns out OK, it's just supposed to work as a
> development machine in the first phase whose purpose more or less is to
> get familiar with configurating Postgres and see if we can get the
> application up and running (we will probably use a 64bit platform and

Opteron xDDD

Use XFS or Reiser; ext3 isn't well suited for this. Mount with noatime AND
nodiratime.
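For example, an fstab entry along these lines (device, mount point and
filesystem are just placeholders):

    # hypothetical entry -- adjust device, mountpoint and fs to your setup
    /dev/sdb1   /var/lib/pgsql   xfs   noatime,nodiratime   0 0

or remount an already-mounted filesystem:

    mount -o remount,noatime,nodiratime /var/lib/pgsql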

It's safe to turn off fsync while importing your data.
For optimum speed, put the WAL on another physical disk.
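Something like this, assuming the cluster lives in /var/lib/pgsql/data and
the second disk is mounted on /mnt/wal (both paths hypothetical):

    # postgresql.conf -- only while bulk loading, turn fsync back on afterwards
    fsync = off

    # stop the server, move the WAL directory to the other disk, symlink it back
    mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
    ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog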

Check the docs to see which of maintenance_work_mem, work_mem, or sort_mem
is used for index creation, and set it to a very large value to speed up
the index build. Create your indexes with fsync=off as well.
