tuning Postgres for large data import (using Copy from)

From: "Marc Mamin" <m(dot)mamin(at)gmx(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: tuning Postgres for large data import (using Copy from)
Date: 2005-05-12 10:34:46
Message-ID: 6428.1115894086@www73.gmx.net
Lists: pgsql-performance

Hello,

I'd like to tune Postgres for large data imports (using COPY FROM).
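For reference, the load itself is just a plain COPY from a file, roughly like
this (table name, path and delimiter are only placeholders):

    COPY my_table FROM '/mnt/disk1/source/data.txt' WITH DELIMITER '|';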

Here are a few steps already taken:

1) use 3 different disks for:

-1: source data
-2: index tablespaces
-3: data tablespaces
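The tablespace layout looks roughly like this (paths, table and index names
are only placeholders):

    CREATE TABLESPACE ts_data  LOCATION '/mnt/disk3/pg_data';
    CREATE TABLESPACE ts_index LOCATION '/mnt/disk2/pg_index';

    CREATE TABLE my_table (
        id      integer,
        payload text
    ) TABLESPACE ts_data;

    CREATE INDEX my_table_id_idx ON my_table (id) TABLESPACE ts_index;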


2) define all foreign keys as initially deferred
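i.e. the constraints are declared along these lines (table, column and
constraint names are only placeholders):

    ALTER TABLE my_table
        ADD CONSTRAINT my_table_parent_fk
        FOREIGN KEY (parent_id) REFERENCES parent_table (id)
        DEFERRABLE INITIALLY DEFERRED;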

3) tune some parameters:

max_connections = 20
shared_buffers = 30000
work_mem = 8192
maintenance_work_mem = 32768
checkpoint_segments = 12
(I also modified the kernel accordingly)
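On Linux, shared_buffers = 30000 means roughly 240 MB of SysV shared memory,
so e.g. in /etc/sysctl.conf (the exact figure below is only an example):

    # allow a single shared memory segment of up to ~256 MB
    kernel.shmmax = 268435456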

4) run VACUUM regularly
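e.g. a plain VACUUM (or VACUUM ANALYZE) on the loaded tables after each import
batch, along these lines (the table name is only a placeholder):

    VACUUM ANALYZE my_table;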

The server runs Red Hat Linux and has 1 GB of RAM.

In production (which may run on a better server), I plan to:

- import a few million rows per day,
- keep up to ca. 100 million rows in the db,
- delete older data (roughly as sketched below).
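The deletion of older data would be something along these lines (the table,
the timestamp column and the retention period are only placeholders):

    DELETE FROM my_table
    WHERE created_at < now() - interval '90 days';

    VACUUM ANALYZE my_table;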

I've seen a few postings on hash vs. btree indexes which say that hash indexes
do not work very well in Postgres; currently I only use btree indexes. Could I
gain performance by using hash indexes as well?
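For comparison, the two index types would be created like this (just the
syntax; names are placeholders):

    CREATE INDEX my_table_id_btree ON my_table USING btree (id);
    CREATE INDEX my_table_id_hash  ON my_table USING hash  (id);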

How does Postgres handle concurrent COPY FROM on the same table / on different
tables?

I'd be glad of any further suggestions on how to increase performance.

Marc

