bulk insert performance problem

From: "Christian Bourque" <christian(dot)bourque(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: bulk insert performance problem
Date: 2008-04-08 03:01:18
Message-ID: a6ee49d30804072001mf776603se42c475548cde018@mail.gmail.com
Lists: pgsql-performance

Hi,

I have a performance problem with a script that does massive bulk
inserts into 6 tables. When the script starts, performance is really
good, but it degrades minute after minute and the script takes almost
a day to finish!
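For context, the pattern the script follows is batched inserts inside explicit transactions rather than a commit per row. A minimal sketch of that loop (the actual script isn't shown here; this uses Python's stdlib sqlite3 driver purely as a stand-in for a PostgreSQL connection, and the table name and batch size are illustrative):

```python
import sqlite3

# Stand-in connection; the real script would use a PostgreSQL driver
# (e.g. psycopg2) with the same batching pattern.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER, name TEXT)")

rows = [(i, "name-%d" % i) for i in range(10000)]

# Insert in batches inside explicit transactions instead of
# committing after every row; per-row commits are a common cause
# of slow bulk loads.
BATCH = 1000
for start in range(0, len(rows), BATCH):
    cur.executemany("INSERT INTO items VALUES (?, ?)",
                    rows[start:start + BATCH])
    conn.commit()

count = cur.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 10000
```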

I've tried almost everything suggested on this list: changed our
external raid array from raid 5 to raid 10, tweaked postgresql.conf to
the best of my knowledge, moved pg_xlog to a different array, dropped
the tables before running the script. But the performance gain was
negligible even after all these changes...
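For what it's worth, the postgresql.conf settings I focused on were along these lines (the exact values below are illustrative, not my actual config):

```
# Illustrative bulk-load-oriented settings for 8.2, values are assumptions:
shared_buffers = 1GB          # 8.2 accepts memory unit suffixes
checkpoint_segments = 64      # fewer, larger checkpoints during the load
wal_buffers = 1MB
maintenance_work_mem = 512MB  # helps index builds after the load
autovacuum = off              # re-enabled after the bulk load
```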

IMHO the hardware that we use should be up to the task: Dell PowerEdge
6850, 4 x 3.0GHz Dual Core Xeon, 8GB RAM, 3 x 300GB SAS 10K in raid 5
for / and 6 x 300GB SAS 10K in raid 10 (MD1000) for PG data; the data
filesystem is ext3 mounted with noatime and data=writeback. Running on
openSUSE 10.3 with PostgreSQL 8.2.7. The server is dedicated to
PostgreSQL...
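For completeness, the data array is mounted roughly like this (the device and mount point below are illustrative, not the real ones):

```
# Illustrative /etc/fstab entry for the PG data array:
/dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0 2
```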

We tested the same script and schema with Oracle 10g on the same
machine and it took only 2.5h to complete!

What I don't understand is that with Oracle the performance stays
consistent, while with PG it deteriorates over time...

Any ideas? Are there any other improvements I could make?

Thanks

Christian
