Load experimentation

From: Ben Brehmer <benbrehmer(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Load experimentation
Date: 2009-12-07 18:12:28
Message-ID: 4B1D458C.6090806@gmail.com
Lists: pgsql-performance

Hello All,

I'm in the process of loading a massive amount of data (500 GB). After
some initial timings, I'm looking at roughly 260 hours to load the entire
500 GB. Ten days seems like an awfully long time, so I'm searching for
ways to speed this up; one COPY-based idea is sketched after the specs
below. The load is happening in the Amazon cloud (EC2), on an m1.large
instance:
-7.5 GB memory
-4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
-64-bit platform
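
One idea I'm weighing is restructuring the load around COPY rather than
individual INSERTs, since COPY avoids most of the per-row overhead. A
minimal sketch, assuming the data can be staged as flat files (the table,
columns, and file path are placeholders):

COPY big_table (id, col1, col2)
    FROM '/data/chunk_0001.csv'
    WITH CSV;

If I split the input into several files, I could run a few of these in
parallel psql sessions to make use of more than one core.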

So far I have made the following modifications to my postgresql.conf
(PostgreSQL 8.1.3):

shared_buffers = 786432          # 8 kB pages, i.e. 6 GB
work_mem = 10240                 # kB, i.e. 10 MB
maintenance_work_mem = 6291456   # kB, i.e. 6 GB
max_fsm_pages = 3000000
wal_buffers = 2048               # 8 kB pages, i.e. 16 MB
checkpoint_segments = 200        # 16 MB WAL segments
checkpoint_timeout = 300         # seconds
checkpoint_warning = 30          # seconds
autovacuum = off
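
Since this machine exists only for the load and I can simply restart the
load from scratch if anything goes wrong, I'm also wondering whether it
would be reasonable to add the following for the duration of the load
(neither is in my config today), reverting once the load is done:

fsync = off              # no flush at commit; unsafe except on a rebuildable box
full_page_writes = off   # less WAL volume; same crash-safety caveat

I'd turn both back on before the database sees any real use.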

There are a variety of instance types available in the Amazon cloud
(http://aws.amazon.com/ec2/instance-types/), including high-memory and
high-CPU. High-memory instance types come with 34 GB or 68 GB of memory.
High-CPU instance types have a lot less memory (7 GB max) but up to 8
virtual cores. I am more than willing to change to any of the other
instance types.

Also, there is nothing else happening on the loading server. It is
completely dedicated to the load.
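
On that note, since nothing else needs the tables during the load, I
assume I could also drop indexes up front and rebuild them at the end,
along these lines (index and table names are again placeholders):

DROP INDEX big_table_col1_idx;
-- ... bulk load here ...
CREATE INDEX big_table_col1_idx ON big_table (col1);
ANALYZE big_table;

My understanding is that building an index once at the end is much
cheaper than maintaining it row by row, and the large
maintenance_work_mem should help the rebuild.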

Any advice would be greatly appreciated.

Thanks,

Ben
