Copy database performance issue

From: Steve <cheetah(at)tanabi(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Copy database performance issue
Date: 2006-10-23 21:51:40
Message-ID: Pine.GSO.4.64.0610231734220.3930@kingcheetah.tanabi.org
Lists: pgsql-performance

Hello there;

I've got an application that has to copy an existing database to a new
database on the same machine.

I used to do this with a pg_dump command piped to psql to perform the
copy; however, the database is 18 GB on disk, and this takes a LONG
time to do.
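For reference, this is the kind of pipeline I mean; `old_db` and `new_db` are placeholder names, and connection options would need adjusting for a real setup:

```shell
# Create the target database, then stream a logical dump of old_db into it.
# This re-executes every CREATE/INSERT, which is why it is slow for 18 GB.
createdb new_db
pg_dump old_db | psql new_db
```

Since every row is dumped as SQL and re-inserted, the cost scales with the data volume plus index rebuild time, not just raw disk throughput.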

So I read up, found some things in this list's archives, and learned that
I can use createdb --template=old_database_name to do the copy in a much
faster way, since no one is accessing the database while the copy
happens.
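Concretely, the template-based copy looks like this (names are again placeholders); it works at the file level rather than replaying SQL, but it refuses to run if any session is connected to the source database:

```shell
# Clone old_db into new_db by using old_db as a template.
# PostgreSQL copies the underlying files directly, so this is much
# faster than pg_dump | psql, but old_db must have no active connections.
createdb --template=old_db new_db
```

The SQL equivalent is `CREATE DATABASE new_db TEMPLATE old_db;`.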

The problem is, it's still too slow. My question is: is there any way I
can use 'cp' or something similar to copy the data, and THEN, after that's
done, modify the database system files/system tables to recognize the
copied database?

For what it's worth, I've got fsync turned off, and I've read every tuning
guide out there, so my settings are probably pretty good. It's a
Solaris 10 machine (V440, 2 processors, 4 Ultra320 drives, 8 GB RAM), and
here are some stats:

shared_buffers = 300000
work_mem = 102400
maintenance_work_mem = 1024000

bgwriter_lru_maxpages = 0
bgwriter_lru_percent = 0

fsync = off
wal_buffers = 128
checkpoint_segments = 64

Thank you!

Steve Conley
