From: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
To: Steve <cheetah(at)tanabi(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Copy database performance issue
Date: 2006-10-25 00:25:13
Message-ID: 20061025002513.GC26892@nasby.net
Lists: pgsql-performance
On Mon, Oct 23, 2006 at 05:51:40PM -0400, Steve wrote:
> Hello there;
>
> I've got an application that has to copy an existing database to a new
> database on the same machine.
>
> I used to do this with a pg_dump command piped to psql to perform the
> copy; however, the database is 18 GB on disk, so this takes a LONG
> time to do.
>
> So I read up, found some things in this list's archives, and learned that
> I can use createdb --template=old_database_name to do the copy in a much
> faster way since people are not accessing the database while this copy
> happens.
>
>
> The problem is, it's still too slow. My question is, is there any way I
> can use 'cp' or something similar to copy the data, and THEN after that's
> done modify the database system files/system tables to recognize the
> copied database?
AFAIK, that's what createdb --template already does... CREATE DATABASE
with a template performs a file-level copy of the template database's
directory, essentially doing what cp does.
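For reference, a minimal sketch of the template-based copy (database
names are placeholders; the template database must have no active
connections while the copy runs):

```shell
# File-level copy via the template mechanism -- much faster than
# pg_dump | psql because it skips parsing and re-inserting every row.
createdb --template=old_database_name new_database_name

# Equivalent SQL, run from psql as a superuser or the template's owner:
#   CREATE DATABASE new_database_name TEMPLATE old_database_name;
```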
> For what it's worth, I've got fsync turned off, and I've read every tuning
> thing out there and my settings there are probably pretty good. It's a
> Solaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and
> here's some stats:
I don't think any of the postgresql.conf settings will really come into
play when you're doing this...
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)