From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Aaron Brown <abrown(at)bzzagent(dot)com>
Cc: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: pg_restore failing with "ERROR: out of memory"
Date: 2008-03-19 19:06:39
Message-ID: 17408.1205953599@sss.pgh.pa.us
Lists: pgsql-admin
Aaron Brown <abrown(at)bzzagent(dot)com> writes:
> I'm attempting to do something that should be a trivially simple task. I
> want to do a data-only dump of my production data in the public schema and
> restore it on another machine.
Does it really need to be data-only? A regular schema+data dump usually
restores a lot faster.
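As a sketch of what a regular schema+data dump and restore looks like (assuming hypothetical database names `proddb` and `newdb`; these commands need a live server and appropriate connection options):

```shell
# Custom-format dump carries both schema and data
pg_dump -Fc proddb > proddb.dump

# Restore everything in one pass; constraints and indexes
# are created after the data is loaded
pg_restore -d newdb proddb.dump
```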
Your immediate problem is probably that it's running out of memory for
pending foreign-key triggers. Even if it didn't run out of memory, the
ensuing one-tuple-at-a-time checks would take forever. You'd be better
off dropping the FK constraint, loading the data, and re-creating the
constraint.
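The drop-and-recreate approach might look like this in psql, using a hypothetical table `orders` with a foreign key `orders_customer_id_fkey` referencing `customers` (all names here are placeholders, not from the original report):

```sql
-- Drop the FK so the bulk load skips the per-row trigger checks
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;

-- ... load the data-only dump here, e.g. with pg_restore -a ...

-- Re-create the constraint; validation happens once, in a single scan,
-- instead of queuing one pending trigger event per loaded row
ALTER TABLE orders ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```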
There's further discussion of bulk-loading tricks in the manual:
http://www.postgresql.org/docs/8.2/static/populate.html
regards, tom lane
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Aaron Brown | 2008-03-19 19:08:59 | Re: pg_restore failing with "ERROR: out of memory" |
| Previous Message | Bhella Paramjeet-PFCW67 | 2008-03-19 18:50:11 | Postgres database and firewall |