
Re: Problems restoring big tables

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: arnaulist(at)andromeiberica(dot)com
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Problems restoring big tables
Date: 2007-01-06 03:02:45
Lists: pgsql-admin
Arnau <arnaulist(at)andromeiberica(dot)com> writes:
>    I have to restore a database whose dump, in custom format (-Fc),
> is about 2.3GB. To speed up the restore I first restored everything
> except the contents of a few tables (by playing with pg_restore -l);
> that's where most of the data is stored.
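The `pg_restore -l` / `-L` mechanism the poster refers to works roughly like this (a sketch only; the dump filename, database name, and TOC entries below are hypothetical):

```shell
# pg_restore -l writes the archive's table of contents; entries that are
# commented out with ';' in the edited list are skipped by pg_restore -L:
#   pg_restore -l dump.fc > toc.list
# Simulate a fragment of such a TOC listing here:
cat > toc.list <<'EOF'
5; 1259 16390 TABLE public statistics_operators postgres
4; 0 16390 TABLE DATA public statistics_operators postgres
EOF
# Comment out the big table's data entry so a first pass skips it:
sed 's/^\([0-9]*;.*TABLE DATA.*statistics_operators.*\)$/;\1/' toc.list > toc.filtered
#   pg_restore -L toc.filtered -d mydb dump.fc
grep -c '^;' toc.filtered   # prints 1: one entry disabled
```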

I think you've outsmarted yourself by creating indexes and foreign keys
before loading the data.  That's *not* the way to make it faster.
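The ordering Tom recommends can be sketched in SQL like this (illustrative only; the table, columns, and file path are hypothetical):

```sql
-- Fast path: create the bare table, bulk-load, then build constraints.
CREATE TABLE statistics_operators (op_id bigint, stat_value bigint);

COPY statistics_operators FROM '/path/to/data.copy';  -- bulk load first

-- Only now pay the one-time cost of index and FK construction:
CREATE INDEX statistics_operators_op_idx ON statistics_operators (op_id);
ALTER TABLE statistics_operators
  ADD FOREIGN KEY (op_id) REFERENCES operators (op_id);
```

Building the index and checking the foreign key in one pass over loaded data is far cheaper than maintaining them row by row during the load.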

> pg_restore: ERROR:  out of memory
> DETAIL:  Failed on request of size 32.
> CONTEXT:  COPY statistics_operators, line 25663678: "137320348  58618027 

I'm betting you ran out of memory for deferred-trigger event records.
It's best to load the data and then establish foreign keys ... indexes
too.  See
for some of the underlying theory.  (Note that pg_dump/pg_restore
gets most of this stuff right already; it's unlikely that you will
improve matters by manually fiddling with the load order.  Instead,
think about increasing maintenance_work_mem and checkpoint_segments,
which pg_restore doesn't risk fooling with.)
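The suggestion above maps onto postgresql.conf settings along these lines (values are illustrative and should be tuned to available RAM and disk; they can be raised temporarily just for the restore):

```
# postgresql.conf -- illustrative values for a one-off bulk restore
maintenance_work_mem = 512MB   # speeds CREATE INDEX / ADD FOREIGN KEY
checkpoint_segments = 32       # fewer checkpoints during the bulk load
```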

			regards, tom lane
