
Re: Examining very large dumps

From: Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Examining very large dumps
Date: 2008-04-17 05:46:24
Lists: pgsql-admin
On Thursday 17 April 2008 08:25:22, Tom Lane wrote:
> Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com> writes:
> >> Did you make the dump using 8.3's pg_dump?
> > Yes, with 8.3.1's pg_dump (data only dump)
> That would be your problem.  *Don't* use a data-only dump, it
> lobotomizes all intelligence in the system and leaves it up to you
> to deal with foreign-key ordering issues.  There are lots of
> performance arguments against that as well.  See the advice at

Oops, now it seems I have an issue.
The whole reason I went this way was that I wanted a schema-only dump first,
in order to clean it of everything related to contrib/tsearch2, contrib/intarray and dbsize,
to edit the triggers (substituting tsearch2 with tsvector_update_trigger), and to convert
the tsearch2 indexes to GIN.
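For the trigger and index edits, a minimal sketch of what I mean, assuming a hypothetical table docs with a tsvector column fti maintained from title and body (the table, column, and index names are my assumptions, not from the real schema):

```sql
-- Old contrib/tsearch2 trigger (assumed original form):
--   CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON docs
--     FOR EACH ROW EXECUTE PROCEDURE tsearch2(fti, title, body);

-- 8.3 built-in replacement: the text search configuration is now an
-- explicit argument rather than tsearch2's implicit default.
CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON docs
  FOR EACH ROW EXECUTE PROCEDURE
  tsvector_update_trigger(fti, 'pg_catalog.english', title, body);

-- Rebuild the full-text index as GIN instead of the old GiST one:
CREATE INDEX docs_fti_idx ON docs USING gin (fti);
```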

So the plan was:
1) take the schema-only dump
2) edit the schema dump
3) create the db
4) import _int.sql
5) import the schema
6) restore the data
This procedure is more or less the official upgrade procedure noted on
and described on
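The six steps above could be sketched as a shell script. This is a dry-run sketch (it echoes each command instead of executing it); the database names mydb/mydb83 and the _int.sql path are my placeholders, not taken from the post:

```shell
#!/bin/sh
# Dry-run sketch of the upgrade plan; drop the 'echo' in run() to execute.
run() { echo "$@"; }

run pg_dump --schema-only -f schema.sql mydb    # 1) schema-only dump (8.3 pg_dump)
# 2) hand-edit schema.sql: drop tsearch2/intarray/dbsize objects,
#    switch triggers to tsvector_update_trigger, change indexes to GIN
run createdb mydb83                             # 3) create the db
run psql -d mydb83 -f _int.sql                  # 4) import _int.sql
run psql -d mydb83 -f schema.sql                # 5) import the edited schema
run sh -c 'pg_dump --data-only mydb | psql -d mydb83'   # 6) restore the data
```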

I am reading this link right away.

Any thoughts very welcome.
> 			regards, tom lane

Achilleas Mantzios


