Re: Schema-qualified statements in pg_dump output

From: Owen Hartnett <owen(at)clipboardinc(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>, Bernd Helmle <mailings(at)oopsware(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Schema-qualified statements in pg_dump output
Date: 2008-07-22 03:53:19
Message-ID: p06240802c4ab05ed1bbe@[]
Lists: pgsql-hackers

At 8:34 AM +0100 7/11/08, Simon Riggs wrote:
>On Mon, 2008-07-07 at 15:46 +0200, Bernd Helmle wrote:
>> There's a behavior in pg_dump that annoyed me a little bit, the last few
>> times I had to deal with it:
>> Consider you have to dump a specific namespace only; you are going to use
>> pg_dump -n <your_schema> [-t <tables>].
>> I found it a common use case to restore this dump into a different schema
>> by simply changing the search_path. With included ownerships this doesn't
>> work, since pg_dump always outputs the necessary DDL schema-qualified.
>> Okay, it isn't too hard to use sed to replace the necessary statements to
>> use the correct schema, but I think it would be much nicer if pg_dump would
>> omit the schema-qualified table name here. I'd like to create a patch for
>> this, if we agree on changing this behavior.
>The use case you mention is something that would be of value to many
>people, and I support your efforts to add a new option for this.
>No useful workarounds exist without flaws: i) editing with sed might
>well end up editing character data in the table(s) at the same time and
>you may never even notice. ii) reloading to the same schema (renaming
>etc) is not acceptable if the target has a production schema of that
>name already. iii) manually editing a large file is problematic.
>Tom's posted comments that you need to look at all of the places the
>schemaname is used to see what we will need/not need to change. It's
>more than just altering the owner, but that doesn't mean we don't want
>it or that it's impossible.
>Please pursue this further.

I've been looking into this matter, although I'm a noob at PostgreSQL
hacking. What I thought would be a better way is to alter pg_dump to
accept a flag -m <masquerade_name>. It would require the
-n <schema_name> option, and fail otherwise.

It would generate a schema dump where all the references to
<schema_name> were replaced by <masquerade_name>.

This would allow you to easily make a copy of a schema into a new schema.
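Roughly, the effect would be the textual transformation below, sketched here with sed over a made-up dump fragment (the schema, table, and owner names are invented for illustration; the real feature would rename inside pg_dump itself rather than post-process the text):

```shell
# Hypothetical fragment of a pg_dump -n public output
cat > /tmp/frag.sql <<'EOF'
CREATE TABLE public.invoices (id integer NOT NULL);
ALTER TABLE public.invoices OWNER TO owen;
SET search_path = public, pg_catalog;
EOF

# What -n public -m fy2008 would emit: every reference to the
# dumped schema rewritten to the masquerade name
sed 's/public/fy2008/g' /tmp/frag.sql
```

On this fragment every line comes out referencing fy2008 instead of public: the CREATE TABLE, the ALTER ... OWNER, and the search_path setting.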

My needs are that my production database is the "public" schema, and
each year I want to archive "fy2007", "fy2008", etc. schemas which
have the final information for those years. So at the end of this
year, I want to duplicate the "public" schema into the "fy2008"
schema, and continue with "public."

I could pg_dump "public", rename "public" to "fy2008", and then
restore "public", but this requires being without "public" for a
short interval. It would be better for me to simply:

pg_dump -c -n public -m fy2008 database | psql database

And that would give me a completely mechanical way to duplicate a
schema, which means I could put it in a script that users could call.

From what I've seen, implementing it would mean finding where the
schema name is currently emitted in the code, then substituting the
name given by the -m flag.

Having already done this by manually editing the dump files, I can say
it really cries out for a better procedure.
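To illustrate flaw (i) from the list above: a blind textual substitution also rewrites matching character data inside the dump. A made-up fragment (names invented for illustration):

```shell
# Hypothetical fragment where row data happens to contain the schema name
cat > /tmp/frag2.sql <<'EOF'
ALTER TABLE public.notes OWNER TO owen;
INSERT INTO public.notes VALUES ('meet at the public library');
EOF

# The naive rename silently corrupts the row data as well:
sed 's/public/fy2008/g' /tmp/frag2.sql
# the INSERT now reads '... the fy2008 library' -- data corruption
```

A rename done inside pg_dump (or via the proposed -m flag) only touches identifiers it knows are schema references, so it cannot make this mistake.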

Perhaps my solution is excessive compared to the other offered
solution, but it would have the benefit that the user knows precisely
what he is doing from the flag setting.

