From: | "CAJ CAJ" <pguser(at)gmail(dot)com> |
---|---|
To: | "Andrew Dunstan" <andrew(at)dunslane(dot)net> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Updating large postgresql database with blobs |
Date: | 2007-03-12 17:29:01 |
Message-ID: | 467669b30703121029s4f2c2820t404befb25168aa04@mail.gmail.com |
Lists: | pgsql-hackers |
<snip>
> > What is the fastest way to upgrade postgres for large databases that
> > have binary objects?
>
> Your procedure dumps and restores the databases twice. This seems less
> than sound. My prediction is that you could get a 50% speed improvement
> by fixing that ...
Thanks for the response. This'd be wonderful if I can get my process right.
My assumption (probably incorrect) is that pg_dump has to be executed twice
on a database with blobs: once to get the data and once to get the blobs
(using the -b flag).
> The only thing you really need pg_dumpall for is the global tables. I
> would just use pg_dumpall -g to get those, and then use pg_dump -F c +
> pg_restore for each actual database.
This makes sense :) I assume that running pg_dump with -b will get all of
the data including the blobs?
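If I've got that right, the whole per-database flow would be something like
this (untested sketch; "mydb" and the file names are just placeholders):

    # Dump global objects (roles, tablespaces) once from the old cluster.
    pg_dumpall -g > globals.sql

    # Dump each database in custom format; -b includes the large objects (blobs).
    pg_dump -Fc -b -f mydb.dump mydb

    # Against the new cluster: restore globals first, then each database.
    psql -f globals.sql postgres
    createdb mydb
    pg_restore -d mydb mydb.dump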
> Another thing is to make sure that pg_dump/pg_restore are not competing
> with postgres for access to the same disk(s). One way to do that is to
> run them from a different machine - they don't have to be run on the
> server machine - of course then the network can become a bottleneck, so
> YMMV.
We are using separate servers for dump and restore.
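For what it's worth, running both ends from a third machine would look
roughly like this (sketch only; oldhost, newhost, and mydb are placeholders):

    # Both tools take -h/-p, so the dump and restore can run on a separate
    # box and talk to the old and new servers over the network.
    pg_dump -h oldhost -Fc -b -f mydb.dump mydb
    pg_restore -h newhost -d mydb mydb.dump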
Thanks again for your suggestions. This helps immensely.