From: "Hugo <Nabble>" <hugo(dot)tech(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_dump and thousands of schemas
Date: 2012-05-29 05:21:03
Message-ID: 1338268863476-5710341.post@n5.nabble.com
Lists: pgsql-hackers pgsql-performance
Thanks again for the hard work, guys.
When I said that the schemas were empty, I was talking about data, not
tables. So you are right that each schema has ~20 tables (plus indexes,
sequences, etc.), but almost no data (one or two rows at most).
The data doesn't seem to be the issue in this case (I may be wrong though),
so the sample database should be enough to find the weak spots that need
attention.
> but in the mean time it can be circumvented
> by using -Fc rather than -Fp for the dump format.
> Doing that removed 17 minutes from the run time.
We do use -Fc on our production server, but it doesn't help much (the dump
still takes more than 24 hours). I actually tried several different dump
options without success. It seems that you guys are very close to great
improvements here.
Thanks for everything!
Best,
Hugo
--
View this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766p5710341.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.