From: Guillaume Lelarge <guillaume(at)lelarge(dot)info>
To: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
Cc: anj patnaik <patna73(at)gmail(dot)com>, Scott Mead <scottm(at)openscg(dot)com>, Melvin Davidson <melvin6925(at)gmail(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: question
Date: 2015-10-16 06:27:49
Message-ID: CAECtzeV2tnrWi7Sf7ih0L9SKGhDCmEu5c6-5T_cYPtqTmdDGpg@mail.gmail.com
Lists: pgsql-general
2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>:
> On 10/15/2015 01:35 PM, anj patnaik wrote:
>
>> Hello all,
>> I will experiment with -Fc (custom). The file is already growing very
>> large.
>>
>> I am running this:
>> ./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump
>>
>> Are there any other options for large tables to run faster and occupy
>> less disk space?
>>
>
> Yes, do not double compress. -Fc already compresses the file.
>
>
Right. But I'd say "use custom format but do not compress with pg_dump".
Use the -Z0 option to disable compression, and use an external
multi-threaded tool such as pigz or pbzip2 to get faster and better
compression.
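
Putting the two pieces of advice together, a minimal sketch of the suggested pipeline might look like this (the table name RECORDER, database name postgres, and the /tmp output path are taken from the original command; the exact file names are just illustrative):

```shell
# Custom format (-Fc) with internal compression disabled (-Z0),
# piped through pigz, a parallel gzip, for faster compression:
pg_dump -t RECORDER -Fc -Z0 postgres | pigz > /tmp/recorder.dump.gz

# pg_restore can read a custom-format archive from stdin,
# so restoring is the reverse pipeline:
unpigz -c /tmp/recorder.dump.gz | pg_restore -d postgres
```

This avoids the double-compression problem Adrian pointed out (gzip-compressing an already-compressed -Fc archive wastes CPU for little gain), while still producing a compressed file on disk.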
--
Guillaume.
http://blog.guillaume.lelarge.info
http://www.dalibo.com