The images are stored in whatever format our users load them as, so we
don't have any control over their compression or lack thereof.
I ran pg_dump with the arguments you suggested, and my 4 GB test table
finished backing up in about 25 minutes, which seems great. The only
problem is that the resulting backup file was over 9 GB. Using -Z2
instead resulted in a 55-minute, 6 GB backup.
Here's my interpretation of those results: the TOAST tables for our
image files are compressed by Postgres. During the backup, pg_dump
uncompresses them, and if compression is turned on, recompresses the
backup. Please correct me if I'm wrong there.
If we can't find a workable balance using pg_dump, then it looks like
our next best alternative may be a utility to handle filesystem backups,
which is a little scary for on-site, user-controlled servers.
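For reference, here are the two invocations I compared (a sketch based on my test setup; host, database, table, and file names are placeholders to adjust for your environment):

```shell
# No pg_dump compression (-Z0): fastest, but the output file was
# larger than the table itself, since the TOASTed image data is
# decompressed during the dump and written out raw.
pg_dump -h localhost -p 5432 -U postgres -F c -Z0 \
    -t public.images -f backupTest_z0.backup db_name

# Light compression (-Z2): roughly doubled the runtime in my test,
# in exchange for about a 3 GB smaller backup file.
pg_dump -h localhost -p 5432 -U postgres -F c -Z2 \
    -t public.images -f backupTest_z2.backup db_name
```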
From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
Sent: Saturday, April 12, 2008 9:46 PM
To: Ryan Wells
Subject: Re: [ADMIN] Slow pg_dump
"Ryan Wells" <ryan(dot)wells(at)soapware(dot)com> writes:
> We have several tables that are used to store binary data as bytea (in
> this example image files),
Precompressed image formats, no doubt?
> pg_dump -i -h localhost -p 5432 -U postgres -F c -v -f
> "backupTest.backup" -t "public"."images" db_name
Try it with -Z0, or even drop the -Fc completely, since it's certainly
not very helpful on a single-table dump. Re-compressing already
compressed data is not only useless but impressively slow ...
Also, drop the -i, that's nothing but a foot-gun.
regards, tom lane