| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | "Phillip Smith" <phillip(dot)smith(at)weatherbeeta(dot)com(dot)au> |
| Cc: | "'Ryan Wells'" <ryan(dot)wells(at)soapware(dot)com>, pgsql-admin(at)postgresql(dot)org |
| Subject: | Re: Slow pg_dump |
| Date: | 2008-04-15 00:58:48 |
| Message-ID: | 25879.1208221128@sss.pgh.pa.us |
| Lists: | pgsql-admin |
"Phillip Smith" <phillip(dot)smith(at)weatherbeeta(dot)com(dot)au> writes:
>> Here's my interpretation of those results: the TOAST tables for
>> our image files are compressed by Postgres. During the backup,
>> pg_dump uncompresses them, and if compression is turned on,
>> recompresses the backup. Please correct me if I'm wrong there.
No, the TOAST tables aren't compressed; they're pretty much going to be
the raw image data (plus a bit of overhead).
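(Presumably that's because typical image formats are already compressed, so TOAST's compressor can't shrink them further. One rough way to verify it, sketched below with psycopg2 and hypothetical names (an `images` table with a bytea `image_data` column), is to compare the table's total on-disk size, which includes its TOAST table, against the logical size of the column. A ratio near 1.0 means TOAST compression bought nothing.)

```python
# Sketch: compare physical size (heap + TOAST + indexes) with the
# logical bytea size.  The table and column names are hypothetical,
# and the table is assumed to be non-empty.
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # assumed connection settings
cur = conn.cursor()
cur.execute("""
    SELECT pg_total_relation_size('images'),
           (SELECT sum(octet_length(image_data)) FROM images)
""")
on_disk, logical = cur.fetchone()
print("on-disk / logical = %.2f" % (on_disk / float(logical)))
```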
What I think is happening is that COPY OUT encodes the bytea data
fairly inefficiently (a single byte can expand to \\nnn, five bytes)
and the compression on the pg_dump side isn't doing very well at buying
that back.
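As a back-of-the-envelope sketch of that expansion (this assumes the escape-format bytea text output, where a non-printable byte prints as \nnn and COPY's own text escaping then doubles the backslash):

```python
# Estimate how much COPY text output inflates bytea in the "escape"
# output format: printable ASCII passes through unchanged, a literal
# backslash becomes \\ (then \\\\ after COPY's own escaping), and every
# other byte becomes \nnn (then \\nnn, five characters, after COPY).
import os

def copy_text_len(data: bytes) -> int:
    total = 0
    for b in data:
        if b == 0x5C:                # backslash: 4 characters on the wire
            total += 4
        elif 0x20 <= b <= 0x7E:      # printable ASCII: passes through
            total += 1
        else:                        # everything else: \\nnn, 5 characters
            total += 5
    return total

data = os.urandom(1_000_000)         # stand-in for compressed image bytes
print(copy_text_len(data) / len(data))   # prints roughly 3.5
```

So the text stream handed to the compressor starts out around three and a half times the size of the raw data.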
I experimented a bit and noticed that pg_dump -Fc is a great deal
smarter about storing large objects than big bytea fields --- it seems
to be pretty nearly one-to-one with the original data size when storing
a compressed file that was put into a large object. I don't know
whether it's practical for you to switch from bytea to large objects,
but in the near term I think that's your only option if the dump file
size is a showstopper for you.
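If you do try the switch, one possible migration path (a sketch using psycopg2's large-object support; the `images` table and its `id`/`image_data` columns are hypothetical) is to copy each bytea value into a new large object and keep its OID in the row:

```python
# Sketch of a bytea -> large object migration with psycopg2.  The
# table and column names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()
cur.execute("ALTER TABLE images ADD COLUMN image_oid oid")

cur.execute("SELECT id, image_data FROM images")
rows = cur.fetchall()                 # fine for modest tables; use a named
for row_id, data in rows:             # (server-side) cursor for huge ones
    lo = conn.lobject(0, "wb")        # oid=0 creates a new large object
    lo.write(bytes(data))
    lo.close()
    cur.execute("UPDATE images SET image_oid = %s WHERE id = %s",
                (lo.oid, row_id))
conn.commit()
```

Once the OIDs are verified, the old bytea column can be dropped, and per the experiment above, pg_dump -Fc should then store the blobs nearly one-to-one.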
regards, tom lane