Re: problems with large objects dump

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Sergio Gabriel Rodriguez <sgrodriguez(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: problems with large objects dump
Date: 2012-09-20 14:35:08
Message-ID: 16863.1348151708@sss.pgh.pa.us
Lists: pgsql-performance

Sergio Gabriel Rodriguez <sgrodriguez(at)gmail(dot)com> writes:
> Our production database, Postgres 8.4, is approximately 200 GB in size,
> and most of that is large objects (174 GB). Until a few months ago we
> used pg_dump to perform backups, and the whole process took about 3-4
> hours. Some time ago the process became interminable, taking one or two
> days; we noticed that performance degrades considerably once the backup
> reaches the large objects, so we had to switch to physical backups.
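
As a point of reference, one way to confirm that large objects dominate the database on an 8.4 server (where the pg_largeobject_metadata catalog does not yet exist) is to query the pg_largeobject catalog directly. A minimal sketch, with connection options omitted; both functions exist in 8.4:

    # how many distinct large objects are stored
    psql -c "SELECT count(DISTINCT loid) FROM pg_largeobject;"

    # on-disk size of the catalog that holds large-object data
    psql -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"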

Hm ... there's been some recent work to reduce O(N^2) behaviors in
pg_dump when there are many objects to dump, but I'm not sure that's
relevant to your situation, because before 9.0 pg_dump didn't treat
blobs as full-fledged database objects. You wouldn't happen to be
trying to use a 9.0 or later pg_dump, would you? Exactly what 8.4.x
release is this, anyway?
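
Both version numbers are easy to check; a minimal sketch, assuming default connection settings (adjust host/user options to taste):

    # version of the pg_dump binary actually being invoked
    pg_dump --version

    # exact server release (the 8.4.x minor version)
    psql -c "SELECT version();"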

regards, tom lane
