From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Practical maximums (was Re: PostgreSQL theoretical
Date: 2006-08-07 17:00:01
Message-ID: 1154970001.12968.17.camel@dogma.v10.wvs
Lists: pgsql-general

On Mon, 2006-07-31 at 09:53 -0500, Ron Johnson wrote:

> > The evasive answer is that you probably don't run regular full pg_dump
> > on such databases.
>
> Hmmm.
>

You might want to use PITR (point-in-time recovery) for incremental
backups, or maintain a standby system using Slony-I ( www.slony.info ).
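
If it helps, the core of a PITR setup is just an archive_command in
postgresql.conf, roughly like this (the archive directory here is only
an example, use whatever storage you like):

    # postgresql.conf -- setting archive_command turns on WAL archiving
    archive_command = 'cp %p /mnt/server/archivedir/%f'
    # %p expands to the path of the WAL segment, %f to its file name

Take a base backup between pg_start_backup('label') and
pg_stop_backup(), keep the archived WAL segments, and you can roll
forward to any later point in time without taking another full dump.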

> >> Are there any plans of making a multi-threaded, or even
> >> multi-process pg_dump?
> >
> > What do you hope to accomplish by that? pg_dump is not CPU bound.
>
> Write to multiple tape drives at the same time, thereby reducing the
> total wall time of the backup process.

pg_dump just produces a stream of output. You could easily stripe that
stream across multiple devices with a small script; make sure you also
write one that can reconstruct the data when you need to restore (a
sketch of both follows below). You don't need a multi-threaded pg_dump,
only a script that splits the output into multiple streams;
multi-threading would only help if pg_dump were CPU-bound.
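
To illustrate the idea (a sketch only, not an existing tool; the chunk
size and device paths are made up), a short Python script can deal the
dump out in round-robin chunks on the way to the drives and interleave
the chunks back together on restore:

    #!/usr/bin/env python
    # stripe.py -- sketch only: round-robin pg_dump's output across
    # several files/devices, and interleave them back for restore.
    #
    #   split: pg_dump mydb | python stripe.py split /dev/nst0 /dev/nst1
    #   join:  python stripe.py join /dev/nst0 /dev/nst1 | psql mydb
    import sys

    CHUNK = 1 << 20  # 1 MB per stripe chunk; an arbitrary choice

    def split(paths):
        # Chunk 0 goes to paths[0], chunk 1 to paths[1], ... wrapping.
        outs = [open(p, 'wb') for p in paths]
        i = 0
        while True:
            chunk = sys.stdin.buffer.read(CHUNK)
            if not chunk:              # EOF on the dump stream
                break
            outs[i % len(outs)].write(chunk)
            i += 1
        for f in outs:
            f.close()

    def join(paths):
        # Read the chunks back in the same round-robin order.
        ins = [open(p, 'rb') for p in paths]
        i = 0
        while True:
            chunk = ins[i % len(ins)].read(CHUNK)
            if not chunk:              # stream ends where a stripe runs dry
                break
            sys.stdout.buffer.write(chunk)
            i += 1
        for f in ins:
            f.close()

    if __name__ == '__main__':
        mode, paths = sys.argv[1], sys.argv[2:]
        (split if mode == 'split' else join)(paths)

A single interleaving process won't keep every drive streaming at full
speed; in practice you'd put a buffering process in front of each
device. But the point stands: the splitting can live entirely outside
pg_dump.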

Doing full backups of that much data is always a challenge, and I don't
think PostgreSQL has any limitations here that other databases don't.

Regards,
Jeff Davis
