| From: | Jeff Davis <pgsql(at)j-davis(dot)com> |
|---|---|
| To: | Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net> |
| Cc: | pgsql general <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: Practical maximums (was Re: PostgreSQL theoretical |
| Date: | 2006-08-07 22:10:11 |
| Message-ID: | 1154988611.12968.56.camel@dogma.v10.wvs |
| Lists: | pgsql-general |
On Mon, 2006-08-07 at 16:53 -0500, Ron Johnson wrote:
> > It will be difficult to have a consistent dump though. You can't do
> > that with separate transactions. (And you can't have multiple
> > simultaneous readers without separate transactions.)
>
> Absolutely, you're right. All "threads" must run from within the
> same read-only transaction.
>
The idea is that pg_dump already works and already creates a good
backup. Why not split up the data after pg_dump produces it? Of course
it should be split up in a stream fashion, like I suggested before in
this thread with my multiplexing script.
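(The multiplexing script itself isn't quoted in this message; as a rough sketch of the stream-splitting idea it describes — the function name, round-robin chunking policy, and command-line usage below are illustrative assumptions, not the actual script:)

```python
import sys

def multiplex(stream, outputs, chunk_size=8192):
    """Distribute fixed-size chunks from `stream` round-robin across `outputs`.

    The whole dump is consumed as a stream, so no single file (or pipe)
    ever has to hold the full backup at once.
    """
    i = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:  # end of stream
            break
        outputs[i % len(outputs)].write(chunk)
        i += 1

if __name__ == "__main__" and len(sys.argv) > 1:
    # hypothetical usage: pg_dump mydb | python multiplex.py part0 part1 part2
    files = [open(name, "wb") for name in sys.argv[1:]]
    try:
        multiplex(sys.stdin.buffer, files)
    finally:
        for f in files:
            f.close()
```

Restoring would mean re-interleaving the parts in the same round-robin order before feeding them to psql/pg_restore; the script actually posted earlier in the thread may handle this differently.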

Regards,
Jeff Davis