Re: pg_dump and large files - is this a problem?

From: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "PostgreSQL Development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump and large files - is this a problem?
Date: 2002-10-03 13:10:48
Message-ID: 5.1.0.14.0.20021003230559.032fd028@mail.rhyme.com.au
Lists: pgsql-hackers

At 11:06 AM 2/10/2002 -0400, Tom Lane wrote:
>It needs to get done; AFAIK no one has stepped up to do it. Do you want
>to?

My limited reading of off_t stuff now suggests that it would be brave to
assume it is even a simple 64-bit number (or even three 32-bit numbers). One
alternative, which I am not terribly fond of, is to have pg_dump write
multiple files - when we get to 1 or 2GB, we just open another file, and
record our file positions as a (file number, file position) pair. Low-tech,
but at least we know it would work.
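
A rough sketch of what that could look like - all names invented for
illustration, and entirely untested:

    /*
     * Offsets are recorded as a (file number, offset within file) pair,
     * so no single file position ever has to exceed the segment limit.
     */
    #include <stdio.h>

    #define SEGMENT_LIMIT (1024L * 1024L * 1024L)  /* 1GB per segment */

    typedef struct
    {
        int   fileNum;   /* which segment file */
        long  offset;    /* position within that segment; fits in 32 bits */
    } ArchivePos;

    typedef struct
    {
        FILE *fh;        /* currently open segment */
        int   fileNum;   /* index of the current segment */
        long  written;   /* bytes written to the current segment */
    } SegmentedFile;

    /* Write, rolling over to a new segment file at the limit. */
    static int
    segWrite(SegmentedFile *sf, const void *buf, size_t len)
    {
        if (sf->written + (long) len > SEGMENT_LIMIT)
        {
            char fname[64];

            fclose(sf->fh);
            snprintf(fname, sizeof(fname), "dump.%03d", ++sf->fileNum);
            if ((sf->fh = fopen(fname, "wb")) == NULL)
                return -1;
            sf->written = 0;
        }
        sf->written += (long) len;
        return fwrite(buf, 1, len, sf->fh) == len ? 0 : -1;
    }

    /* Record the current position as a (file number, position) pair. */
    static ArchivePos
    segTell(const SegmentedFile *sf)
    {
        ArchivePos pos;

        pos.fileNum = sf->fileNum;
        pos.offset = sf->written;
        return pos;
    }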

Unless anyone knows of a documented way to get 64-bit uint/int file
offsets, I don't see that we have much choice.
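
For what it's worth, the large-file route that the Single Unix
Specification does document is defining _FILE_OFFSET_BITS=64 together
with fseeko()/ftello(), which take off_t rather than long - but whether
any given platform actually honours it is exactly the worry. A minimal,
untested sketch:

    /* _FILE_OFFSET_BITS must be defined before any system header. */
    #define _FILE_OFFSET_BITS 64

    #include <stdio.h>
    #include <sys/types.h>

    int
    main(void)
    {
        FILE  *fh = fopen("dump.out", "rb");
        off_t  pos;

        if (fh == NULL)
            return 1;

        /* fseeko()/ftello() use off_t, which should now be 64 bits */
        if (fseeko(fh, (off_t) 0, SEEK_END) != 0)
            return 1;
        pos = ftello(fh);

        printf("file size = %lld bytes\n", (long long) pos);
        fclose(fh);
        return 0;
    }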

----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/
