Re: pg_dump and large files - is this a problem?

From: Giles Lean <giles(at)nemeton(dot)com(dot)au>
To: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
Cc: "PostgreSQL Development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump and large files - is this a problem?
Date: 2002-10-03 21:15:29
Message-ID: 13309.1033679729@nemeton.com.au
Lists: pgsql-hackers


Philip Warner writes:

> My limited reading of off_t stuff now suggests that it would be brave to
> assume it is even a simple 64-bit number (or even 3 32-bit numbers).

What are you reading? If you find a platform with 64-bit file
offsets that doesn't support 64-bit integral types, I will be not
just surprised but amazed.
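
For what it's worth, that assumption is cheap to check at compile
time. Here is a minimal sketch (the typedef name is mine, and it
assumes the compiler has a 64-bit "long long"; it checks width only,
not that off_t is integral):

#include <sys/types.h>

/*
 * Compile-time sanity check: if off_t is no wider than a 64-bit
 * integer, the array size below is 1 and this compiles; on a
 * hypothetical platform where off_t is wider, the size is -1 and
 * the compiler rejects the translation unit.
 */
typedef char off_t_fits_64[(sizeof(off_t) <= sizeof(long long)) ? 1 : -1];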

> One alternative, which I am not terribly fond of, is to have pg_dump
> write multiple files - when we get to 1 or 2 GB, we just open another
> file, and record our file positions as a (file number, file
> position) pair. Low tech, but at least we know it would work.

That does avoid the issue completely, of course, and also sidesteps
the case where the platform supports large files but a particular
filesystem does not.
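
To make the scheme concrete, here is a rough sketch of how the
(file number, file position) bookkeeping could look. None of these
names (SEGMENT_SIZE, DumpPos, dump_write, the "dump.%04d" file name
pattern) are pg_dump's; they're made up for illustration, and it
assumes no single write exceeds the segment size:

#include <stdio.h>
#include <stdlib.h>

#define SEGMENT_SIZE (1024L * 1024L * 1024L)    /* 1 GB per segment */

typedef struct
{
    int  fileno;    /* which segment file we are in */
    long offset;    /* offset within it; always < SEGMENT_SIZE */
} DumpPos;

typedef struct
{
    FILE   *fp;
    DumpPos pos;
} DumpState;

static void
open_segment(DumpState *st)
{
    char name[64];

    snprintf(name, sizeof(name), "dump.%04d", st->pos.fileno);
    st->fp = fopen(name, "wb");
    if (st->fp == NULL)
    {
        perror(name);
        exit(1);
    }
    st->pos.offset = 0;
}

static void
dump_write(DumpState *st, const void *buf, size_t len)
{
    /* roll over to a new segment rather than cross the 1 GB limit */
    if (st->pos.offset + (long) len > SEGMENT_SIZE)
    {
        fclose(st->fp);
        st->pos.fileno++;
        open_segment(st);
    }
    fwrite(buf, 1, len, st->fp);
    st->pos.offset += (long) len;
}

Every position is then a pair of small numbers, so plain 32-bit
arithmetic suffices everywhere, which is the low-tech appeal.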

> Unless anyone knows of a documented way to get 64-bit uint/int file
> offsets, I don't see that we have much choice.

If you're on a platform that supports large files, it will either
have a straightforward 64-bit off_t or else support the "large files
API" that is common on Unix-like operating systems.

What are you trying to do, exactly?

Regards,

Giles
