From: Ole Gjerde <gjerde(at)icebox(dot)org>
To: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] tables > 1 gig
Date: 1999-06-18 18:25:03
Message-ID: Pine.LNX.4.05.9906181318570.13506-100000@snowman.icebox.org
Lists: pgsql-hackers
On Fri, 18 Jun 1999, Bruce Momjian wrote:
[snip - mdtruncate patch]
While we're talking about this whole issue, there is one piece missing:
currently there is no way to dump a database/table over 2 GB. When the
output hits the 2 GB OS file-size limit, pg_dump just silently stops and
gives no indication that it didn't finish.
It's not a problem for me yet, but I'm getting very close. I have one
database with 3 tables over 2 GB (in postgres space), but they still come
out under 2 GB after a dump. I can't do a pg_dump on the whole database,
however, which would be very nice.
I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do
something similar to what postgres does with segments. I haven't looked
at it yet however, so I can't say for sure.
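As a stopgap until pg_dump itself learns to segment, the output can be
segmented at the shell level by piping through split(1), much like the
backend's own file segments. A minimal sketch (my illustration, not from
the mail; the file names and segment size are made up, and a tiny
generated file stands in for real pg_dump output):

```shell
# Stand-in for a large pg_dump output stream (1 MiB of SQL-ish text).
# With a real database this line would be:  pg_dump mydb > dump.sql
yes 'INSERT INTO t VALUES (1);' | head -c 1048576 > dump.sql

# Split into fixed-size segments (256 KiB here; 1 GB-ish in practice)
# so no single file approaches the OS limit.
split -b 262144 dump.sql dump.sql.

ls dump.sql.??            # dump.sql.aa dump.sql.ab dump.sql.ac dump.sql.ad

# Restoring is just concatenating the segments back in order
# (lexical glob order matches split's suffix order):
cat dump.sql.* > rejoined.sql
cmp dump.sql rejoined.sql && echo OK   # prints OK: segments round-trip losslessly
```

In practice one would pipe directly, `pg_dump mydb | split -b 1000m - mydb.dump.`,
so the unsplit file never exists; restore with `cat mydb.dump.* | psql newdb`.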
Comments?
Ole Gjerde