Re: [HACKERS] tables > 1 gig

From: Hannu Krosing <hannu(at)trust(dot)ee>
To: Ole Gjerde <gjerde(at)icebox(dot)org>
Cc: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] tables > 1 gig
Date: 1999-06-19 09:36:18
Message-ID: 376B6492.8BE29B7F@trust.ee
Lists: pgsql-hackers

Ole Gjerde wrote:
>
> On Fri, 18 Jun 1999, Bruce Momjian wrote:
> [snip - mdtruncate patch]
>
> While talking about this whole issue, there is one piece missing.
> Currently there is no way to dump a database/table over 2 GB.
> When it hits the 2GB OS limit, it just silently stops and gives no
> indication that it didn't finish.
>
> It's not a problem for me yet, but I'm getting very close. I have one
> database with 3 tables over 2GB (in postgres space), but they still come
> out under 2GB after a dump. I can't do a pg_dump on the whole database
> however, which would be very nice.
>
> I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do
> something similar to what postgres does with segments. I haven't looked
> at it yet however, so I can't say for sure.
>
> Comments?

As pg_dump writes to stdout, you can just use standard *nix tools:

1. use compressed dumps

pg_dump really_big_db | gzip > really_big_db.dump.gz

reload with

gunzip -c really_big_db.dump.gz | psql newdb
or
cat really_big_db.dump.gz | gunzip | psql newdb

2. use split

pg_dump really_big_db | split -b 1m - really_big_db.dump.

reload with

cat really_big_db.dump.* | psql newdb
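
3. and if even a compressed dump could approach the limit, the two can be
combined (just a sketch, same 1m chunk size assumed, not tested here):

pg_dump really_big_db | gzip | split -b 1m - really_big_db.dump.gz.

reload with

cat really_big_db.dump.gz.* | gunzip | psql newdb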

-----------------------
Hannu
