
Re: cursor interface to libpq

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
Cc: "Kirby Bohling (TRSi)" <kbohling(at)oasis(dot)novia(dot)net>, pgsql-interfaces(at)postgresql(dot)org
Subject: Re: cursor interface to libpq
Date: 2000-09-22 16:31:57
Lists: pgsql-interfaces
Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu> writes:
> afaik this should all work. You can run pg_dump and pipe the output to a
> tape drive or to gzip. You *know* that a real backup will take something
> like the size of the database (maybe a factor of two or so less) since
> the data has to go somewhere.

pg_dump in default mode (i.e., dump data as COPY commands) doesn't have a
problem with huge tables, because the COPY data is just dumped out in a
streaming fashion.

If you insist on using the "dump data as insert commands" option then
huge tables cause a memory problem in pg_dump, but on the other hand you
are going to get pretty tired of waiting for such a script to reload,
too.  I recommend just using the default behavior ...
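[Editor's note: a minimal sketch of the default-mode pipeline discussed above. The database name "mydb" is a placeholder, and the commands assume a reachable PostgreSQL server; nothing here is specific to the 2000-era pg_dump beyond what the message itself states.]

```shell
# "mydb" is a placeholder database name; assumes pg_dump/psql can reach
# a running PostgreSQL server.
#
# Default mode: table data is emitted as COPY blocks and streamed, so
# the pipeline below works for huge tables without pg_dump holding the
# data in memory. Pipe straight to gzip (or a tape device):
pg_dump mydb | gzip > mydb.sql.gz

# Restore is also a stream -- decompress and feed the script to psql:
gunzip -c mydb.sql.gz | psql mydb
```

By contrast, the "dump data as insert commands" option produces a script of individual INSERT statements, which is the slow-to-reload mode Tom recommends against for large tables.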

			regards, tom lane
