
Re: Troubles dumping a very large table.

From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Merlin Moncure" <mmoncure(at)gmail(dot)com>, "Ted Allen" <tallen(at)blackducksoftware(dot)com>
Subject: Re: Troubles dumping a very large table.
Date: 2008-12-29 12:48:37
Lists: pgsql-performance

On Friday, December 26, 2008, Tom Lane wrote:
> Yeah, if he's willing to use COPY BINARY directly.  AFAIR there is not
> an option to get pg_dump to use it.  

Would it be possible to consider such an additional switch to pg_dump?

Of course the DBA has to know when it is safe to use, but if the plan is to be 
able to restore the dump later onto the same machine to recover from some human 
error (oops, forgot the WHERE clause on that DELETE statement), it seems it 
would be a good idea.
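In the meantime, COPY BINARY can be used directly through psql rather than pg_dump. A minimal sketch of a same-machine dump and restore cycle (the table name `bigtable`, database name `mydb`, and output file are hypothetical placeholders; binary format is only safe to reload on a compatible server, same architecture and matching column types):

```shell
# Dump one table in binary format; requires a running PostgreSQL server.
# Binary COPY skips the text conversion overhead of the default format.
psql -d mydb -c "COPY bigtable TO STDOUT WITH BINARY" > bigtable.copy

# Restore into the same (or a truncated) table on the same server.
psql -d mydb -c "COPY bigtable FROM STDIN WITH BINARY" < bigtable.copy
```

Since binary COPY output is not portable across architectures or server versions the way pg_dump's default output is, a pg_dump switch for it would indeed need to carry that caveat.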


