On Tuesday, 26 February 2008, Tom Lane wrote:
> Or in more practical terms in this case, we have to balance
> speed against potentially-large costs in maintainability, datatype
> extensibility, and suchlike issues if we are going to try to get more
> than percentage points out of straight COPY.
Could COPY begin by checking the column types involved and use some internal
knowledge about in-core types to avoid the extensibility costs, if any? OK,
that sounds like a maintainability cost :)
Or maybe just provide an option to pg_dump to force use of the binary COPY
format, which would then allow pg_restore to skip the data parsing altogether.
If that's not the case (if binary COPY still requires data parsing), maybe
it's time for another COPY format to be invented?
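For reference, PostgreSQL's existing binary COPY format (the one `COPY ... BINARY` emits) is already close to "no parsing": a fixed signature, two header words, then length-prefixed raw datums in each type's binary send format. A minimal sketch of encoding and decoding a stream for a single `int4` column, in Python (the helper names are mine, not PostgreSQL's):

```python
import struct

# 11-byte fixed signature that opens every binary COPY stream
SIGNATURE = b'PGCOPY\n\xff\r\n\x00'

def encode_binary_copy(rows):
    """Encode rows of one-int4 tuples in PostgreSQL's binary COPY layout."""
    out = [SIGNATURE, struct.pack('!ii', 0, 0)]   # flags, header-extension length
    for (value,) in rows:
        out.append(struct.pack('!h', 1))          # field count for this tuple
        out.append(struct.pack('!i', 4))          # field length in bytes
        out.append(struct.pack('!i', value))      # raw int4, network byte order
    out.append(struct.pack('!h', -1))             # trailer: field count of -1
    return b''.join(out)

def decode_binary_copy(buf):
    """Decode the stream back, assuming every non-null field is an int4."""
    assert buf[:11] == SIGNATURE
    pos = 19                                      # skip signature + flags + ext length
    rows = []
    while True:
        (nfields,) = struct.unpack_from('!h', buf, pos); pos += 2
        if nfields == -1:                         # trailer reached
            return rows
        row = []
        for _ in range(nfields):
            (flen,) = struct.unpack_from('!i', buf, pos); pos += 4
            if flen == -1:                        # -1 length marks a NULL
                row.append(None)
            else:
                row.append(struct.unpack_from('!i', buf, pos)[0])
                pos += flen
        rows.append(tuple(row))
```

Since the per-datum payload is whatever the type's binary send function produces, a restore that trusts the stream really can skip text parsing; the question above is whether pg_restore could be taught to take advantage of that.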
On binary compatibility between architectures, I'm wondering whether running
pg_dump in binary format from the new architecture couldn't be a solution.
Of course, when you only have the binary archives, have lost server A, and
need to get the data onto server B, which does not share A's architecture,
you're not in a comfortable situation. But a binary option for pg_dump would
make it clear that you don't want to use it for your regular backups...
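To make the architecture concern concrete, here is a small illustration (mine, not from the thread) of why raw host-order binary data is not portable, while a format that pins a byte order, as the binary COPY framing does with network byte order, sidesteps at least the endianness half of the problem:

```python
import struct
import sys

# The same int4 value, encoded in host byte order vs. network (big-endian) order.
native = struct.pack('=i', 1)    # whatever the dumping host happens to use
network = struct.pack('!i', 1)   # fixed big-endian, independent of the host

# Network order always yields the same bytes, so any reader can decode it.
assert network == b'\x00\x00\x00\x01'

# On a little-endian host the two encodings disagree, which is exactly
# what would bite server B when reading server A's host-order bytes.
if sys.byteorder == 'little':
    assert native == b'\x01\x00\x00\x00'
    assert native != network
```

Endianness is only part of it, of course; on-disk datum layouts also vary in alignment and type representation, which is why a raw binary dump stays tied to the dumping architecture.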
And it wouldn't help the case where the data is not coming from PostgreSQL.
That could still be a common enough use case to bother with?
Just trying to put some ideas into the game, hoping this is more helpful than noise.
They did not know it was impossible, so they did it! -- Mark Twain