> Mark Woodward wrote:
>>>>> create table as select ...; followed by a copy of that table
>>>>> if it really is faster than just the usual select & fetch?
>>>> Why "create table?"
>>> Just to simulate and time the proposal.
>>> SELECT ... already works over the network, and if COPY from a
>>> select (which would basically work like yet another wire
>>> protocol) isn't significantly faster, why bother?
>> Because the format of COPY is a common transmitter/receiver for
>> operations like this:
>> pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
>> With a more selective copy, you can use pretty much this mechanism to
>> limit a copy to a subset of the records in a table.
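>> For example, a selective server-to-server transfer might be sketched
>> like this, assuming the proposed COPY-from-a-select form is accepted
>> (the host names, database name, and WHERE clause are placeholders,
>> and the exact syntax is whatever the proposal settles on):
>>
>>   psql -h source masterdb -c \
>>     "COPY (SELECT * FROM mytable WHERE id < 1000) TO STDOUT" \
>>   | psql -h target masterdb -c "COPY mytable FROM STDIN"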
> Ok, but why not just implement this into pg_dump or psql?
> Why bother the backend with that functionality?
Because "COPY" runs on the back-end, not the front end, and the front end
may not even be in the same city as the backend. When you issue a "COPY"
the file it reads or writes local to the backend. True, the examples I
gave may not show how that is important, but consider this:
psql -h remote masterdb -c "COPY (select * from mytable where ID <
xxlastxx) as mytable TO '/replicate_backup/mytable-060602.pgc'"
This runs entirely on the backend and can serve as a running backup.
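And replaying such a file elsewhere should work with the existing COPY ...
FROM form, assuming the dump file has first been shipped to the target
host, since COPY FROM reads a path local to that backend (host name here
is a placeholder; the path and table are from the example above):

  psql -h standby masterdb -c \
    "COPY mytable FROM '/replicate_backup/mytable-060602.pgc'"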