| From: | Andrew Dunstan <andrew(at)dunslane(dot)net> |
|---|---|
| To: | David Fetter <david(at)fetter(dot)org>, Andreas Pflug <pgadmin(at)pse-consulting(dot)de>, Chris Browne <cbbrowne(at)acm(dot)org>, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Possible TODO item: copy to/from pipe |
| Date: | 2006-05-31 20:12:17 |
| Message-ID: | 447DF8A1.7070207@dunslane.net |
| Lists: | pgsql-hackers |
Alvaro Herrera wrote:
> Andrew Dunstan wrote:
>
>>
>> But why is that hugely better than piping psql output to gzip?
>>
>
> psql output has already travelled over the network.
As I understand Tom's suggestion, it does not involve compression of the
over-the-wire data. He suggested that on the server you would be able to do:

    COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';

and that there could be an equivalent extension to psql's \copy command, as an
alternative to doing

    psql -c "COPY mytable TO stdout" mydb | gzip ...

It's the second piece especially that seems unnecessary to me.

So I am still unconvinced.
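For reference, a minimal sketch of the two pipelines being compared. The table
name, database name, and file paths are just the examples used above, and the
second form is only the hypothetical syntax from Tom's suggestion, not
something that exists today:

    # Existing approach: COPY runs on the server, the rows travel over the
    # wire uncompressed, and gzip runs wherever psql runs.
    psql -c "COPY mytable TO stdout" mydb | gzip > mytable.dump.gz

    # Proposed approach (hypothetical syntax from Tom's suggestion): the
    # backend itself pipes the COPY output through gzip, so the compressed
    # file is written on the server host.
    psql -c "COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz'" mydb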
cheers
andrew
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Dave Page | 2006-05-31 20:27:28 | Re: Possible TODO item: copy to/from pipe |
| Previous Message | Tom Lane | 2006-05-31 20:10:42 | Re: Possible TODO item: copy to/from pipe |