On Wed, May 31, 2006 at 11:03:14AM -0400, Tom Lane wrote:
> After re-reading what I just wrote to Andreas about how compression
> of COPY data would be better done outside the backend than inside,
> it struck me that we are missing a feature that's fairly common in
> Unix programs. Perhaps COPY ought to have the ability to pipe its
> output to a shell command, or read input from a shell command.
> Maybe something like
> COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';
That's a great syntax :)
COPY mytable FROM 'create_sample_data --table mytable --rows 10000000 |';
would be cool.
> (I'm not wedded to the above syntax, it's just an off-the-cuff thought.)
It will be familiar to Perl users, for better or worse. Come to that,
should the prefixes > and >> also mean their corresponding shell
redirections?
> Of course psql would need the same capability, since the server-side
> copy would still be restricted to superusers.
> You can accomplish COPY piping now through psql, but it's a bit awkward:
> psql -c "COPY mytable TO stdout" mydb | gzip ...
> Thoughts? Is this worth doing, or is the psql -c approach good enough?
I think it's worth doing :)
David Fetter <david(at)fetter(dot)org> http://fetter.org/
phone: +1 415 235 3778 AIM: dfetter666
Remember to vote!