From: David Fetter <david(at)fetter(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Possible TODO item: copy to/from pipe
Date: 2006-05-31 15:28:33
Message-ID: 20060531152832.GB27220@fetter.org
Lists: pgsql-hackers
On Wed, May 31, 2006 at 11:03:14AM -0400, Tom Lane wrote:
> After re-reading what I just wrote to Andreas about how compression
> of COPY data would be better done outside the backend than inside,
> it struck me that we are missing a feature that's fairly common in
> Unix programs. Perhaps COPY ought to have the ability to pipe its
> output to a shell command, or read input from a shell command.
> Maybe something like
>
> COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';
That's a great syntax :)
Similarly,
COPY mytable FROM 'create_sample_data --table mytable --rows 10000000 |';
would be cool.
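(For the archives: the input-pipe effect is already achievable today from the client side by piping a generator into psql. A minimal sketch, with a trivial awk loop standing in for the hypothetical create_sample_data tool, and no live server assumed:

```shell
# Stand-in for the hypothetical create_sample_data generator:
# emits n tab-separated rows in COPY's default text format.
gen_rows() {
    awk -v n="$1" 'BEGIN { for (i = 1; i <= n; i++) printf "%d\trow%d\n", i, i }'
}

# With a live server, the generated rows would be piped straight in:
#   gen_rows 10000000 | psql -c "COPY mytable FROM stdin" mydb
gen_rows 3
```

The proposed FROM '... |' syntax would fold that pipeline into the COPY command itself, server-side.)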
> (I'm not wedded to the above syntax, it's just an off-the-cuff
> thought.)
It will be familiar to Perl users, for better or worse. Come to that,
should the prefixes > and >> also mean their corresponding shell
things?
> Of course psql would need the same capability, since the server-side
> copy would still be restricted to superusers.
Roight.
> You can accomplish COPY piping now through psql, but it's a bit awkward:
>
> psql -c "COPY mytable TO stdout" mydb | gzip ...
>
> Thoughts? Is this worth doing, or is the psql -c approach good enough?
I think it's worth doing :)
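(For concreteness, the existing client-side workaround round-trips like this; printf stands in for `psql -c "COPY mytable TO stdout" mydb` so the sketch runs without a server:

```shell
# Dump leg: compress COPY text-format output outside the backend.
# printf stands in for: psql -c "COPY mytable TO stdout" mydb
dumpfile=$(mktemp)
printf '1\tfoo\n2\tbar\n' | gzip > "$dumpfile"

# Restore leg: decompress and feed the rows back, i.e. what
#   gunzip -c dump.gz | psql -c "COPY mytable FROM stdin" mydb
# would hand the server.
gunzip -c "$dumpfile"
rm -f "$dumpfile"
```

The proposed TO '| gzip ...' syntax would do the same compression server-side, without shipping the uncompressed stream through psql.)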
Cheers,
D
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
phone: +1 415 235 3778 AIM: dfetter666
Skype: davidfetter
Remember to vote!