
Re: Possible TODO item: copy to/from pipe

From: David Fetter <david(at)fetter(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Possible TODO item: copy to/from pipe
Date: 2006-05-31 15:28:33
Lists: pgsql-hackers
On Wed, May 31, 2006 at 11:03:14AM -0400, Tom Lane wrote:
> After re-reading what I just wrote to Andreas about how compression
> of COPY data would be better done outside the backend than inside,
> it struck me that we are missing a feature that's fairly common in
> Unix programs.  Perhaps COPY ought to have the ability to pipe its
> output to a shell command, or read input from a shell command.
> Maybe something like
> 	COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';

That's a great syntax :)


COPY mytable FROM 'create_sample_data --table mytable --rows 10000000 |';

would be cool.

> (I'm not wedded to the above syntax, it's just an off-the-cuff
> thought.)

It will be familiar to Perl users, for better or worse.  Come to that,
should the prefixes > and >> also mean their corresponding shell
redirections?
> Of course psql would need the same capability, since the server-side
> copy would still be restricted to superusers.
> You can accomplish COPY piping now through psql, but it's a bit awkward:
> 	psql -c "COPY mytable TO stdout" mydb | gzip ...
> Thoughts?  Is this worth doing, or is the psql -c approach good enough?

I think it's worth doing :)
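For anyone following along, the client-side pipeline Tom describes
already works in both directions today.  A minimal sketch, assuming a
database "mydb" with a table "mytable" (names and paths are
illustrative):

```shell
# Dump: stream COPY output through gzip on the client side.
psql -c "COPY mytable TO STDOUT" mydb | gzip > mytable.dump.gz

# Restore: feed the decompressed stream back into COPY FROM STDIN.
gunzip -c mytable.dump.gz | psql -c "COPY mytable FROM STDIN" mydb
```

The proposed syntax would just move the fork/exec of the filter command
inside COPY itself, instead of leaving it to the invoking shell.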

David Fetter <david(at)fetter(dot)org>
phone: +1 415 235 3778        AIM: dfetter666
                              Skype: davidfetter

Remember to vote!
