>> I have not found a solution so far, and for bulk data transfers I have now
>> programmed a workaround.
>> But that is surely based on some component installed on the server,
> Correct. I use a Pyro remote server. On request, this remote server copies
> the relevant rows into a temporary table, uses a copy_to call to push them
> into a StringIO object (that's Python's version of an in-memory file),
> serializes that StringIO object, bz2-compresses it, and transfers the
> whole block via VPN.
> I have read further in this thread, and I plan to check what
> psycopg2 is doing with cursors.
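If I follow that correctly, the server-side part of your workaround would look
roughly like the sketch below in psycopg2 (table name and DSN are invented,
and the Pyro plumbing and the temp-table setup are left out):

    import bz2
    from io import StringIO
    import psycopg2

    def dump_table_compressed(dsn, table):
        # COPY the (temporary) table into an in-memory buffer,
        # then bz2-compress the whole block before sending it over the VPN
        conn = psycopg2.connect(dsn)
        cur = conn.cursor()
        buf = StringIO()
        cur.copy_to(buf, table)   # tab-separated text, like COPY ... TO STDOUT
        conn.close()
        return bz2.compress(buf.getvalue().encode("utf-8"))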
What about an SSH tunnel using data compression?
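Something like this, for instance (host, port and user names are invented):

    # on the client, open a compressed tunnel to the database host:
    #   ssh -C -L 5433:localhost:5432 someuser@dbhost
    import psycopg2

    # then point the client at the local end of the tunnel;
    # ssh's -C option compresses all traffic on the wire
    conn = psycopg2.connect(host="localhost", port=5433,
                            dbname="mydb", user="someuser")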
If you fetch all rows from a query in one go, would it be fast?
Also, PG can now COPY from a query, so you don't really need the temp table.
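For example (query and connection details invented), something along these
lines skips the temporary table entirely:

    import bz2
    from io import StringIO
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    buf = StringIO()
    # COPY straight from a query (PostgreSQL >= 8.2), no temporary table needed
    cur.copy_expert(
        "COPY (SELECT * FROM orders WHERE customer_id = 42) TO STDOUT", buf)
    payload = bz2.compress(buf.getvalue().encode("utf-8"))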