Re: SUBSTRING performance for large BYTEA

From: "Vance Maverick" <vmaverick(at)pgp(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: SUBSTRING performance for large BYTEA
Date: 2007-08-19 05:54:11
Message-ID: DAA9CBC6D4A7584ABA0B6BEA7EC6FC0B5D31FD@hq-exch01.corp.pgp.com
Lists: pgsql-general

Karsten Hilbert writes:
> Well, in my particular case it isn't so much that I *want*
> to access bytea in chunks but rather that under certain
> not-yet-pinned-down circumstances windows clients tend to go
> out-of-memory on the socket during *retrieval* (insertion is
> fine, as is put/get access from Linux clients). Doing
> chunked retrieval works on those boxen, too, so it's an
> option in our application (the user defines a chunk size
> that works, a size of 0 is treated as no-chunking).

This matches my experience with a Java client. Writing the data with
PreparedStatement.setBinaryStream works fine even for large values, but
reading it back with the complementary method, ResultSet.getBinaryStream,
runs into the same memory problem and kills the Java VM.
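
For what it's worth, here is a minimal sketch of the chunked-retrieval loop
Karsten describes, as it might look from a Java client. The table and column
names are hypothetical, and the per-chunk round trip (which in practice would
be a PreparedStatement running something like
"SELECT substring(data FROM ? FOR ?) FROM blobs WHERE id = ?") is abstracted
behind an interface so the loop logic can be shown without a live connection:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class ChunkedByteaRead {

    /** One round trip to the server. In a real client this would execute a
     *  query such as:
     *    SELECT substring(data FROM ? FOR ?) FROM blobs WHERE id = ?
     *  (table/column names hypothetical). Offset is 1-based, matching SQL
     *  substring semantics. */
    interface ChunkFetcher {
        byte[] fetch(int offset, int length);
    }

    /** Assemble the full value from fixed-size chunks, so no single fetch
     *  ever materializes more than chunkSize bytes on the client. */
    static byte[] readInChunks(ChunkFetcher fetcher, int chunkSize) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int offset = 1; // SQL substring is 1-based
        while (true) {
            byte[] chunk = fetcher.fetch(offset, chunkSize);
            if (chunk == null || chunk.length == 0) break; // past end of value
            out.write(chunk, 0, chunk.length);
            if (chunk.length < chunkSize) break; // short chunk: end of value
            offset += chunkSize;
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Fake fetcher standing in for the database round trip.
        byte[] data = new byte[10_000];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        ChunkFetcher fake = (offset, len) -> {
            int start = offset - 1;
            if (start >= data.length) return new byte[0];
            int end = Math.min(start + len, data.length);
            return Arrays.copyOfRange(data, start, end);
        };
        byte[] result = readInChunks(fake, 4096);
        System.out.println(Arrays.equals(result, data)); // prints true
    }
}
```

The short-chunk check doubles as the termination condition, so the loop issues
at most one extra query when the value length is an exact multiple of the
chunk size, and a chunk size of 0 would simply have to be special-cased to a
single whole-value fetch, as in Karsten's application.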

Thanks to all for the useful feedback. I'm going to post a note to the
JDBC list as well to make this easier to find in the future.

Vance
