| From: | Nicholas White <n(dot)j(dot)white(at)gmail(dot)com> |
|---|---|
| To: | Mikko Tiihonen <mikko(dot)tiihonen(at)nitorcreations(dot)com> |
| Cc: | pgsql-jdbc(at)postgresql(dot)org |
| Subject: | Re: Patch: Force Primitives |
| Date: | 2013-03-25 18:24:21 |
| Message-ID: | CA+=vxNZ9K7J+Pay-WqKtTsG0PTf2hxchfyX6ZQHWHtwdNuz-3g@mail.gmail.com |
| Lists: | pgsql-jdbc |
> Do you see the binary transfers activating for array receives if you run
> your prepared statement select in a loop?
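
For context, this is roughly how I'm exercising it (a minimal sketch only; the connection details, table and column names are illustrative, and binaryTransfer=true assumes the binary-transfer support being discussed here):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BinaryActivationCheck {
    public static void main(String[] args) throws SQLException {
        // binaryTransfer enables binary encoding once the statement is server-prepared.
        String url = "jdbc:postgresql://localhost:5432/test?binaryTransfer=true";
        try (Connection conn = DriverManager.getConnection(url, "test", "test");
             PreparedStatement ps = conn.prepareStatement("SELECT arr FROM x")) {
            // Execute the same statement repeatedly so it crosses the driver's
            // prepare threshold; only then do the results come back in binary.
            for (int i = 0; i < 10; i++) {
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rs.getArray(1); // the array receive under test
                    }
                }
            }
        }
    }
}
```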
That's the behaviour I see, although something's setting my m_prepareThreshold to 5 rather than 3. I'm essentially using postgres as a persistent cache for my application server; when my app server starts it loads large amounts of data from postgres using a series of select-*-from-x queries. To minimise network I/O, I'd ideally like a way to ensure I'm using the binary protocol from the very first query. Should I submit another patch that lets you configure this behaviour (either via a new JDBC URL parameter, or based on whether binaryTransfer is explicitly specified)?
Separately, do you know why the current behaviour is the default? Is the binary encoding more expensive (either server-side or client-side) than text encoding the data?
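
In the meantime, the closest I can get with the existing knobs looks like this (a sketch, not a recommendation; the URL, credentials and query are placeholders, and I'm assuming that lowering the existing prepareThreshold connection parameter, or calling PGStatement.setPrepareThreshold, to 1 makes the statement server-prepared, and so binary-eligible, as early as possible):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.postgresql.PGStatement;

public class EarlyBinaryTransfer {
    public static void main(String[] args) throws SQLException {
        // prepareThreshold can be set for the whole connection via the URL...
        String url = "jdbc:postgresql://localhost:5432/test"
                + "?binaryTransfer=true&prepareThreshold=1";
        try (Connection conn = DriverManager.getConnection(url, "test", "test");
             PreparedStatement ps = conn.prepareStatement("SELECT * FROM x")) {
            // ...or per statement through the driver-specific interface
            // (requires the raw pgjdbc statement, not a pool wrapper).
            ((PGStatement) ps).setPrepareThreshold(1);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // consume rows
                }
            }
        }
    }
}
```

A dedicated switch would still be cleaner than leaning on the prepare threshold, which is why I'm asking about another patch.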
Thanks -
Nick