Re: large object I/O seeing \xxx encoding with v3

From: Eric Marsden <emarsden(at)laas(dot)fr>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL Interfaces <pgsql-interfaces(at)postgresql(dot)org>
Subject: Re: large object I/O seeing \xxx encoding with v3
Date: 2004-08-13 15:39:21
Message-ID: wzir7qbcaja.fsf@melbourne.laas.fr
Lists: pgsql-interfaces

>>>>> "tl" == Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

ecm> When using the v3 fe/be protocol, data read from and written to
ecm> large objects via the loread and lowrite functions seems to be
ecm> \xxx encoded, as per literal escaping of data for the BYTEA type.
ecm> For instance, newlines written using lowrite() are later received
ecm> as \012.
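As an aside (not part of the original exchange): the \xxx escaping described
above matches PostgreSQL's bytea "escape" output format, in which a literal
backslash is doubled and non-printable bytes appear as three-digit octal
escapes. A minimal Python sketch of a decoder for that format, assuming
well-formed input:

```python
def decode_bytea_escape(s: str) -> bytes:
    """Decode PostgreSQL bytea escape-format text into raw bytes.

    Rules: '\\\\' is a literal backslash; '\\nnn' (three octal digits)
    is one byte; anything else is taken literally.
    """
    out = bytearray()
    i = 0
    while i < len(s):
        if s[i] == "\\":
            if s[i + 1] == "\\":
                out.append(ord("\\"))   # doubled backslash -> one backslash
                i += 2
            else:
                out.append(int(s[i + 1:i + 4], 8))  # \nnn octal escape
                i += 4
        else:
            out.append(ord(s[i]))       # printable byte, passed through
            i += 1
    return bytes(out)


# The newline example from the message: \012 is octal for '\n'.
print(decode_bytea_escape(r"hello\012world"))
```

This only illustrates the encoding being complained about; requesting binary
results from loread, as suggested below, avoids the round-trip entirely.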

tl> It sounds to me like you have asked for textual rather than binary
tl> results from loread.

You're right; I am sending the format code for the argument as 0, so the result comes back as text.

It seems to me that it would be more useful, and more consistent with
the way text is handled elsewhere in the fe/be protocol, to use the
character encoding requested by the client (the equivalent of
PQsetClientEncoding) instead of this literal \xxx escaping. That would
be a backwards-incompatible change, though.


Thanks,

--
Eric Marsden <URL:http://www.laas.fr/~emarsden/>
