From: Pam Withnall <Pamw(at)zoom(dot)com(dot)au>
To: "'pgsql-bugs(at)postgresql(dot)org'" <pgsql-bugs(at)postgresql(dot)org>
Subject: largeobject.write(byte[] buf)
Date: 2000-10-20 00:31:00
Message-ID: D10ACC8031A3FD47A7860D3590D6A1D1F583@zoom-ads.zoomnet.zoom.com.au
Lists: pgsql-bugs
I am having problems with large objects. I am using the jdbc:postgresql driver
on Linux.
I can insert a large object using PreparedStatement.setBytes(byte[] buf),
and I can read the large object back using an ASCIIStream (they are text objects).
BUT when I use LargeObject.write(byte[] buf), it overwrites my original
object once, then will not overwrite again.
For example: if the large object is the text "aaaaaaaaaaaaa" and I write "bbb" to
the object, the result is "bbbaaaaaaaaaa", which is correct.
THEN, if I write "ccccc" to the same object, the result is "bbbccaaaaaaaa".
It overwrites the part of the object that was there originally, but not the part
I overwrote the first time.
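(For illustration only: the behaviour described above is what any seekable stream shows when its write position advances with each write and is never rewound. The sketch below uses java.io.RandomAccessFile purely as an analogy; it is not the driver's code, and whether LargeObject keeps a position the same way is an assumption here.)

```java
import java.io.File;
import java.io.RandomAccessFile;

public class OverwriteDemo {
    // Simulates repeated overwrites of a seekable stream whose write
    // position advances with every write. Only the first write is
    // preceded by seek(0), mirroring the pattern described above.
    public static String overwriteDemo() {
        try {
            File f = File.createTempFile("lo", ".txt");
            f.deleteOnExit();
            try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                raf.write("aaaaaaaaaaaaa".getBytes()); // original contents
                raf.seek(0);                           // rewind before first overwrite
                raf.write("bbb".getBytes());           // -> "bbbaaaaaaaaaa"
                // No seek(0) here: the next write continues at offset 3,
                // so it clobbers the a's, not the "bbb" written above.
                raf.write("ccccc".getBytes());
                byte[] buf = new byte[(int) raf.length()];
                raf.seek(0);
                raf.readFully(buf);
                return new String(buf);                // "bbbcccccaaaaa"
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(overwriteDemo()); // prints "bbbcccccaaaaa"
    }
}
```

If the driver's LargeObject behaves the same way, calling its seek(0) before each overwrite would be the usual remedy; that is an assumption about the driver, not something verified here.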
If this is not fixable, I can unlink the object and create a new large object,
but when I use \lo_list, the unlinked objects are still listed.
Can I get rid of them from my machine?
Thanks, PW