From: Jan Wieck <jan(at)wi3ck(dot)info>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-21 16:56:02
Message-ID: 5bdcb010-ecdd-c69a-b441-68002fc38483@wi3ck.info
Lists: pgsql-hackers
On 3/21/21 7:47 AM, Andrew Dunstan wrote:
> One possible (probable?) source is the JDBC driver, which currently
> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>
You mean the user is using OID columns pointing to large objects and the
JDBC driver is mapping those for streaming operations?
Yeah, that would explain a lot.
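For context, the pattern in question looks roughly like this (table and column names are hypothetical; the point is that the column is of type oid, e.g. `CREATE TABLE docs (id int PRIMARY KEY, content oid)`, and pgjdbc's Blob accessor streams the referenced large object rather than an inline bytea value):

```java
import java.io.InputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BlobAsLargeObject {
    // Hypothetical schema: CREATE TABLE docs (id int PRIMARY KEY, content oid);
    // The "content" column stores a large-object OID, not the data itself.
    static final String QUERY = "SELECT content FROM docs WHERE id = ?";

    // Calling getBlob() on an oid column makes the JDBC driver wrap the
    // large object behind that OID, so reads go through the server-side
    // large-object API (pg_largeobject) instead of fetching inline bytes.
    static InputStream openContent(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(QUERY);
        ps.setInt(1, id);
        ResultSet rs = ps.executeQuery();
        rs.next();
        Blob blob = rs.getBlob("content"); // streams the large object
        return blob.getBinaryStream();
    }

    public static void main(String[] args) {
        // No live connection here; just show the statement being used.
        System.out.println(QUERY);
    }
}
```

Every row stored this way adds an entry (plus its data pages) to pg_largeobject, which is how an application can quietly accumulate hundreds of millions of large objects for pg_upgrade to chew on.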
Thanks, Jan
--
Jan Wieck
Principal Database Engineer
Amazon Web Services