From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Bill Schneider <bschneider(at)vecna(dot)com>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: JDBC memory usage
Date: 2004-07-24 00:11:34
Message-ID: 4101A936.9040604@opencloud.com
Lists: pgsql-jdbc
Bill Schneider wrote:
> Hello,
>
> I noticed that I keep running out of memory when trying to run a query
> that returns 100,000 rows or so.
>
> Searching the archives, it seems that the JDBC driver reads the entire
> ResultSet into memory at once rather than using cursors. This is
> definitely not the desired behavior for this particular query.
>
> Has this been fixed recently? I'm using PostgreSQL 7.3.4 with the
> corresponding JDBC driver.
Do all of:
- use current development drivers and a 7.4 server, or pre-build-302
drivers and a 7.2/7.3/7.4 server
- call Connection.setAutoCommit(false)
- create statements that produce resultsets of type TYPE_FORWARD_ONLY
- call Statement.setFetchSize(<positive integer>)
- use queries that consist of a single SELECT with no trailing ';' (only
a requirement if using older drivers)
Then the driver should use cursors to batch access to the resultset.
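The checklist above can be sketched as a minimal JDBC program. The connection URL, credentials, table name, and fetch size below are placeholders, not part of the original message; adjust them for your environment.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorFetchExample {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");

        // Cursors only exist inside a transaction, so autocommit must be off.
        conn.setAutoCommit(false);

        // TYPE_FORWARD_ONLY result sets let the driver fetch via a cursor.
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);

        // Fetch rows from the server in batches of 500 rather than all at once.
        stmt.setFetchSize(500);

        // A single SELECT with no trailing ';' (required by older drivers).
        ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // Process each row here; only the current batch is held in memory.
        }
        rs.close();
        stmt.close();

        conn.commit();
        conn.close();
    }
}
```

With these settings the driver declares a server-side cursor and issues FETCH commands behind the scenes, so memory use is bounded by the fetch size instead of the full result set.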
-O