Bill Schneider wrote:
> I noticed that I keep running out of memory when trying to run a query
> that returns 100,000 rows or so.
> Searching the archives, it seems that the JDBC driver reads the entire
> ResultSet into memory at once rather than using cursors. This is
> definitely not the desired behavior for this particular query.
> Has this been fixed recently? I'm using PostgreSQL 7.3.4 with the
> corresponding JDBC driver.
Do all of the following:
- use current development drivers and a 7.4 server, or pre-build-302
drivers and a 7.2/7.3/7.4 server
- call Connection.setAutoCommit(false)
- create statements that produce result sets of type TYPE_FORWARD_ONLY
- call Statement.setFetchSize(<positive integer>)
- use queries that consist of a single SELECT with no trailing ';' (a
requirement only with older drivers)
Then the driver should use cursors to batch access to the result set.
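The checklist above can be sketched roughly like this (the URL, credentials,
table name, and fetch size of 1000 are placeholders, not part of the original
post; adjust for your environment):

```java
import java.sql.*;

// Sketch of cursor-based fetching with the PostgreSQL JDBC driver.
public class CursorFetchExample {

    static void streamRows(Connection conn) throws SQLException {
        // Cursors are only used inside a transaction, so autocommit
        // must be off.
        conn.setAutoCommit(false);

        // The result set must be TYPE_FORWARD_ONLY (this is also the
        // default when createStatement() is called with no arguments).
        try (Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {

            // A positive fetch size tells the driver to pull rows from a
            // cursor in batches instead of reading them all into memory.
            stmt.setFetchSize(1000);

            // A single SELECT, with no trailing ';' (only required on
            // older drivers).
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM big_table")) {
                while (rs.next()) {
                    // Process one row at a time; only about one batch of
                    // rows is held in memory at any point.
                }
            }
        }
        conn.commit();
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            streamRows(conn);
        }
    }
}
```

With all of these conditions met, a query returning 100,000 rows only holds
one fetch-size batch of rows in the client at a time.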