Re: JDBC and processing large numbers of rows

From: Dave Cramer <pg(at)fastcrypt(dot)com>
To: Guido Fiala <guido(dot)fiala(at)dka-gmbh(dot)de>
Cc: "pgsql-jdbc(at)postgresql(dot)org" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: JDBC and processing large numbers of rows
Date: 2004-05-12 10:56:54
Message-ID: 1084359414.1536.149.camel@localhost.localdomain
Lists: pgsql-jdbc

Guido,

No, this isn't the case. If you use cursors inside a transaction, you can
have an arbitrarily large cursor open (of any size, AFAIK).
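
For illustration, a minimal sketch of the cursor-based approach from the
JDBC side (connection details and the table name are placeholders): with
autocommit turned off and a non-zero fetch size on a forward-only
statement, the PostgreSQL driver reads the result set through a cursor in
batches of the given size, so the client never holds all rows at once.

import java.sql.*;

public class CursorFetchExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");

        // Cursor-based fetching requires autocommit to be off...
        conn.setAutoCommit(false);

        Statement st = conn.createStatement();
        // ...and a non-zero fetch size; the driver then fetches rows from
        // the backend in batches of this size instead of all at once.
        st.setFetchSize(50);

        ResultSet rs = st.executeQuery("SELECT * FROM verybigtable");
        while (rs.next()) {
            // Process one row at a time; only roughly one batch of rows
            // is held in client memory at any moment.
        }
        rs.close();
        st.close();

        conn.commit();
        conn.close();
    }
}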

--dc--
On Wed, 2004-05-12 at 02:37, Guido Fiala wrote:
> Reading all this, I'd like to know whether it isn't just a trade-off between
> _where_ the memory is consumed?
>
> If your JDBC client holds everything in memory, it gets an OutOfMemoryError.
>
> If your backend uses cursors, it caches the whole result set, probably
> starts swapping and gets slow (and it needs that memory for every user).
>
> If you use LIMIT and OFFSET, the database has to do more work to find the
> requested slice, and in the worst case (the last few records) it may still need
> to work through nearly the whole result set temporarily? (not sure here)
>
> Is that just a case of "choose your poison"? At least in the first case the
> client's memory _gets_ used too, instead of all the load going to the backend;
> on the other hand, most of the time the user does not really read all the data
> anyway, so it puts unnecessary load on all the hardware.
>
> I'd really like to know what the best way to go is then...
>
> Guido
>
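
For comparison, a hedged sketch of the LIMIT/OFFSET paging described in the
quoted message (table, ordering column and page size are made up for
illustration). The backend still has to work through the first OFFSET rows
for every page, so later pages become progressively more expensive.

import java.sql.*;

public class OffsetPagingExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");

        int pageSize = 100;
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM verybigtable ORDER BY id LIMIT ? OFFSET ?");

        for (int offset = 0; ; offset += pageSize) {
            ps.setInt(1, pageSize);
            ps.setInt(2, offset);
            ResultSet rs = ps.executeQuery();

            int rows = 0;
            while (rs.next()) {
                rows++;
                // Process one row of the current page.
            }
            rs.close();

            // The rows skipped by OFFSET are still computed on the server,
            // so each later page costs more than the one before it.
            if (rows < pageSize) {
                break; // last (possibly partial) page reached
            }
        }

        ps.close();
        conn.close();
    }
}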
--
Dave Cramer
519 939 0336
ICQ # 14675561
