Re: JDBC and processing large numbers of rows

From: Guido Fiala <guido(dot)fiala(at)dka-gmbh(dot)de>
To: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: JDBC and processing large numbers of rows
Date: 2004-05-12 12:31:08
Message-ID: 200405121431.08734.guido.fiala@dka-gmbh.de
Lists: pgsql-jdbc

On Wednesday, 12 May 2004 12:00, Kris Jurka wrote:
> The backend spools to a file when a materialized cursor uses more than
> sort_mem amount of memory. This is not quite the same as swapping as it
> will consume disk bandwidth, but it won't hog memory from other
> applications.

Well, that's good on one side, but from the user's side it's worse:

They will see a large drop in performance (a factor of 1000) as soon as the database starts using the disk for such things. OK - once the database is too large to be held in memory, it is disk-bandwidth-limited anyway...
