From: Joshua Tolley <eggyknap(at)gmail(dot)com>
To: mladen(dot)gogala(at)vmsinfo(dot)com
Cc: "pgsql-novice(at)postgresql(dot)org" <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Sphinx indexing problem
Date: 2010-05-24 12:36:46
Message-ID: AANLkTiktsvCazJ1viLG3BYpynD8ocG2soQK-YR1hoLSA@mail.gmail.com
Lists: pgsql-novice
On Mon, May 24, 2010 at 6:02 AM, Mladen Gogala
<mladen(dot)gogala(at)vmsinfo(dot)com> wrote:
> Joshua Tolley wrote:
>>> Is there anything I can do to prevent the API from attempting to put the
>>> entire query result in memory?
>> Use a cursor, and fetch chunks of the result set one at a time.
> I would have done so, had I written the application. Unfortunately, the
> application was written by somebody else. Putting the entire result set in
> memory is a bad idea, and the Postgres client should be changed, probably by
> adding some configuration options, like the maximum memory that the client is
> allowed to consume and a "swap file". These options should be configurable
> per user, not system-wide. As I said in my post, I do have a solution
> for my immediate problem, but it slows things down:
You're definitely right; the current behavior is painful in some
cases. Using a cursor is the typical solution, where it's possible.
The change you have in mind is on the TODO list (cf.
http://wiki.postgresql.org/wiki/Todo, "Allow statement results to be
automatically batched to the client"); it hasn't been tackled yet.
- Josh
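[Editor's note: the cursor-based chunked fetching suggested earlier in the thread can be sketched with the generic DB-API `fetchmany()` pattern below. sqlite3 is used only as a stand-in so the sketch is self-contained and runnable; with psycopg2 against Postgres, the key point is to use a *named* (server-side) cursor, e.g. `conn.cursor('my_cursor')`, since an unnamed cursor still buffers the whole result set on the client.]

```python
# Sketch: stream a large result set in fixed-size chunks instead of
# loading it all into memory at once. Works with any DB-API 2.0
# cursor (psycopg2, sqlite3, ...); sqlite3 is used here so the
# example runs without a Postgres server.
import sqlite3

def fetch_in_chunks(cursor, size=1000):
    """Yield rows one at a time, pulling `size` rows per fetchmany() call."""
    while True:
        rows = cursor.fetchmany(size)
        if not rows:       # empty list means the result set is exhausted
            break
        yield from rows

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
cur.execute("SELECT n FROM t ORDER BY n")

# Only `size` rows are held in Python memory at any moment.
total = sum(n for (n,) in fetch_in_chunks(cur, size=3))
print(total)  # 0 + 1 + ... + 9 = 45
conn.close()
```

With psycopg2 specifically, a named cursor keeps the result set on the server and `itersize`/`fetchmany()` control the round-trip batch size, which is exactly the manual workaround for the missing automatic batching discussed above.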