From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Federico Di Gregorio <fog(at)dndg(dot)it>
Cc: Marko Kreen <markokr(at)gmail(dot)com>, psycopg(at)postgresql(dot)org
Subject: Re: libpq custom row processing
Date: 2012-08-07 13:34:57
Message-ID: CABUevEzCd8bEzisjhw2TRrYp+7zC5PHz_mXQ+AL=eFqJ_qhUAA@mail.gmail.com
Lists: psycopg
On Tue, Aug 7, 2012 at 3:25 PM, Federico Di Gregorio <fog(at)dndg(dot)it> wrote:
> On 07/08/12 15:14, Marko Kreen wrote:
>> My point is that the behavior is not something completely new,
>> that no-one has seen before.
>>
>> But it's different indeed from libpq default, so it's not something
>> psycopg can convert to using unconditionally. But as optional feature
>> it should be quite useful.
>
> I agree. As an opt-in feature it would be quite useful for large datasets,
> but named cursors already cover that ground. Not that I am against it;
> I'd just like to see why:
>
> curs = conn.cursor(row_by_row=True)
>
> would be better than:
>
> curs = conn.cursor("row_by_row")
>
> Is row by row faster than fetching from a named cursor? Does it add less
> overhead? If that's the case, it would be nice to have it as a feature
> for optimizing queries that return large datasets.
A big win would be that you don't need to keep the whole dataset in
memory, wouldn't it? As you're looping through it, you can throw away
the old results...
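The memory argument can be illustrated without a live database. Below is a pure-Python sketch: `query_rows`, `fetch_all`, and `process_streaming` are illustrative stand-ins invented here, not psycopg or libpq API. The generator models row-by-row delivery, while the list-building version models libpq's default of buffering the entire result set before handing it over.

```python
# Sketch of the memory trade-off being discussed (no real database;
# the function names here are illustrative, not psycopg API).

def query_rows(n):
    """Stand-in for row-by-row mode: rows arrive one at a time."""
    for i in range(n):
        yield (i, "row-%d" % i)

def fetch_all(n):
    """Stand-in for libpq's default: every row is buffered in memory
    before any of them can be processed."""
    return list(query_rows(n))

def process_streaming(n):
    """Row-by-row processing: only one row is alive at a time, so
    already-processed rows can be garbage-collected immediately."""
    total = 0
    for row in query_rows(n):
        total += row[0]
    return total
```

With `fetch_all`, peak memory grows with the result-set size; with `process_streaming`, it stays constant regardless of how many rows the query returns, which is the win Magnus describes.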
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/