
Re: libpq custom row processing

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Federico Di Gregorio <fog(at)dndg(dot)it>
Cc: Marko Kreen <markokr(at)gmail(dot)com>, psycopg(at)postgresql(dot)org
Subject: Re: libpq custom row processing
Date: 2012-08-07 13:34:57
Lists: psycopg
On Tue, Aug 7, 2012 at 3:25 PM, Federico Di Gregorio <fog(at)dndg(dot)it> wrote:
> On 07/08/12 15:14, Marko Kreen wrote:
>> My point is that the behavior is not something completely new,
>> that no one has seen before.
>> But it is indeed different from the libpq default, so it's not something
>> psycopg can switch to unconditionally.  But as an optional feature
>> it should be quite useful.
> I agree. As an opt-in feature it would be quite useful for large
> datasets, but then, named cursors already cover that ground. Not that
> I am against it; I'd just like to see why:
> curs = conn.cursor(row_by_row=True)
> would be better than:
> curs = conn.cursor("row_by_row")
> Is row by row faster than fetching from a named cursor? Does it add less
> overhead? If that's the case, then it would be nice to have it as a
> feature for optimizing queries returning large datasets.

A big win would be that you don't need to keep the whole dataset in
memory, wouldn't it? As you're looping through it, you can throw away
the old results...
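For what it's worth, the trade-off can be sketched in plain Python. This is only an illustration of the memory behavior being discussed, not psycopg or libpq code; `generate_rows`, `fetch_all`, and `fetch_row_by_row` are hypothetical names standing in for the server and the two fetch strategies:

```python
def generate_rows(n):
    """Stand-in for the server sending n result rows."""
    for i in range(n):
        yield (i, "row-%d" % i)

def fetch_all(n):
    # Default libpq behavior: the whole result set is materialized
    # in client memory before the caller sees the first row.
    return list(generate_rows(n))

def fetch_row_by_row(n):
    # Row-by-row mode: each row is handed to the caller as it arrives,
    # so already-processed rows can be garbage-collected.
    for row in generate_rows(n):
        yield row

if __name__ == "__main__":
    rows = fetch_all(1000)                  # 1000 tuples resident at once
    total = sum(r[0] for r in rows)

    streamed = sum(r[0] for r in fetch_row_by_row(1000))  # one tuple at a time
    print(total == streamed)                # same answer, different peak memory
```

Both loops produce the same result; the difference is only in how many rows are resident at once, which is exactly the win for large datasets.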

 Magnus Hagander

