On Tue, Jan 10, 2012 at 11:28 PM, <maplabs(at)light42(dot)com> wrote:
> Hi All,
> I am just returning to a psycopg2 project after a break.
> We are using psycopg2 2.4.2 on Linux, Python 2.7, PostgreSQL 9.1.
> If I make a query that returns truly large results
> (that is, both a lot of data per record and a lot of records),
> what is the best way to handle that?
> Right now I just declare a cursor, call execute(), and then:
> for rec in cursor.fetchall():
>     .... stuff ....
> On the hardware it is on, it is working fine, apparently,
> but it got me wondering:
> fetchone(), fetchmany(..), and fetchall()
> will all basically pull the data at once, yes?
> A server-side cursor is the only way to bring data piece by piece, yes?
Yes, you're on the right track.
fetchone/fetchmany/fetchall on a regular cursor all operate on data
that was already pulled to the client in a single gulp: the transfer
happens when execute() is called.
You can use a named cursor to pull the results a piece at a time, at
which point the methods will do what they say: fetchall will pull
everything together again (so you are back at square one, just worse),
fetchone will pull the records one at a time, with minimal memory
usage on the client but greater overhead (one query/result roundtrip
per record). With fetchmany you can balance memory/time usage. Even
better, from 2.4, if you iterate directly on the cursor (not on the
list returned by fetchall), records are read cursor.itersize at a
time, so you get a handy syntax and optimal memory/time usage:
cur = cnn.cursor('name')
cur.itersize = 100
for record in cur:
    # do stuff

This will bring the records 100 at a time from the server to the client.
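For the fetchmany middle ground mentioned above, the loop can be sketched as follows. The fetch_in_batches helper and the FakeCursor stand-in are illustrative names, not psycopg2 API; the stand-in is included only so the sketch runs without a live database. With psycopg2 you would pass a real named cursor, which exposes the same fetchmany() method:

```python
# Sketch: balancing memory/time by reading a fixed number of rows
# per server roundtrip with fetchmany().

def fetch_in_batches(cur, size=100):
    """Yield records from `cur`, fetching `size` rows per roundtrip."""
    while True:
        batch = cur.fetchmany(size)
        if not batch:        # empty list means the result set is exhausted
            break
        for record in batch:
            yield record

# Stand-in cursor so the sketch is runnable without a database;
# a real psycopg2 named cursor provides fetchmany() with this shape.
class FakeCursor:
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, size):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch

cur = FakeCursor(range(250))
total = sum(1 for _ in fetch_in_batches(cur, size=100))
print(total)  # 250 records, read 100 at a time
```

The generator keeps at most one batch in client memory at any moment, which is the memory/time trade-off described above.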
All the details should be available in the psycopg2 documentation.