
New Python bindings for PostgreSQL

From: Cristian Gafton <gafton(at)rpath(dot)com>
To: pgsql-announce(at)postgresql(dot)org
Subject: New Python bindings for PostgreSQL
Date: 2006-09-04 07:19:03
Lists: pgsql-announce
A few days ago I started off trying to improve the PyGreSQL bindings by 
adding support for prepared statements, bind parameters and server side 
cursors. As I progressed through this task, the changes I have been making 
to PyGreSQL became more and more extensive, modifying the original's 
behavior in incompatible ways.

Things I changed from PyGreSQL 3.8.1:

- compatible only with PostgreSQL 8.0+ and Python 2.3+.

- added support for bind parameters. The bind parameters are sent to the 
server separately from the query, not injected into the SQL query body, so 
there is no need for expensive and error-prone escaping.
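To make the pattern concrete, here is the standard DB API 2.0 parameter-binding idiom, sketched with the stdlib sqlite3 module so it is self-contained; python-pgsql itself uses PostgreSQL's native $1, $2 placeholders (sqlite3 uses ?), but the principle is the same:

```python
import sqlite3

# DB API 2.0 parameter binding: the values travel separately from the
# SQL text, so no manual quoting or escaping of user input is needed.
# (sqlite3 is used here only as a self-contained stand-in; python-pgsql
# would use $1, $2 in place of the ? markers.)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, age INTEGER)")
cur.execute("INSERT INTO users VALUES (?, ?)", ("O'Brien", 42))
cur.execute("SELECT age FROM users WHERE name = ?", ("O'Brien",))
row = cur.fetchone()  # the embedded quote in O'Brien needed no escaping
```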

- added support for prepared statements through an extension of the DB API 
2.0 - db.prepare(sql). SQL statements that get executed 
multiple times with varying parameters can benefit from this.
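The prepare-once, execute-many pattern the db.prepare(sql) extension enables might look roughly like the stand-in below. The PreparedStatement class here is purely illustrative (built on stdlib sqlite3, which re-parses each time); python-pgsql's actual object prepares the statement server-side, which is where the speedup comes from:

```python
import sqlite3

class PreparedStatement:
    """Illustrative stand-in: one SQL string, executed repeatedly with
    varying parameters. python-pgsql prepares it on the server instead."""
    def __init__(self, conn, sql):
        self._cur = conn.cursor()
        self._sql = sql

    def execute(self, params):
        self._cur.execute(self._sql, params)
        return self._cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Prepare once, rebind many times with different parameters:
stmt = PreparedStatement(conn, "SELECT x FROM t WHERE x > ?")
big = stmt.execute((1,))
```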

- support for server-side cursors, useful for looping over large datasets 
one or more rows at a time instead of downloading the entire dataset into 
client memory after each query (which is the default behavior). This is 
implemented through an extension of the DB API 2.0: call 
db.itercursor() instead of db.cursor() to instantiate a cursor.
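From the caller's side the loop is ordinary DB API iteration; the difference is where the rows live. A sketch with stdlib sqlite3 (a server-side cursor via db.itercursor() would fetch batches lazily instead of holding the result set client-side, but the consuming code reads the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(10000)])

cur = conn.cursor()
cur.execute("SELECT n FROM big")

# Iterate row by row rather than calling fetchall() and materializing
# all 10,000 rows at once. With python-pgsql's itercursor(), the rows
# would additionally stay on the server until iterated over.
total = 0
for (n,) in cur:
    total += n
```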

- most common data types (numbers, strings, BYTEA blobs, booleans, etc.) 
are automatically converted to the right Python types as results are 
fetched in the C module. One notable exception is the datetime types, for 
which I have not had time to implement the conversion.[1] The datetime 
types are returned to Python as strings, which you can convert as you 
see fit.

- complex datatypes (arrays, geometric types, composite types) are not 
supported yet - but those can probably be added once I figure out a 
priority (better yet: patches welcome! :-). Those are also returned to 
the Python caller as server-produced strings.

Preserving the old behavior for existing users of PyGreSQL is a challenge, 
seeing how the native PostgreSQL bind parameter syntax ($1, $2, etc) is 
very different from what PyGreSQL simulated (Python-style %s or %(var)s), 
and I don't have an interest in maintaining string-based argument escaping.

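To illustrate why bridging the two styles is unattractive, here is a naive (hypothetical, not part of python-pgsql) rewrite of PyGreSQL-style positional %s placeholders into PostgreSQL's native $1, $2 form:

```python
import re

def pyformat_to_dollar(sql):
    """Naive rewrite of positional %s placeholders into $1, $2, ...
    Illustration only: a real shim would also have to skip %s occurring
    inside string literals, which is exactly the kind of fragile
    SQL-parsing logic this project wants to avoid maintaining."""
    counter = iter(range(1, 1000))
    return re.sub(r"%s", lambda m: "$%d" % next(counter), sql)

converted = pyformat_to_dollar("SELECT * FROM t WHERE a = %s AND b = %s")
```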
Based on all that, it feels like it is better to release this as a new 
project. I'd like to acknowledge and thank the PyGreSQL team for providing 
my starting point.

So, my python-pgsql bindings can be found (for now) here:

I have tried to document the core functionality in the README file. There 
is also a file that shows you the code in action. Take a look at 
the code in - it explains things more quickly for those who 
can grok Python. Please try them out and let me know what you think.


[1] We do not use native database datetime types on fields we use in the 
application. We attempt to support multiple SQL backends - and it seems 
every one has different kinds of datetime fields which behave differently, 
making this a real pain. Thus, we represent datetime stuff as Unix time 
integers in our schema, in a case of "the best solution is the simplest" 
that has worked very well for us so far. YMMV.
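The footnote's approach is easy to sketch: store an integer on the way into the schema and rebuild a datetime on the way out (timezone-aware round-tripping shown here as one reasonable way to do it):

```python
from datetime import datetime, timezone

# Represent the timestamp as a Unix time integer for storage...
dt = datetime(2006, 9, 4, 7, 19, 3, tzinfo=timezone.utc)
ts = int(dt.timestamp())

# ...and convert it back to a datetime when reading it out.
back = datetime.fromtimestamp(ts, tz=timezone.utc)
```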

Cristian Gafton
rPath, Inc.
