Async processing of rows

From: Nat! <nat(at)mulle-kybernetik(dot)com>
To: pgsql-interfaces(at)postgresql(dot)org
Subject: Async processing of rows
Date: 2008-09-15 10:38:19
Message-ID: 228B07DD-DE01-40CA-9385-D5D94DFAFF4A@mulle-kybernetik.com
Lists: pgsql-interfaces

Hi

I will be writing an EOF (http://en.wikipedia.org/wiki/Enterprise_Objects_Framework)
adaptor for Postgres. Due to the way these adaptors are structured, I want to
process the result data row by row and not as one big tuple array. I looked
into libpq, and it seems that this is possible, albeit not without adding
something to the API.
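
To illustrate what I want to avoid, this is roughly the usual pattern (just a
sketch; process_row stands in for whatever the EOF layer does with a row):
the whole result set is materialized before I can look at the first row.

PGresult   *res = PQexec(conn, "SELECT id, name FROM big_table");
int         i;

/* by the time PQexec returns, every row is already buffered in res */
for (i = 0; i < PQntuples(res); i++)
    process_row(PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));

PQclear(res);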

PQgetResult seems to loop as long as PGASYNC_BUSY is set, and that
appears to be set as long as there are rows being sent from the
server. Correct?
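
As far as I can tell, the non-blocking calls don't change that. A sketch of
how I read the current async path (error handling and the select() on the
socket left out):

PGresult   *res;

PQsendQuery(conn, "SELECT id, name FROM big_table");

for (;;)
{
    if (!PQconsumeInput(conn))      /* pull in whatever the server sent */
        break;                      /* connection problem */
    if (PQisBusy(conn))             /* result not complete yet */
        continue;                   /* a real client would select() here */

    res = PQgetResult(conn);
    if (res == NULL)                /* query fully processed */
        break;

    /* all rows of the query are already collected in res at this point */
    PQclear(res);
}

So the rows still reach my code only as one complete PGresult per query.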

So what I think I need to do is write a function PQgetNextResult
that only blocks if there is not enough data available to read in
one row.
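
The calling pattern I am aiming for would then look roughly like this
(PQgetNextResult does not exist yet, it is the function I would add;
process_row is again just a placeholder):

PGresult   *row;

PQsendQuery(conn, "SELECT id, name FROM big_table");

/* one tuple per PGresult, NULL once the query is done (details open) */
while ((row = PQgetNextResult(conn)) != NULL)
{
    process_row(PQgetvalue(row, 0, 0), PQgetvalue(row, 0, 1));
    PQclear(row);
}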

A cursory glance at pqParseInput3 shows that I can't call it with
incomplete input, as data is discarded even if the parse is
incomplete. In particular, this piece of code discards 'id' if
msgLength cannot be read completely, which makes me wary:

conn->inCursor = conn->inStart;
if (pqGetc(&id, conn))
    return;
if (pqGetInt(&msgLength, 4, conn))
{
    /* (nat) expected to see: pqUngetc(id, conn); */
    return;
}
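
So before handing anything to the parser, my PQgetNextResult would first have
to make sure that a complete message is buffered. Roughly what I have in mind
(untested sketch, based on the PGconn fields I see in libpq-int.h and the v3
framing of one type byte plus a 4-byte length that counts itself;
haveCompleteMessage is just my working name):

static int
haveCompleteMessage(PGconn *conn)
{
    int         msgLength;

    /* need at least the type byte and the 4-byte length word */
    if (conn->inEnd - conn->inStart < 5)
        return 0;

    /* the length word follows the 1-byte message type and counts itself */
    memcpy(&msgLength, conn->inBuffer + conn->inStart + 1, 4);
    msgLength = ntohl(msgLength);

    /* a complete message is the type byte plus msgLength bytes */
    return (conn->inEnd - conn->inStart) >= 1 + msgLength;
}

If that check fails, PQgetNextResult would go back to reading from the socket
instead of calling pqParseInput3.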

So am I missing something, or is this basically correct?

Ciao
Nat!
----------------------------------------------
I'd like to fly
But my wings have been so denied -- Cantrell
