Re: Speed dblink using alternate libpq tuple storage

From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)oss(dot)ntt(dot)co(dot)jp>
To: markokr(at)gmail(dot)com
Cc: greg(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org, mmoncure(at)gmail(dot)com, shigeru(dot)hanada(at)gmail(dot)com
Subject: Re: Speed dblink using alternate libpq tuple storage
Date: 2012-03-08 05:35:21
Message-ID: 20120308.143521.11600191.horiguchi.kyotaro@oss.ntt.co.jp
Lists: pgsql-hackers

Hello,

> > # - We expect PQisBusy(), PQconsumeInput()(?) and
> > # - PQgetResult() to exit immediately and we can
> > # - call PQgetResult(), PQskipResult() or
> > # - PQisBusy() after.
> > | 1 - OK ("I'm done with the row")
> > | - save result and getAnotherTuple returns 0.
> >
> > The lines prefixed with '#' are the desirable behavior I have
> > understood from the discussion so far. And I doubt that it works
> > as we expect for PQgetResult().
>
> No, the desirable behavior is already implemented and documented -
> the "stop parsing" return code affects only PQisBusy(). As that
> is the function that does the actual parsing.

I am satisfied with your answer. Thank you.
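
Just to restate my understanding in code, here is a rough sketch of an
application-side callback under this scheme. The PGdataValue type and the
callback signature are taken from the proposed patch (they are not part of
released libpq), and RowBuffer/MAX_BUFFERED are only illustrations:

    #include <libpq-fe.h>

    #define MAX_BUFFERED 100

    typedef struct
    {
        int     nrows;      /* rows the application has buffered so far */
    } RowBuffer;

    /*
     * Sketch only: PGdataValue and this callback signature come from the
     * proposed row-processor patch, not from released libpq.
     */
    static int
    my_rowproc(PGresult *res, const PGdataValue *columns, void *param)
    {
        RowBuffer  *buf = param;

        if (buf->nrows >= MAX_BUFFERED)
            return 0;       /* suspend: PQisBusy() returns early to the caller */

        /* ... copy columns[i].value / columns[i].len for each field ... */
        buf->nrows++;
        return 1;           /* "I'm done with the row": keep parsing */
    }

After a suspend the application drains its buffer and simply calls
PQconsumeInput()/PQisBusy() again; parsing resumes where it stopped, and
PQgetResult() itself behaves just as before.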

> The main plus of such a scheme is that we do not change the behaviour
> of any existing APIs.

I agree with the policy.

> You optimized libpq_gettext() calls, but it's wrong - they must
> wrap the actual strings so that the strings can be extracted
> for translating.

Ouch. I remember having had a build error about that before...

> Fixed in attached patch.

I'm sorry to annoy you.

> > > My suggestion - check in getAnotherTuple whether resultStatus is
> > > already error and do nothing then. This allows internal pqAddRow
> > > to set regular "out of memory" error. Otherwise give generic
> > > "row processor error".
..
> The suggestion was about getAnotherTuple() - currently it sets
> always "error in row processor". With such check, the callback
> can set the error result itself. Currently only callbacks that
> live inside libpq can set errors, but if we happen to expose
> error-setting function in outside API, then the getAnotherTuple()
> would already be ready for it.

I see. And I found it implemented in your patch.
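
So, to make sure I read the new behavior correctly: an outside callback just
returns -1 and gets the generic "error in row processor" message, while a
callback living inside libpq (like pqAddRow) can set a more specific error
first, and with your check getAnotherTuple() no longer overwrites it. A tiny
sketch of the outside case (again using the signature from the proposed
patch; the allocation is only illustrative):

    #include <stdlib.h>

    static int
    failing_rowproc(PGresult *res, const PGdataValue *columns, void *param)
    {
        char  **values = malloc(PQnfields(res) * sizeof(char *));

        if (values == NULL)
            return -1;      /* getAnotherTuple() then reports the generic
                             * "error in row processor" message, since an
                             * outside callback cannot set its own yet */

        /* ... copy columns[i].value / columns[i].len into values ... */
        free(values);
        return 1;
    }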

> See attached patch, I did some generic comment/docs cleanups
> but also minor code cleanups:
>
> - PQsetRowProcessor(NULL,NULL) sets Param to PGconn, instead NULL,
> - pqAddRow sets "out of memory" error itself on PGconn.
> - getAnotherTuple(): when callback returns -1, it checks if error
> - dropped the error_saveresult label, it was unnecessary branch.

Ok, I've confirmed them.
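
One small thing I tried while confirming the first point: resetting to the
default processor looks like this, and the param indeed ends up being the
PGconn itself (sketch only; PQsetRowProcessor() is from the proposed patch,
and my_rowproc/buf are the illustrative names from above):

    PQsetRowProcessor(conn, my_rowproc, &buf);  /* install the callback */
    /* ... run queries ... */
    PQsetRowProcessor(conn, NULL, NULL);        /* back to the built-in processor;
                                                 * per the patch, the param then
                                                 * defaults to the PGconn itself */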

> - put libpq_gettext() back around strings.
> - made functions survive conn==NULL.
> - dblink: refreshed regtest result, as now we get actual

Thank you for fixing my bugs.

> - dblink: changed skipAll parameter for PQskipResult() to TRUE,
> as dblink uses PQexec which can send several queries.

I agree with the change. I had intended to allow receiving the remaining
results after skipping the current one on failure, but that case seems
not only unlikely but also rather dangerous.
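
For the record, the safe handling looks roughly like this in dblink terms
(sketch only; PQskipResult() and its skipAll argument exist only in the
proposed patch, and "row_processor_failed" is just a placeholder condition):

    /*
     * Cleanup after a row-processor failure in dblink-like code.  PQexec()
     * may have sent a multi-statement query string, so more than one result
     * can still be pending on the connection; skipping only the current one
     * would leave the connection out of sync.
     */
    if (row_processor_failed)
        PQskipResult(conn, 1);  /* skipAll = TRUE: discard the current result
                                 * and every following one, leaving the
                                 * connection idle and reusable */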

> - Synced PQgetRow patch with return value changes.
>
> - Synced demos at https://github.com/markokr/libpq-rowproc-demos
> with return value changes.

> I'm pretty happy with current state. So tagging it
> ReadyForCommitter.

Thank you very much for all your help.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
