SRF optimization question

From: Jeremy Drake <pgsql(at)jdrake(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: SRF optimization question
Date: 2007-02-03 23:51:38
Message-ID: Pine.BSO.4.64.0702031543010.28908@resin.csoft.net
Lists: pgsql-hackers

I am writing a set-returning function in C. There are cases where I can
know definitively, up front, that the function will return only one row.
I noticed, through the happenstance of a partially converted function, that
I can mark a normal, non-set-returning function as returning SETOF
something, while not using the SRF macros and just using PG_RETURN_DATUM,
and it still works, returning one row.
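
For concreteness, here is a stripped-down sketch of what I mean (the
function name and the int4 payload are made up; my real function returns
something else):

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(one_row_only);

/* Declared as:
 *   CREATE FUNCTION one_row_only(int4) RETURNS SETOF int4
 *     AS 'MODULE_PATHNAME' LANGUAGE C;
 */
Datum
one_row_only(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);

    /* No SRF_FIRSTCALL_INIT / SRF_RETURN_NEXT / SRF_RETURN_DONE;
     * just return the single datum directly. */
    PG_RETURN_INT32(arg + 1);
}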

I am wondering whether it is an acceptable optimization, when I know up
front that a function will return only one row, to skip all of the SRF
overhead (setting up a new memory context and a function context struct,
plus the extra call to my function to tell Postgres that I am done
sending rows) and simply interact with Postgres as though I were not
returning SETOF. Is this a sane idea, or did I just stumble into an
accidental feature when I changed my CREATE FUNCTION statement without
changing my C code?
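
To be clear, this is the kind of boilerplate I would otherwise write for
the one-row case (again just a sketch, same made-up function):

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"

PG_FUNCTION_INFO_V1(one_row_srf);

Datum
one_row_srf(PG_FUNCTION_ARGS)
{
    FuncCallContext *funcctx;

    if (SRF_IS_FIRSTCALL())
    {
        /* sets up the per-query memory context and context struct */
        funcctx = SRF_FIRSTCALL_INIT();
        funcctx->max_calls = 1;     /* we already know it is one row */
    }

    funcctx = SRF_PERCALL_SETUP();

    if (funcctx->call_cntr < funcctx->max_calls)
        SRF_RETURN_NEXT(funcctx, Int32GetDatum(PG_GETARG_INT32(0) + 1));
    else
        SRF_RETURN_DONE(funcctx);   /* the extra call just to say "done" */
}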

--
UNIX was half a billion (500000000) seconds old on
Tue Nov 5 00:53:20 1985 GMT (measuring since the time(2) epoch).
-- Andy Tannenbaum
