Re: Custom allocators in libpq

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Aaron Patterson <tenderlove(at)ruby-lang(dot)org>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Custom allocators in libpq
Date: 2017-08-29 01:26:13
Message-ID: CAMsr+YHK0uR28=cq0jG4WY+jSyaHq1W8s6CPYyeMriyanTq4gQ@mail.gmail.com
Lists: pgsql-hackers

On 29 August 2017 at 05:15, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com> writes:
> > On 8/28/17 15:11, Tom Lane wrote:
> >> ... but it seems like you're giving up a lot of the possible uses if
> >> you don't make it apply uniformly. I admit I'm not sure how we'd handle
> >> the initial creation of a connection object with a custom malloc. The
> >> obvious solution of requiring the functions to be specified at PQconnect
> >> time seems to require Yet Another PQconnect Variant, which is not very
> >> appetizing.
>
> > I would have expected a separate function just to register the callback
> > functions, before doing anything else with libpq. Similar to libxml:
> > http://xmlsoft.org/xmlmem.html
>
> I really don't much care for libxml's solution, because it implies
> global variables, with the attendant thread-safety issues. That's
> especially bad if you want a passthrough such as a memory context
> pointer, since it's quite likely that different call sites would
> need different passthrough values, even assuming that a single set
> of callback functions would suffice for an entire application.
> That latter assumption isn't so pleasant either. With such a
> solution, one could easily see postgres_fdw breaking, say, a
> libpq-based DBI library inside plperl.

Yeah, the 'register a malloc() function pointer in a global via a
registration function call' approach seems fine and dandy until you find
yourself with an app that, via shared library loads, has more than one
user of libpq, each with its own ideas about memory allocation.

RTLD_LOCAL can help, but may introduce other issues.
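To illustrate (none of this exists in libpq; the function and type names are
invented for the sketch), a libxml-style hook amounts to a process-wide
global, so whoever registers last wins:

    /* Hypothetical sketch only - libpq has no such API. */
    #include <stddef.h>

    typedef void *(*PQmallocFunc)(size_t size, void *passthrough);
    typedef void (*PQfreeFunc)(void *ptr, void *passthrough);

    /* One set of hooks for the entire process. */
    static PQmallocFunc pq_malloc_hook;
    static PQfreeFunc pq_free_hook;
    static void *pq_alloc_passthrough;

    void
    PQsetAllocFuncs(PQmallocFunc malloc_fn, PQfreeFunc free_fn,
                    void *passthrough)
    {
        pq_malloc_hook = malloc_fn;
        pq_free_hook = free_fn;
        pq_alloc_passthrough = passthrough;
    }

If postgres_fdw registers its hooks and then a DBI driver loaded through
plperl calls the same function, the second call silently replaces the first
caller's allocator and passthrough pointer for every libpq user in the
process, and there's no pleasant locking story for reads of those globals
either.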

So there doesn't seem to be much way around another PQconnect variant. Yay?
We could switch to a struct-passing argument model, but by the time you add
the necessary "nfields" argument so libpq knows how much of the struct it can
safely access, etc, just adding new connect functions starts to look good in
comparison.
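To make that comparison concrete, here's roughly what the two shapes might
look like, reusing the callback typedefs from the sketch above and assuming
libpq-fe.h for PGconn; neither function exists, and the names and fields are
invented:

    /* (a) Yet Another PQconnect Variant: allocator supplied per connection. */
    PGconn *PQconnectdbParamsAlloc(const char *const *keywords,
                                   const char *const *values,
                                   int expand_dbname,
                                   PQmallocFunc malloc_fn,
                                   PQfreeFunc free_fn,
                                   void *passthrough);

    /* (b) Struct-passing: extensible, but the caller has to declare how
     * much of the struct it filled in so a newer libpq doesn't read past
     * what an older caller allocated. */
    typedef struct PQconnectOptions
    {
        int          nfields;       /* number of valid fields that follow */
        PQmallocFunc malloc_fn;
        PQfreeFunc   free_fn;
        void        *passthrough;
    } PQconnectOptions;

    PGconn *PQconnectdbParamsExt(const char *const *keywords,
                                 const char *const *values,
                                 int expand_dbname,
                                 const PQconnectOptions *opts);

Either way the allocator travels with the PGconn rather than living in a
global.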

Which reminds me, it kind of stinks that PQconnectdbParams and PQpingParams
accept key and value char* arrays, but PQconninfoParse produces a
PQconninfoOption*. That makes it seriously annoying when you want to parse a
connstring, apply some transformations, and pass the result to a connect
function. I pretty much always just put the user's original connstring in
'dbname' and set expand_dbname = true instead.
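For what that workaround looks like in practice (the application_name
override is just an example of a transformation layered on top of the user's
connstring):

    #include <libpq-fe.h>

    /* Let libpq parse the user's connstring itself: pass it as 'dbname'
     * with expand_dbname set, and append any overrides after it; when a
     * keyword is repeated the later entry wins, so application_name here
     * overrides any application_name inside connstring. */
    static PGconn *
    connect_from_connstring(const char *connstring)
    {
        const char *keywords[] = {"dbname", "application_name", NULL};
        const char *values[]   = {connstring, "my_app", NULL};

        return PQconnectdbParams(keywords, values, /* expand_dbname */ 1);
    }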

It might make sense to have any new function accept PQconninfoOption*. Or
perhaps a variant of PQconninfoParse that populates k/v arrays with 'n' extra
fields allocated and zeroed on return, I guess; a rough sketch follows.
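Something along these lines, say (name, signature, and return convention all
invented here):

    /* Hypothetical: parse conninfo into NULL-terminated keyword/value
     * arrays, with n_extra additional slots allocated and zeroed at the
     * end so the caller can append overrides before handing the arrays
     * to PQconnectdbParams. Returns 0 and sets *errmsg on failure. */
    int PQconninfoParseParams(const char *conninfo,
                              char ***keywords,
                              char ***values,
                              int n_extra,
                              char **errmsg);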

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
