Re: "Dynamic routing" to different databases

From: Igor Neyman <ineyman(at)perceptron(dot)com>
To: Alex Balashov <abalashov(at)evaristesys(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: "Dynamic routing" to different databases
Date: 2015-10-01 13:44:44
Message-ID: A76B25F2823E954C9E45E32FA49D70ECCD54D23C@mail.corp.perceptron.com
Lists: pgsql-admin

-----Original Message-----
From: pgsql-admin-owner(at)postgresql(dot)org [mailto:pgsql-admin-owner(at)postgresql(dot)org] On Behalf Of Alex Balashov
Sent: Wednesday, September 30, 2015 7:24 PM
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] "Dynamic routing" to different databases

After some more research, my conclusion - unless anyone has better suggestions - is to just bite the bullet and partition these multiple databases into schemas.

That's going to take a lot of code modification, but not nearly as much as a full-on application-layer multitenant alternative.
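
Concretely, the sort of layout I mean (schema and table names below are purely illustrative):

  -- One schema per tenant inside the existing database, each holding
  -- its own copy of the application tables:
  CREATE SCHEMA tenant_domain_com;
  CREATE TABLE tenant_domain_com.tbl (LIKE public.tbl INCLUDING ALL);

  -- Naively, every table reference in the stored procedures would then
  -- need to be schema-qualified, hence the volume of code changes:
  SELECT * FROM tenant_domain_com.tbl;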

On 09/30/2015 11:23 AM, Alex Balashov wrote:

> Hello all,
>
> We have a rather complex service delivery application that is
> presently designed with a "single-tenant" use case in mind: one
> self-contained PostgreSQL database with all necessary tables, and one
> service consuming that database, configured via a static connection string.
>
> We are trying to figure out the most economical technical
> possibilities for retrofitting "multi-tenant" functionality into it;
> you might think of it as a kind of virtual hosting, where different
> requests coming into the consuming service, segregated by, say,
> request URI domain, result in very different behaviour.
>
> Obviously, one way to do this is to add an additional layer of
> abstraction at the database schema and the application level itself,
> creating nested entities and so forth. This is impractical, however;
> it would be nothing short of a wholesale overhaul of the entire
> codebase (which consists in large part of PostgreSQL stored procedures
> for the business layer).
>
> So, what we're trying to figure out is the feasibility of grafting
> multi-tenant functionality into the existing paradigm. For instance,
> we could have a "meta-data" database with a table that defines DNS
> domains and maps them to other database names/connection info on the
> same server. We could then create a new and self-contained database
> for each "tenant". I think that would make the most sense by far.
>
> The major limitation is that the consuming service can connect to one
> database and one database only, so what is needed is some way to
> present foreign databases through one PostgreSQL connection handle to
> one particular database.
>
> The dblink facility seems to provide ways of doing this, but what I am
> not clear on is: 1) how transparently the foreign database can be
> rendered, vs. having to graft in lots of dblink_*() calls and 2)
> whether there is a significant performance hit in using dblink,
> assuming the "remote" tenant databases are located on the same
> cluster. Can this be done using foreign data wrappers or something of the sort, for instance?
>
> Failing that, is there some functionality in PgPool or another
> connection proxy which can effectively take
>
> SELECT * FROM tbl META_ATTRIB = 'domain.com';
>
> and transform this into
>
> SELECT * FROM tbl;
>
> on the appropriate database for 'domain.com', which is a mapping that
> would be maintained in something that is, at least in principle,
> dynamically reloadable?
>
> Advice much appreciated!
>
> -- Alex
>
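
Regarding the foreign data wrapper question quoted above: postgres_fdw can expose a tenant database's tables through the single connection, roughly as below. This is an untested sketch; the server, user mapping, column list, and table names are hypothetical.

  -- In the database the service connects to, describe each tenant
  -- database as a foreign server:
  CREATE EXTENSION IF NOT EXISTS postgres_fdw;

  CREATE SERVER tenant_domain_com_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'tenant_domain_com');

  CREATE USER MAPPING FOR app_user
    SERVER tenant_domain_com_srv
    OPTIONS (user 'app_user', password 'secret');

  -- Surface its tables as foreign tables in a per-tenant schema:
  CREATE SCHEMA tenant_domain_com;
  CREATE FOREIGN TABLE tenant_domain_com.tbl (id integer, payload text)
    SERVER tenant_domain_com_srv
    OPTIONS (schema_name 'public', table_name 'tbl');

  -- The "meta-data" mapping of request domains to tenant schemas:
  CREATE TABLE tenant_map (
    domain      text PRIMARY KEY,
    schema_name text NOT NULL
  );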

Why should it "take a lot of code modification"?
Isn't modifying "search_path" enough?
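
For instance (names illustrative), with one schema per tenant the existing unqualified queries and stored procedures could be left alone:

  -- Resolve the tenant once per connection/request and point the
  -- search path at its schema:
  SET search_path TO tenant_domain_com, public;
  SELECT * FROM tbl;   -- now resolves to tenant_domain_com.tbl

  -- Or pin it per login role, so the application never issues the SET:
  ALTER ROLE domain_com_user SET search_path = tenant_domain_com, public;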

Regards,
Igor Neyman
