Re: looking for a secure

From: Paul Ramsey <pramsey(at)refractions(dot)net>
To: Fran Fabrizio <ffabrizio(at)mmrd(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: looking for a secure
Date: 2001-07-31 17:08:27
Message-ID: 3B66E60B.8E3F337C@refractions.net
Lists: pgsql-general

Fran,
Is it truly necessary for your 10,000 client programs to interact
*directly* with the database? If their mode of operation is going to be
relatively simple (send this structured upload, receive this structured
download) would it not be better to use a web-based script as the
interface to the clients, and have that script do the actual database
manipulations? The clients could POST upload data to the script (as an
XML fragment, for example) and download data from the script. The script
would be exposed to the outside world, not the database, and could have
some data validation routines to ensure that nasty people are not
hacking at your interface. And if it gets compromised, it's running
under web server permissions, not database permissions. Also, the
database can then be placed back behind the firewall, with the script
playing middleman.
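To make that concrete, here is a rough sketch of such a middleman
script. I have used Python with the psycopg2 driver purely for
illustration; any CGI-capable language works, and every host, table,
and field name here is invented:

    #!/usr/bin/env python
    # middleman.py: the only thing the 10,000 clients ever talk to.
    # Receives a POSTed log message, validates it, and writes it to
    # the database sitting safely behind the firewall.
    import cgi
    import psycopg2

    def main():
        form = cgi.FieldStorage()
        site_id = form.getfirst("site_id", "")
        message = form.getfirst("message", "")

        # Validate before touching the database; reject anything odd.
        if not site_id.isdigit() or not message or len(message) > 4096:
            print("Status: 400 Bad Request")
            print()
            return

        # Only this web host can reach the database at all.
        conn = psycopg2.connect("host=internal-db dbname=fielddb user=webscript")
        cur = conn.cursor()
        # Parameterized insert: client data never becomes SQL text.
        cur.execute("INSERT INTO log_messages (site_id, message) VALUES (%s, %s)",
                    (int(site_id), message))
        conn.commit()
        conn.close()
        print("Content-Type: text/plain")
        print()
        print("OK")

    if __name__ == "__main__":
        main()

The same script (or a sibling of it) can serve the patch downloads
with a SELECT instead of an INSERT.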
The other nice thing about this scenario is that if you decide in 18
months that PostgreSQL 8 or whatever is the greatest thing since sliced
bread, but the PG wire protocol has changed, you can upgrade your server
without having to upgrade your clients (the PG team have been good about
not monkeying with the wire protocol though, so this is probably moot).
Paul

Fran Fabrizio wrote:
>
> Hi,
>
> I apologize in advance for the long post but I'm faced with a thorny
> problem and my lack of experience is showing. We are trying to figure
> out the best arrangement for our Pg server. We have what appears to me
> to be a difficult-to-satisfy set of security requirements/implications.
> Here's our setup...
>
> We have a Pg database here at the central office. We have two main
> groups of clients that need to talk to this database.
>
> In the field, we have/need to plan for upwards of 10,000 client sites
> that need to both put data into (log messages) and take data out of
> (downloading patches) this database. These 10,000 clients are talking
> to us over the internet, not a private network of any sort.
>
> We also have a couple of dozen people who need to access a different
> section of the db through a web interface.
>
> Some of the data in the database is of a sensitive nature (IP addresses,
> account names and passwords to connect to the client sites).
>
> So, the challenge is to provide access to this db from the internet
> while making it reasonably hard to allow the wrong people to get
> access. We've been tossing over a lot of different scenarios, and
> here's what we've come up with so far:
>
> Scenario 1: We put a Pg server outside our firewall, and another one
> behind it. The outer database contains only a subset of the total db
> schema, just enough to receive the log messages and provide the patches
> that need to be available for download. The internet clients connect to
> this outer database. The internal database contains the full db
> schema. Then, the log and patch tables are replicated over to the
> internal, main database.
>
> Cons: replication doesn't seem to be a solid product yet, would require
> two-way replication (log messages need to be moved internally, new
> available patches need to be moved onto the outer db), means we have two
> databases to maintain
>
> Scenario 2: Same hardware setup as Scenario 1 but instead of
> replication we have a cron'ed perl script or psql script or something
> similar select from one db and insert into the other, and vice versa.
>
> Cons: still have two separate databases, not real time, seems like a
> hack to me
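
For what it's worth, that copy job is only a dozen lines. A sketch,
again in Python/psycopg2 with invented names, assuming the outer
table carries a boolean 'transferred' flag so rows are copied only
once:

    #!/usr/bin/env python
    # sync_logs.py: run from cron; moves new log rows from the outer
    # (internet-facing) database to the inner one.
    import psycopg2

    outer = psycopg2.connect("host=outer-db dbname=fielddb user=syncjob")
    inner = psycopg2.connect("host=inner-db dbname=maindb user=syncjob")
    ocur, icur = outer.cursor(), inner.cursor()

    ocur.execute("SELECT id, site_id, message FROM log_messages"
                 " WHERE NOT transferred")
    for row_id, site_id, message in ocur.fetchall():
        icur.execute("INSERT INTO log_messages (site_id, message)"
                     " VALUES (%s, %s)", (site_id, message))
        ocur.execute("UPDATE log_messages SET transferred = true"
                     " WHERE id = %s", (row_id,))

    # Commit the inner side first: a crash between the two commits
    # means some rows get copied twice, not lost.
    inner.commit()
    outer.commit()
    outer.close()
    inner.close()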
>
> Scenario 3: Punch a hole through the firewall (or move the main Pg
> database outside of it) and make the main database available on the
> internet.
>
> Cons: security implications
>
> Scenario 3 seems the most elegant to me. It avoids having to set up
> some sort of replication/copying scheme and having the same data stored
> in two different places. But we are understandably nervous about
> hanging that main db out there on the internet. So, I'm looking for the
> best recipe to minimize risk. Here's what I've thought about so far:
>
> - all 10,000 clients can get a separate Pg user account. Performance
> issues? Can we then restrict to a certain user/IP combo? Can we
> restrict what actions they can take, what tables they can see, or just
> whether or not they have access to the db? Does this even help? (See
> the pg_hba/GRANT sketch after this list.)
>
> - SSL? Is this even possible? The db client on those 10,000 machines
> is going to be a very lightweight C program out of necessity (perl and
> other languages are not supported; these machines are old, and often we
> don't have permission to install new languages on them anyway). (See
> the hostssl lines in the sketch after this list.)
>
> - the sensitive data fields can be encrypted in some reversible but
> secure fashion when we store them in the database (see the pgcrypto
> sketch after this list)
>
> - we can use things like tripwire, etc... to detect any unauthorized
> access to the db server machine
>
> - I have a nagging feeling I'm not seeing the big picture. Does
> Postgres have some other built-in security features that would help
> secure the box? Reverse lookups, maybe? Or something else?
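
On the per-account and SSL questions: yes. pg_hba.conf can pin each
user to a source address and (with hostssl) refuse non-SSL
connections, and GRANT/REVOKE limits what each account can see. A
sketch, using the modern pg_hba.conf format and invented names:

    # pg_hba.conf: one line per site account; hostssl requires SSL and
    # ties the user to a single source address (addresses invented)
    hostssl  fielddb  site0001  192.0.2.17/32    md5
    hostssl  fielddb  site0002  198.51.100.4/32  md5

and then in psql:

    -- limit each site account to exactly the tables it needs
    REVOKE ALL ON log_messages, patches FROM PUBLIC;
    GRANT INSERT ON log_messages TO site0001;
    GRANT SELECT ON patches TO site0001;

With 10,000 accounts that file gets long, but it is just a flat file
you can generate. For the reversible encryption, the contrib/pgcrypto
module is one option; a sketch (the 'aes' cipher wants a 16-, 24-, or
32-byte key, and keeping that key out of the database is the hard
part):

    -- encrypt on the way in ...
    INSERT INTO site_credentials (site_id, password_enc)
    VALUES (42, encrypt('s3cret'::bytea, 'sixteen-byte-key'::bytea, 'aes'));

    -- ... and decrypt on the way out
    SELECT decrypt(password_enc, 'sixteen-byte-key'::bytea, 'aes')
      FROM site_credentials
     WHERE site_id = 42;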
>
> I'm really interested in seeing what other people have done to alleviate
> these types of concerns, and what if anything I am missing as I approach
> the problem.
>
> Thanks for your time,
> Fran
>
