Re: performance problem - 10.000 databases

From: "Matt Clark" <matt(at)ymogen(dot)net>
To: "Marek Florianczyk" <franki(at)tpi(dot)pl>, "Jamie Lawrence" <postgres(at)jal(dot)org>
Cc: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: performance problem - 10.000 databases
Date: 2003-10-31 14:30:57
Message-ID: OAEAKHEHCMLBLIDGAFELIEDNDPAA.matt@ymogen.net
Lists: pgsql-admin

Hmm, maybe you need to back off a bit here on your expectations. You said your test involved 400 clients simultaneously running
queries that hit pretty much all the data in each client's DB. Why would you expect that to be anything *other* than slow?

And does it reflect expected production use? Unless those 10,000 sites are all fantastically popular, surely it's more likely that
only a small number of queries will be in progress at any given time? You're effectively simulating running 400 _very_ popular
dynamic websites off one 2-cpu DB server.

You also said that CPU is pegged at 100%. Given that you've got 400 backends all competing for CPU time you must have an insane
load average too, so improving the connect time might prove to be of no use, as you could well just get faster connects and then
slower queries!

Sorry this email wasn't more constructive ;-)

M

> -----Original Message-----
> From: pgsql-admin-owner(at)postgresql(dot)org [mailto:pgsql-admin-owner(at)postgresql(dot)org]On Behalf Of Marek Florianczyk
> Sent: 31 October 2003 13:20
> To: Jamie Lawrence
> Cc: Matt Clark; pgsql-admin(at)postgresql(dot)org
> Subject: Re: [ADMIN] performance problem - 10.000 databases
>
>
> On Fri, 31-10-2003, at 13:54, Jamie Lawrence wrote:
> > On Fri, 31 Oct 2003, Matt Clark wrote:
> >
> > > I was more thinking that it might be possible to manage the security at a different level than the DB.
> > >
> >
> >
> > We do this with users and permissions.
> >
> > Each virtual host has an apache config include specifying a db user,
> > pass (and database, although most of them use the same one).
> > Permissions on the database tables are set so that a given vhost can
> > only access their own data.
> >
> > Our setup is mod_perl. Don't know how one would go about doing this with
> > PHP, but I imagine it has some mechanism for per-vhost variables or
> > similar.
>
> So, as I understand it, each apache vhost can only connect to a specified database.
> Strange... no PHP, only mod_perl that fetches data from the database and writes
> the html document? So clients don't write any scripts, and don't use
> functions like pg_connect? Do they use CGI with mod_perl and write their
> scripts in perl? Interesting.
> Don't know if it's possible with PHP; I don't think so.
> But... If I had 200, or even 900 clients, I would do apache with
> vhosts. But when I have 10.000 clients, apache cannot work with vhosts
> (some system limitation), so we use our own dynamic vhost module. When a
> request is made to the server, it checks the domain part of the request,
> searches in LDAP for the DocumentRoot for that domain, and then returns the
> proper file. The config looks like it is only one vhost, but it works with
> 10.000 domains ;)
> No, I don't think that your solution would work for us.
> Everything is complicated when a large number of anything occurs. ;)
>
> greetings
> sorry for my bad english
>
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to majordomo(at)postgresql(dot)org)
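
[Editor's note: the dynamic-vhost lookup Marek describes above (map the request's domain to a DocumentRoot via a directory lookup) can be sketched roughly as below. This is a hypothetical illustration, not the actual module: the function name `document_root` is invented, and a plain dict stands in for the LDAP directory a real module would query.]

```python
# Minimal sketch of a dynamic-vhost lookup, assuming a directory that
# maps domain -> DocumentRoot. In the real setup this would be an LDAP
# search; here a dict stands in for the directory.

def document_root(host_header, directory, default="/var/www/default"):
    """Return the DocumentRoot for a request's Host header."""
    # Strip an optional port ("example.com:8080" -> "example.com")
    # and normalize case before the lookup.
    domain = host_header.split(":")[0].lower()
    return directory.get(domain, default)

directory = {
    "client1.example.net": "/home/vhosts/client1",
    "client2.example.net": "/home/vhosts/client2",
}

print(document_root("Client1.example.net:80", directory))
print(document_root("unknown.example.org", directory))
```

One config serves every domain because the mapping lives in the directory, not in per-vhost apache config blocks, which is how 10.000 domains avoid the per-vhost limits mentioned above.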
