On Tue, Jul 21, 2009 at 9:47 PM, Kelvin Quee<kelvinq(at)gmail(dot)com> wrote:
> Hi Performance Wizards!
> I need advice on this.
> I have a db which is being constantly updated and queried by a few
> computers. We are doing data mining. The db runs on a moderately
> powered machine whose processors constantly hit 90%.
When your CPUs say 90%, is that regular user / sys %, or is it wait %?
The difference is very important.
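To tell the two apart, look at the iowait column in `vmstat 1` (the "wa" column) or `iostat -x`. As a minimal sketch, assuming Linux where /proc/stat is available:

```shell
# Rough breakdown of cumulative CPU time from /proc/stat (Linux only).
# On the "cpu" line, field 2 is user ticks, 4 is system, 6 is iowait.
awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i;
     printf "user+sys: %.1f%%  iowait: %.1f%%\n", 100*($2+$4)/t, 100*$6/t}' /proc/stat
```

High iowait with modest user/sys means faster disks (or more RAM for caching) will buy you more than faster CPUs.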
What kind of hardware are you running on, btw? Number of CPUs, memory,
number and type of drives, RAID controller if any?
> At the same time, we need to present these data on a web interface.
> The performance for the web interface is now very sluggish as most of
> the power is occupied by the mining process.
> I have thought of a few ways out of this -
> 1) Buy a mega powered machine (temporary solution, quick fix)
Depends very much on what you're bound by, CPU or I/O. If adding a
couple of 15k SAS drives would double your performance, then you don't
need a super powerful machine.
> 2) Do a master-slave configuration
Often a good choice.
> 3) Separate the DB into 2 - One for pure mining purposes, the other
> purely for web serving
> For (2), I do not know if it will be very effective since the master
> will probably have many changes at any moment. I do not understand how
> the changes will be propagated from the master to the slave without
> impacting the slave's performance. Anyone with more experience here?
> (3) seems ideal but is a very very painful solution!
> We can possibly use a message queue system but again I am not familiar
> with MQ. Will need to do more research.
That could be a very complex solution.
> If you were me, how would you solve this problem?
Slony, most likely.
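For option (2), trigger-based replication such as Slony-I fits well here: the slave applies the master's changes asynchronously, so the web interface reads from a consistent, slightly-lagged copy without competing with the mining workload. Very roughly, a slonik setup script looks like the sketch below. All cluster, host, and table names are placeholders, so treat this as an outline rather than a working config:

```
# Hypothetical slonik script -- every name and conninfo string is a placeholder.
cluster name = mining_cluster;

node 1 admin conninfo = 'dbname=mydb host=master-host user=slony';
node 2 admin conninfo = 'dbname=mydb host=slave-host user=slony';

init cluster (id = 1, comment = 'Master (mining writes)');

create set (id = 1, origin = 1, comment = 'Tables served to the web app');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.mined_results');

store node (id = 2, event node = 1, comment = 'Slave (web reads)');
store path (server = 1, client = 2,
            conninfo = 'dbname=mydb host=master-host user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=mydb host=slave-host user=slony');

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
```

The trade-off to be aware of: the slave does spend some I/O and CPU applying replicated changes, but applying already-computed row changes is far cheaper than re-running the mining queries, so the web-facing box stays mostly free for reads.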