
Re: hardware performance and some more

From: Kasim Oztoprak <kasim(at)saglik(dot)gov(dot)tr>
To: shridhar_daithankar(at)persistent(dot)co(dot)in
Subject: Re: hardware performance and some more
Date: 2003-07-24 18:25:38
Message-ID:
Lists: pgsql-performance
On 24 Jul 2003 17:08 EEST you wrote:

> On 24 Jul 2003 at 15:54, Kasim Oztoprak wrote:
> > The questions for this explanation are:
> >       1 - Can we use postgresql within clustered environment?
> >       2 - if the answer is yes, in which method can we use postgresql within a cluster?
> >       active - passive or active - active?
> Coupled with Linux-HA (see the heartbeat service), it *should*
> be possible to run postgresql in active-passive clustering.
> If postgresql supported read-only databases, so that several nodes could read off
> a single disk but only one could update it, a sort of active-active should be
> possible as well. But postgresql cannot have a read-only database. That would
> be a handy addition in such cases.

So in a master-slave configuration we can use the system within a clustered environment.
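The master-slave idea above can be sketched as a tiny query router: reads may go to any node, writes only to the master. This is a hypothetical illustration (node names and the `route` helper are made up, not part of postgresql or Linux-HA):

```python
# Hypothetical active-passive/read-routing sketch. All names are invented
# for illustration; postgresql itself provides no such router here.
import itertools

MASTER = "db-master"
SLAVES = ["db-slave1", "db-slave2"]

# Round-robin over all nodes for reads; the master is included.
_read_pool = itertools.cycle([MASTER] + SLAVES)

def route(sql: str) -> str:
    """Send SELECTs to any node in turn; send everything else to the master."""
    if sql.lstrip().lower().startswith("select"):
        return next(_read_pool)
    return MASTER
```

The design point matches the discussion above: as long as only one node can update the data, writes must be pinned to that node, while read-only statements can be spread out.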

> > Now, the second question is related to the performance of the database. Assume we have a
> > Dell PowerEdge 6650 with 4 x 2.8 GHz Xeon processors, each with 2 MB of cache, and a
> > main memory of, let's say, 32 GB. We can either use a small SAN from EMC or we can put all disks
> > into the machine with the required RAID configuration.
> > 
> > We will install RedHat Advanced Server 2.1 on the machine as the operating system and postgresql as
> > the database server. We have a database of 25 million records with an average length of 250 bytes
> > per record, and there are 1000 operators accessing the database concurrently. The main
> > operation on the database (about 95%) is select rather than insert, so do you have any idea about
> > the performance of the system?
> Assuming 325 bytes per tuple (the 250-byte record plus a 24-28 byte header and
> varchar overhead) gives 25 tuples per 8K page, so there would be about 8GB of data.
> This configuration could fly with 12-16GB of RAM, once all the data has been read
> into cache, that is. You can cut down on the other requirements as well. Maybe a
> 2x Opteron with 16GB of RAM might be a better fit, but check out how much CPU cache it has.
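The sizing arithmetic quoted above can be checked with a few lines (the 325-byte tuple estimate and 8K page size come from the mail itself):

```python
# Back-of-the-envelope sizing from the figures quoted above.
TUPLE_BYTES = 325        # 250-byte record + ~24-28 byte header + varchar overhead
PAGE_BYTES = 8 * 1024    # default postgresql page size
N_TUPLES = 25_000_000

tuples_per_page = PAGE_BYTES // TUPLE_BYTES        # -> 25
total_gb = N_TUPLES * TUPLE_BYTES / (1024 ** 3)    # ~7.6 GB, i.e. "about 8GB"
```

This is why 12-16GB of RAM is enough to hold the whole table in cache once it has been read.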

We do not have memory or disk problems. As I have seen on the list, the best way to
use the disks is RAID 10 for data and RAID 1 for the OS. We can put in as much memory as
we require.

Now the question: if we have 100 searches per second, and each search needs 30 SQL
statements, what will the response time of the system be? Let us say
we have two machines as described above in a cluster.
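The load implied by the question above works out as follows (a rough sketch using only the numbers given; the per-statement budget assumes serial execution per machine, which real concurrency would relax):

```python
# Rough throughput arithmetic for the workload described above.
searches_per_sec = 100
statements_per_search = 30
machines = 2

total_qps = searches_per_sec * statements_per_search  # 3000 statements/sec overall
qps_per_machine = total_qps / machines                # 1500 statements/sec each

# Average CPU-time budget per statement if each machine ran them serially;
# with 1000 concurrent operators and multiple CPUs the real latency budget
# is looser than this.
budget_ms = 1000 / qps_per_machine
```

So each node would need to sustain roughly 1500 statements per second, which is the number to benchmark against.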

> A grep -rwn across the data directory would fill the disk cache pretty well :-)
> Bye
>  Shridhar
> --
> Egotism, n: Doing the New York Times crossword puzzle with a pen. Egotist, n: A
> person of low taste, more interested in himself than me. -- Ambrose Bierce,
> "The Devil's Dictionary"


