Re: Speed / Server

From: Nikolas Everett <nik9000(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: anthony(at)resolution(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: Speed / Server
Date: 2009-10-06 13:21:08
Message-ID: d4e11e980910060621j39a16056h8b781189ae9ba3ac@mail.gmail.com
Lists: pgsql-performance

If my un-word-wrapping is correct, you're running ~90% user CPU. Yikes. Could
you get away with fewer disks for this kind of thing?
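That ~90% figure can be checked by averaging the "us" (user CPU) column of the quoted vmstat output. A minimal sketch (the first vmstat line reports averages since boot, so it is dropped):

```python
# "us" column from the quoted vmstat sample, with the first
# (since-boot) line dropped; the rest are live 1-interval samples.
samples = [90, 90, 90, 90, 89, 90, 90]
avg = sum(samples) / len(samples)
print(f"average user CPU: {avg:.1f}%")  # roughly 90%
```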

On Mon, Oct 5, 2009 at 5:32 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:

> On Mon, Oct 5, 2009 at 7:30 AM, Nikolas Everett <nik9000(at)gmail(dot)com> wrote:
> >
> >> But you should plan on partitioning to multiple db servers up front
> >> and save pain of conversion later on. A dual socket motherboard with
> >> 16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a
> >> similar machine with 4 to 8 sockets is gonna be. And if you gotta go
> >> there anyway, might as well spend your money on other stuff.
> >>
> >
> > I agree. If you can partition that sensor data across multiple DBs and
> > have your application do the knitting you might be better off. If I may
> > be so bold, you might want to look at splaying the systems out across
> > your backends. I'm just trying to think of a dimension that you won't
> > want to aggregate across frequently.
>
> Agreed back. If there's a logical dimension to split data on, it
> becomes much easier to throw x machines at it than to try and build
> one ubermachine to handle it all.
>
> > On the other hand, one of these 16 to 32 SAS drive systems with a raid
> > card will likely get you a long way.
>
> Yes they can. We're about to have to add a third db server, cause
> this is the load on our main slave db:
>
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd   free   buff    cache   si   so    bi    bo    in    cs us sy id wa st
> 22  0    220 633228 229556 28432976    0    0   638   304     0     0 21  3 73  3  0
> 19  1    220 571980 229584 28435180    0    0    96  1111  7091  9796 90  6  4  0  0
> 20  0    220 532208 229644 28440244    0    0   140  3357  7110  9175 90  6  3  0  0
> 19  1    220 568440 229664 28443688    0    0   146  1527  7765 10481 90  7  3  0  0
>  9  1    220 806668 229688 28445240    0    0    99   326  6661 10326 89  6  5  0  0
>  9  0    220 814016 229712 28446144    0    0    54  1544  7456 10283 90  6  4  0  0
> 11  0    220 782876 229744 28447628    0    0    96   406  6619  9354 90  5  5  0  0
> 29  1    220 632624 229784 28449964    0    0   113   994  7109  9958 90  7  3  0  0
>
> It's working fine. This has 16 15k5 SAS disks: a 12 disk RAID-10,
> a 2 disk mirror for pg_xlog / OS, and two spares. It has 8 opteron
> cores and 32Gig ram. We're completely CPU bound because of the type of
> app we're running. So time for slave number 2...
>
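The application-side "knitting" described above can be sketched as routing each sensor's rows to one backend database by hashing a partition key. This is only an illustrative sketch; the backend names and the `backend_for` helper are hypothetical, not anything from the thread:

```python
# Hypothetical sketch of application-level partitioning: hash the
# sensor id to pick one of several backend databases, so all rows
# for a given sensor land on (and are queried from) the same server.
import zlib

BACKENDS = ["db0.example.com", "db1.example.com", "db2.example.com"]

def backend_for(sensor_id: str) -> str:
    # zlib.crc32 is stable across processes, unlike Python's hash().
    return BACKENDS[zlib.crc32(sensor_id.encode()) % len(BACKENDS)]

# The same sensor always maps to the same backend.
assert backend_for("sensor-42") == backend_for("sensor-42")
```

Picking a dimension you rarely aggregate across (here, the sensor) keeps most queries on a single backend; cross-sensor reports then require merging results in the application.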
