Re: 1 TB of memory

From: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
To: "Jim Nasby" <jnasby(at)pervasive(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: 1 TB of memory
Date: 2006-03-17 07:24:00
Message-ID: C03FA410.1F607%llonergan@greenplum.com
Lists: pgsql-performance

Jim,

On 3/16/06 10:44 PM, "Luke Lonergan" <llonergan(at)greenplum(dot)com> wrote:

> Plus - need more speed? Add 12 more servers, and you'd run at 12.8GB/s and
> have 96TB of disk to work with, and you'd *still* spend less on HW and SW
> than the SSD.

And I forgot to mention that with these 16 servers you'd have 64 CPUs and
256GB of RAM working for you in addition to the 96TB of disk. Every query
would use all of that RAM and all of those CPUs, all at the same time.

By comparison, with the SSD you'd have 1 CPU trying to saturate 1
connection to the SSD. If you do anything other than just access the data
there (order by, group by, join, aggregation, functions), you'll be faced
with having 1 CPU do all the work on 1TB of data. I suggest that it won't
be any faster than having the 1TB on disk for most queries, because you
would be CPU bound either way.
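To put rough numbers on the bottleneck argument (a sketch with made-up rates, not measurements: the CPU processing rate of 0.2 GB/s and the I/O rates here are assumptions for illustration):

```python
# Simple bottleneck model: elapsed time is set by the slower of
# I/O delivery and CPU processing for a scan-heavy query.
def query_time_s(data_gb, io_gb_s, cpu_gb_s):
    """Seconds to process data_gb at the given I/O and CPU rates."""
    return data_gb / min(io_gb_s, cpu_gb_s)

# Assume 1 CPU processes ~0.2 GB/s once sorting/joining is involved.
ssd = query_time_s(1024, io_gb_s=2.0, cpu_gb_s=0.2)   # fast I/O, 1 CPU
disk = query_time_s(1024, io_gb_s=0.8, cpu_gb_s=0.2)  # slow I/O, 1 CPU
print(ssd, disk)  # both 5120.0 - the single CPU is the bottleneck either way
```

Once the CPU rate is below the I/O rate, the faster storage buys you nothing.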

By comparison, with the MPP system all 64 CPUs would be used at one time to
process the N TB of data, and if you grew from N TB to 2N TB, you could
double the machine size and it would take the same amount of time to process
2N as it did to process N. That's what data parallelism and scaling is all
about. Without it, you don't have a prayer of using all 1TB of data in
queries.
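The scaling arithmetic can be sketched as follows (the 0.8 GB/s per-server rate is derived from the 12.8 GB/s across 16 servers figure above; the model assumes perfectly linear scan parallelism):

```python
# Linear-scaling scan model: time = data / (per-server rate * server count).
# 12.8 GB/s over 16 servers works out to 0.8 GB/s per server.
def scan_time_s(data_tb, servers, per_server_gb_s=0.8):
    """Seconds to scan data_tb terabytes spread evenly across servers."""
    return (data_tb * 1024) / (per_server_gb_s * servers)

t_n  = scan_time_s(1, 16)   # N TB on the 16-server system
t_2n = scan_time_s(2, 32)   # 2N TB after doubling the machine
print(t_n, t_2n)  # both 80.0 seconds - double the data, double the servers
```

Doubling data and servers together keeps the elapsed time constant, which is the scaling claim above in miniature.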

- Luke
