Re: 1 TB of memory

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: "Rodrigo Madera" <rodrigo(dot)madera(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: 1 TB of memory
Date: 2006-03-17 14:57:36
Message-ID: b42b73150603170657k3b32705do9adb4ca64f10ea21@mail.gmail.com
Lists: pgsql-performance

On 3/17/06, Rodrigo Madera <rodrigo(dot)madera(at)gmail(dot)com> wrote:
> I don't know about you databasers that crunch in some selects, updates
> and deletes, but my personal developer workstation is planned to be a
> 4x 300GB SATA300 with a dedicated RAID stripping controller (no
> checksums, just speedup) and 4x AMD64 CPUs... not to mention 2GB for
> each processor... all this in a nice server motherboard...

no doubt, that will handle quite a lot of data. in fact, most
databases (contrary to popular opinion) are cpu bound, not i/o bound.
However, at some point a different set of rules comes into play. This
point is constantly changing due to the relentless march of hardware,
but I'd suggest that at around 1TB you can no longer count on things
running quickly just by depending on o/s file caching to bail you out.
Or, you may have a single table + indexes that's 50 gb and takes 6
hours to vacuum, eating all your i/o.
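(as an aside, the usual answer to a vacuum that hogs i/o is the
cost-based vacuum delay settings in postgresql.conf -- a sketch, with
purely illustrative values, not recommendations:

```ini
# throttle vacuum so it yields the disk to regular queries:
# after accumulating vacuum_cost_limit "cost points", vacuum
# sleeps for vacuum_cost_delay milliseconds before continuing
vacuum_cost_delay = 10          # ms; 0 disables the throttle
vacuum_cost_limit = 200         # cost points before each sleep
```

this makes the vacuum take longer in wall-clock time, but it stops
monopolizing the i/o channel in the meantime.)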

another useful aspect of SSD is that the relative value of system
memory is much lower, so you can reduce swappiness and tune postgres
to rely more on the filesystem, giving more of your memory to
work_mem and such.
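(for the curious, that tuning direction looks roughly like this --
values are illustrative assumptions, not recommendations:

```ini
# postgresql.conf: keep postgres's own cache modest and rely on the
# o/s file cache for table data...
shared_buffers = 50000          # 8kb pages

# ...while giving individual sorts/hashes generous per-operation memory
work_mem = 65536                # kb, per sort/hash operation

# at the OS level (/etc/sysctl.conf on linux), discourage swapping
# of process memory in favor of file cache:
#   vm.swappiness = 10
```

note work_mem is per operation, not per connection, so a complex query
on a busy box can use several multiples of it.)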

merlin
