Re: SSD options, small database, ZFS

From: Amitabh Kant <amitabhkant(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: SSD options, small database, ZFS
Date: 2011-11-18 17:19:48
Message-ID: CAPTAQB+RsmDMtGFz0dzuFx+Pa2StfGH+b7N=xj1MsmfVY_KKtg@mail.gmail.com
Lists: pgsql-performance

On Fri, Nov 18, 2011 at 3:39 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:

> On 11/17/2011 10:44 PM, CSS wrote:
>
>> Is there any sort of simple documentation on the query planner that might
>> cover how things like increased RAM could impact how a query is executed?
>>
>
> There is no *simple* documentation on any part of the query planner that's
> also accurate. Query planning is inherently complicated.
>
> I think this point wasn't quite made clearly. PostgreSQL has no idea how
> much memory is in your system; it doesn't try to guess or detect it.
> However, when people move from one system to a larger one, they tend to
> increase some of the query planning parameters in the postgresql.conf to
> reflect the new capacity. That type of change can cause various types of
> query plan changes. Let's say your old system has 16GB of RAM and you set
> effective_cache_size to 12GB; if you upgrade to a 64GB server, it seems
> logical to increase that value to 48GB to keep the same proportions. But
> that change can give you different plans, and it's possible they will be
> worse. There's a similar concern if you change work_mem because you have
> more memory, because that will alter how plans do things like sorting and
> hashing.
>
> But you don't have to make any changes. You can migrate to the new
> hardware with zero modifications to the Postgres configuration, then
> introduce changes later.
>
> The whole memory speed topic is also much more complicated than any
> simple explanation can cover. How many banks of RAM you can use
> effectively changes based on the number of CPUs and associated chipset too.
> Someone just sent me an explanation recently of why I was seeing some
> strange things on my stream-scaling benchmark program. That dove into a
> bunch of trivia around how the RAM is actually accessed on the motherboard.
> One of the reasons I keep so many samples on that program's page is to
> help people navigate this whole maze, and have some data points to set
> expectations against. See https://github.com/gregs1104/stream-scaling
> for the code and the samples.
>
> --
> Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
>
>
>
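Greg's effective_cache_size example can be written out as a postgresql.conf sketch. The 16GB/64GB figures come from his example; note that effective_cache_size is only a planner hint, so raising it changes plan choices, not memory allocation:

```
# --- Old 16GB server (per the example above) ---
effective_cache_size = 12GB   # planner hint only; no memory is allocated for this

# --- After moving to a 64GB server, keeping the same 3/4 proportion ---
# effective_cache_size = 48GB # may produce different plans, possibly worse ones;
                              # safe alternative: migrate first with no changes,
                              # then adjust and compare plans afterwards
```

Because nothing forces you to scale these values, a reasonable approach is to keep the old settings on the new hardware and use EXPLAIN on representative queries before and after each parameter change.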
Greg

On a slightly unrelated note, you had once (
http://archives.postgresql.org/pgsql-general/2011-08/msg00944.php) said to
limit shared_buffers max to 8 GB on Linux and leave the rest for OS
caching. Does the same advice hold on FreeBSD systems too?

Amitabh
