Re: postgres 7.4 at 100%

From: Frank Knobbe <frank(at)knobbe(dot)us>
To: josh(at)agliodbs(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: postgres 7.4 at 100%
Date: 2004-06-28 21:46:55
Message-ID: 1088459215.551.18.camel@localhost
Lists: pgsql-performance

On Mon, 2004-06-28 at 14:40, Josh Berkus wrote:
> As one of the writers of that article, let me point out:
>
> " -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)
> -- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) "
>
> While this is probably a little conservative, it's still way bigger than 40.
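
For reference, the page counts in the quoted ranges are just the megabyte figures divided by PostgreSQL's default 8 KB block size (BLCKSZ). A quick sketch of the conversion — the helper names here are mine, purely for illustration:

```python
# Convert between shared_buffers pages and megabytes, assuming the
# default 8 KB PostgreSQL block size (BLCKSZ).
BLOCK_SIZE_KB = 8

def pages_to_mb(pages):
    """Megabytes of shared memory used by a given shared_buffers setting."""
    return pages * BLOCK_SIZE_KB / 1024

def mb_to_pages(mb):
    """shared_buffers value (in pages) for a target size in MB."""
    return mb * 1024 // BLOCK_SIZE_KB

print(pages_to_mb(4096))  # 32.0 MB -- top of the quoted "medium" range
print(mb_to_pages(256))   # 32768 pages -- top of the quoted "large" range
```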

I agree that 40 is a bit weak :) Chris' system has only 512 MB of RAM
though. I thought the quick response "..for any kind of production
server, try 5000-10000..." -- without considering how much memory he has
-- was a bit... uhm... eager.

Besides, if the shared memory is used to queue client requests,
shouldn't that memory be sized according to workload (i.e., number of
clients, transactions per second, etc.) instead of just taking a
percentage of the total amount of memory? If there are only a few
connections, why waste shared memory on that when the memory could be
better used as file system cache to prevent PG from going to the disk so
often?
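
To make the idea concrete, here is a purely hypothetical back-of-the-envelope sketch of workload-driven sizing — the formula and numbers are mine, not from the article: reserve enough 8 KB buffers to cover each connection's hot working set, and leave the rest of RAM to the OS cache.

```python
# Hypothetical workload-driven sizing sketch (illustrative only, not a
# recommended formula): size shared_buffers so each active connection's
# hot data fits, rather than taking a fixed percentage of total RAM.
BLOCK_SIZE_KB = 8

def shared_buffers_pages(connections, working_set_mb_per_conn):
    """Pages needed so every active connection's hot data fits in buffers."""
    total_kb = connections * working_set_mb_per_conn * 1024
    return total_kb // BLOCK_SIZE_KB

# e.g. 20 connections, each touching roughly 2 MB of hot data:
print(shared_buffers_pages(20, 2))  # 5120 pages = 40 MB
```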

I understand tuning PG is almost an art form, yet it should be based on
actual usage patterns, not just on system dimensions, don't you agree?

Regards,
Frank
