
how much mem to give postgres?

From: Josh Close <narshe(at)gmail(dot)com>
To: POSTGRES-PERFORMANCE <pgsql-performance(at)postgresql(dot)org>
Subject: how much mem to give postgres?
Date: 2004-10-19 22:31:59
Message-ID:
Lists: pgsql-performance
I'm trying to figure out what I need to do to get my postgres server
moving faster. It's just crawling right now. It's on a p4 HT with 2
gigs of mem.

I was thinking I need to increase the amount of shared buffers, but
I've been told "the sweet spot for shared_buffers is usually on the
order of 10000 buffers". I already have it set at 21,078. If you have,
say, 100 gigs of ram, are you supposed to still only give postgres
10000 buffers?
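For a sense of scale, each shared buffer is one 8 kB page (the default build-time block size), so the memory a given shared_buffers setting pins down is easy to estimate. A rough sketch:

```python
# Estimate the shared memory consumed by shared_buffers.
# Each buffer is one 8 kB disk page (the default block size).
def shared_buffers_mb(n_buffers, page_kb=8):
    return n_buffers * page_kb / 1024.0

print(shared_buffers_mb(10000))   # the oft-quoted "sweet spot": ~78 MB
print(shared_buffers_mb(21078))   # the setting in use here: ~165 MB
```

So even the "large" setting above only pins about 165 MB of the 2 GB; the rest of RAM is left to the OS cache.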

Also, do I need to up the shmmax at all? I've used the formula "250 kB
+ 8.2 kB * shared_buffers + 14.2 kB * max_connections up to infinity"
but it's never quite high enough, so I just make sure it's above the
amount that the postgres log says it needs.
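The quoted formula can be worked through directly. A sketch (the per-term kB values are taken from the formula above; the max_connections value is an assumption for illustration, and sysctl's kernel.shmmax wants bytes):

```python
# Rough SHMMAX estimate from the quoted rule of thumb:
#   250 kB + 8.2 kB * shared_buffers + 14.2 kB * max_connections
def shmmax_bytes(shared_buffers, max_connections):
    kb = 250 + 8.2 * shared_buffers + 14.2 * max_connections
    return int(kb * 1024)  # sysctl kernel.shmmax is in bytes

# e.g. the shared_buffers mentioned in this message, assuming 100 connections:
print(shmmax_bytes(21078, 100))  # ~178 MB
```

As noted, the formula undershoots in practice, so padding the result (or just using the figure from the postgres startup error) is reasonable.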

What else can I do to speed this server up? I'm running vacuum analyze
on the heavily updated/inserted/deleted db's once an hour, and doing a
full vacuum once a night. Should I change the vacuum_mem setting at
all?
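The schedule described above amounts to a crontab along these lines (a sketch; the database name mydb is hypothetical):

```
# m h dom mon dow  command
0 * * * *  vacuumdb --analyze --quiet mydb         # hourly VACUUM ANALYZE
0 3 * * *  vacuumdb --full --analyze --quiet mydb  # nightly VACUUM FULL
```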

Are there any other settings I should be concerned with? I've heard
about the effective_cache_size setting, but I haven't seen anything on
what the size should be.
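To make the settings in question concrete, here is a hypothetical postgresql.conf fragment for a 2 GB box. The numbers are illustrative assumptions, not recommendations from this thread; note that effective_cache_size is expressed in 8 kB pages and vacuum_mem in kB:

```
# postgresql.conf sketch for a 2 GB machine (illustrative values only)
shared_buffers = 10000          # 8 kB pages: ~80 MB, the quoted "sweet spot"
effective_cache_size = 131072   # 8 kB pages: ~1 GB, a guess at how much of
                                # the OS disk cache postgres can count on
vacuum_mem = 32768              # kB: more room for vacuum to track dead rows
```

effective_cache_size doesn't allocate anything; it only tells the planner how much caching to assume, so a rough estimate (roughly free RAM plus OS buffer cache) is fine.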

Any help would be great. This server is very very slow at the moment.

Also, I'm using a kernel with high mem enabled, so all the ram
is recognized.




pgsql-performance by date

Next: From: Tom Lane  Date: 2004-10-19 22:39:48
  Subject: Re: futex results with dbt-3
Previous: From: Kevin Barnard  Date: 2004-10-19 22:09:21
  Subject: Re: [PERFORM] Performance on Win32 vs Cygwin
