Re: Need advice on postgresql.conf settings

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Shane | SkinnyCorp" <shanew(at)skinnycorp(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Need advice on postgresql.conf settings
Date: 2004-11-09 21:56:56
Message-ID: 5508.1100037416@sss.pgh.pa.us
Lists: pgsql-performance

"Shane | SkinnyCorp" <shanew(at)skinnycorp(dot)com> writes:
> The real issue is this, we have THE SAME queries taking anywhere from .001 -
> 90.0 seconds... the server is using 98% of the available RAM at all times
> (because of the persistent connections via php), and I don't know what to
> do.

I have a feeling that the answer is going to boil down to "buy more RAM"
--- it sounds a lot like you're just overstressing your server. The
more active backends you have, the more RAM goes to process-local
memory, and the less is available for kernel disk cache. Even if you
don't go into outright swapping, the amount of disk I/O needed goes up
the smaller the kernel disk cache gets.

Another possible line of attack is to use persistent (pooled)
connections to cut down the number of live backend processes you need.
However, depending on what your application software is, that might
take more time/effort (= money) than dropping in some more RAM.
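
One illustrative possibility (assuming a pooler along the lines of pgpool sitting between Apache/PHP and the database; the parameter names are pgpool's and may differ in whatever version you'd actually deploy):

num_init_children = 32   # pooler processes accepting client connections
max_pool = 1             # cached backends per pooler process, ~32 backends total

That way the number of real backend processes stays bounded at a few dozen no matter how many Apache children are holding connections open.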

You can investigate this theory by watching "top" output (the first few
lines about memory usage, not the process listing) as well as "vmstat"
output.
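
For instance (a generic invocation; exact column labels depend on your vmstat version):

vmstat 5

Steady nonzero si/so (swap traffic), or bi/bo (disk blocks in/out) climbing as connections pile up, points in the same direction. In top it's the "free" and "cached" figures in the summary lines that matter, not the per-process numbers.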

> uptime: 12:23:08 up 132 days, 19:16, 2 users, load average: 19.75,
> 17.34, 18.86

Load averages approaching 20 are not good either ... what sort of box
are you running on anyway?

As for the postgresql.conf settings, the only ones I'd seriously question
are

max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 1000 # min 100, ~50 bytes each

These are the defaults, and are probably too small for a DB exceeding a
hundred meg or so.
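
As a rough illustration (the database size here is just an assumption for the arithmetic; pages are 8 kB): a 10 GB database is about 1.3 million pages, so if even a quarter of them see dead tuples between vacuums you'd want something on the order of

max_fsm_pages = 300000   # 6 bytes each, so well under 2 MB of shared memory
max_fsm_relations = 2000

If your version prints a free-space-map summary at the end of a database-wide VACUUM VERBOSE, that tells you how many page slots you actually need rather than guessing.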

max_files_per_process = 3052 # min 25

You really have your kernel set to support 3052 * 75 simultaneously open
files? Back this off. I doubt values beyond a couple hundred buy
anything except headaches.
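
To put a number on it (the 75 being, presumably, your max_connections): 3052 files per backend times 75 backends is roughly 229,000 potentially-open descriptors. Compare that against the kernel-wide limit, e.g. on Linux

cat /proc/sys/fs/file-max

and set something like

max_files_per_process = 200

in line with the couple-hundred figure above.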

wal_buffers = 192

This is an order-of-magnitude overkill too, especially if your
transactions are mostly small. I know it's only a megabyte or two,
but you evidently need that RAM more elsewhere.
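
For scale, WAL buffers are 8 kB apiece, so 192 of them is about 1.5 MB versus 64 kB at the default setting. Unless you're writing very large transactions, something like

wal_buffers = 8   # the default; 8 x 8 kB = 64 kB

gives that memory back.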

enable_seqscan = false

I don't think this is a good idea in general.
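
If the point was to force index use for one problem query, it's safer to flip it per-session while examining the plan, for instance

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT ... ;   -- the query in question
SET enable_seqscan = on;

and leave the global setting alone; the planner cost parameters (random_page_cost in particular) are the intended knobs if the planner keeps picking seqscans where indexscans win.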

regards, tom lane
