Doug Y wrote:
> I've been having some performance issues with a DB I use. I'm trying
> to come up with some performance recommendations to send to the
> CPU0: Pentium III (Coppermine) 1000MHz (256k cache)
> CPU1: Pentium III (Coppermine) 1000MHz (256k cache)
> Memory: 3863468 kB (4 GB)
> OS: Red Hat Linux release 7.2 (Enigma)
> Kernel: 2.4.9-31smp
> I/O I believe is a 3-disk raid 5.
> /proc/sys/kernel/shmmax and /proc/sys/kernel/shmall were set to 2G
> Postgres version: 7.3.4
> The DB schema is, well to put it nicely... not exactly normalized. No
> constraints to speak of except for the requisite not-nulls on the
> primary keys (many of which are compound). Keys are mostly varchar(256)
> Ok for what I'm uncertain of...
> According to http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
> It's more of a staging area, and more isn't necessarily better;
> PostgreSQL relies on the OS to cache data for later use.
> But according to
> http://www.ca.postgresql.org/docs/momjian/hw_performance/node3.html it's
> where PostgreSQL caches previous data for queries because the OS cache is
> slower, and should be as big as possible without causing swap.
> Those seem to be conflicting statements. In our case, the
> "administrator" kept increasing this until performance seemed to
> increase, which means it's now 250000 (x 8k is 2G).
> Is this just a staging area for data waiting to move to the OS cache, or
> is this really the area that psql caches its data?
It is the area where PostgreSQL works: it updates data in this area and pushes
it to the OS cache for disk writes later.
In my experience, larger does not mean better for this parameter. For
multi-gigabyte RAM machines, the best value (on average, across a wide variety
of loads) has been found to be around 10000-15000, maybe even lower.
It is well known that raising this parameter unnecessarily decreases
performance. You indicate that best performance occurred at 250000; that is
very large compared to other people's experience.
> Again, according to the Varlena guide this tells PostgreSQL how much system
> memory is available for it to do its work in.
> until recently, this was set at the default value of 1000. It was just
> recently increased to 180000 (1.5G)
> according to
> it should be about 25% of memory?
No rule of thumb. It is the amount of memory the OS will dedicate to
PostgreSQL data buffers. Depending upon what else you run on the machine, it
can be a straightforward or a tricky value to calculate. For a 4 GB machine,
1.5 GB is quite good, but coupled with 2 GB of shared buffers it could push
the machine into a swap storm. And swapping shared buffers is a big
performance hit.
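To put the pieces side by side, here is a rough postgresql.conf sketch for
this 4 GB box in 7.3 syntax (shared_buffers and effective_cache_size are in
8 kB pages, sort_mem is in kB). These are illustrative starting points based
on the discussion above, not tested recommendations:

```shell
# Print a candidate config fragment; adjust after measuring with ipcs/vmstat.
cat <<'EOF'
shared_buffers = 15000        # ~117 MB, top of the 10000-15000 range
effective_cache_size = 180000 # ~1.4 GB expected to live in the OS page cache
sort_mem = 16000              # 16 MB per sort step
EOF
```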
> Finally sort_mem:
> Was until recently left at the default of 1000. Is now 16000.
Sort memory is allocated per sort, not per query or per connection. So
depending upon how many concurrent connections you entertain, it could take
quite a chunk of RAM.
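The per-sort multiplication is worth spelling out. Assuming a hypothetical 50
concurrent connections, each running a query with two sort steps at the
current sort_mem of 16000 kB (the connection and sort counts are made-up
illustration, not figures from the original post):

```shell
# Worst case RAM claimed by sorts alone: connections x sorts x sort_mem (kB).
echo "worst case: $((50 * 2 * 16000 / 1024)) MB"
```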
> Increasing the effective cache and sort mem didn't seem to make much of
> a difference. I'm guessing the eff cache was probably raised a bit too
> much, and shared_buffers is way too high.
I agree. For shared_buffers, start with 5000 and increase in batches of 1000.
Or set it to a high value and check with ipcs for maximum shared memory usage:
if shared memory usage peaks at 100 MB, you don't need more than, say, 120 MB
of buffers.
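A sketch of that ipcs check (the segment key and sizes shown by ipcs will of
course differ on your machine; column 5 of `ipcs -m` on Linux is the segment
size in bytes):

```shell
# List System V shared memory segments; PostgreSQL's buffer pool shows up
# as one large segment owned by the postgres user:
ipcs -m

# Sum segment sizes (column 5, bytes) to see total usage in MB:
ipcs -m | awk '/^0x/ {sum += $5} END {printf "total: %d MB\n", sum/1024/1024}'
```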
> What can I do to help determine what the proper settings should be
> and/or look at other possible choke points. What should I look for in
> iostat, mpstat, or vmstat as red flags that cpu, memory, or i/o bound?
Yes. vmstat is usually a lot of help in locating the bottleneck.
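A sketch of what to watch for, assuming the classic 2.4-era vmstat columns
(si/so for swap, b for blocked processes, us/sy/id for CPU):

```shell
# Sample system activity every 5 seconds (Ctrl-C to stop):
vmstat 5
#  si/so persistently nonzero -> swapping; the ~2 GB shared_buffers is the
#                                prime suspect for memory pressure
#  b high while id stays high -> processes blocked on disk: I/O-bound
#                                (RAID 5 is notoriously slow on writes)
#  us+sy near 100, b near 0   -> CPU-bound; look at the queries themselves
```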
> DB maintenance wise, I don't believe they were running vacuum full until
> I told them a few months ago that regular vacuum analyze no longer
> cleans out dead tuples. Now normal vac is run daily, vac full weekly
> (supposedly). How can I tell from the output of vacuum if the vac fulls
> aren't being done, or not done often enough? Or from the system tables,
> what can I read?
In 7.4 you can do VACUUM FULL VERBOSE and it will tell you the stats at the
end. In 7.3.x, that's not there.
I suggest you VACUUM FULL the database once. (For a large database, dumping
and restoring might work faster. Dump/restore and VACUUM FULL both lock the
database exclusively, i.e. downtime, so I guess the faster the better for
you. But there is no tool/guideline to determine which way to go.)
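For the one-time cleanup, something along these lines ("mydb" is a
placeholder database name, and the grep pattern assumes 7.3's
"Removed N tuples in M pages" INFO lines, so treat it as a sketch):

```shell
# VACUUM FULL takes exclusive table locks, so schedule downtime:
psql -d mydb -c 'VACUUM FULL ANALYZE;'

# The verbose form prints per-table INFO lines with tuple counts, which
# hint at how much dead space the regular vacuums were leaving behind:
psql -d mydb -c 'VACUUM FULL VERBOSE;' 2>&1 | grep -i 'tuples'
```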
> Is there anywhere else I can look for possible clues? I have access to
> the DB super-user, but not the system root/user.
Other than hardware tuning, find the slow/frequent queries and use EXPLAIN
ANALYZE to determine why they are so slow. Forgetting to typecast a WHERE
clause and falling back to a sequential scan could cost you a lot more than a
mistuned PostgreSQL.
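The typecast pitfall is the classic pre-8.0 int8 case. Hypothetical
illustration (the table "orders", its int8 column "id", and "mydb" are all
made up for the example):

```shell
# On 7.3 a bare integer literal is typed int4, so an index on an int8
# column is skipped and the plan shows a Seq Scan:
psql -d mydb -c 'EXPLAIN ANALYZE SELECT * FROM orders WHERE id = 42;'

# Casting (or quoting) the literal lets the planner use the index:
psql -d mydb -c 'EXPLAIN ANALYZE SELECT * FROM orders WHERE id = 42::int8;'
```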
> Thank you for your time. Please let me know any help or suggestions you
> may have. Unfortunately upgrading postgres, OS, kernel, or re-writing
> schema is most likely not an option.
I hope you can change your queries.