From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: "Williams, Travis L, NPONS" <tlw(at)att(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Performance question..
Date: 2003-06-13 16:39:57
Message-ID: Pine.LNX.4.33.0306131030030.20410-100000@css120.ihs.com
Lists: pgsql-general
On Wed, 11 Jun 2003, Williams, Travis L, NPONS wrote:
> All,
> I'm looking for ideas on tweaking pgsql.. here are my machine stats:
>
> Processor 0 runs at 550 MHz
> Processor 1 runs at 550 MHz
> Page Size : 4096
> Phys Pages: 131072
> Total Physical memory = 536870912 (512MB)
SNIP
> # Shared Memory Size
> #
> shared_buffers = 128 # 2*max_connections, min 16
WAYYYY too small. Try 500 to 2000 for starters. Note that bigger isn't
always better; it's about fitting shared_buffers to your usage. The setting
is in 8k blocks, so 1000 is really only about 8 megs. Bigger servers run
settings as high as 32768, which is 256 megs.
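For example, something like this in postgresql.conf (2000 here is just an
illustrative starting point to benchmark against, not a tuned value):

    # 2000 buffers * 8k per buffer = ~16 megs of shared memory
    shared_buffers = 2000

If the postmaster then refuses to start, the kernel's SHMMAX limit probably
needs raising to match.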
SNIP
> # Non-shared Memory Sizes
> #
> #sort_mem = 512 # min 32
> #vacuum_mem = 8192 # min 1024
Try setting your sort_mem a little higher. It's measured in k, so 8192
would be 8 megs.
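Something like this (8192 is just the example value from above; keep in
mind sort_mem is allocated per sort, per backend, so it multiplies with
concurrent connections -- the vacuum_mem bump is an assumed, modest one):

    sort_mem = 8192        # 8 megs per sort, measured in k
    vacuum_mem = 16384     # 16 megs for vacuum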
> # Write-ahead log (WAL)
> #
> #wal_files = 0 # range 0-64
> #wal_sync_method = fsync # the default varies across platforms:
> # # fsync, fdatasync, open_sync, or open_datasync
> #wal_debug = 0 # range 0-16
> #commit_delay = 0 # range 0-100000
> #commit_siblings = 5 # range 1-1000
> #checkpoint_segments = 3 # in logfile segments (16MB each), min 1
> #checkpoint_timeout = 300 # in seconds, range 30-3600
> #fsync = true
If you're doing a lot of writing, look at using more than one WAL file and
putting the pg_xlog directory on another drive. You have to shut down the
postmaster, copy over the pg_xlog dir, move the one in $PGDATA out of the
way, and link to the "new" directory, then restart the postmaster.
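The steps look roughly like this (the paths are assumptions -- adjust
$PGDATA and the target drive for your install):

    pg_ctl stop -D $PGDATA                        # shut down the postmaster
    cp -Rp $PGDATA/pg_xlog /otherdrive/pg_xlog    # copy WAL to the other drive
    mv $PGDATA/pg_xlog $PGDATA/pg_xlog.old        # move the original out of the way
    ln -s /otherdrive/pg_xlog $PGDATA/pg_xlog     # link to the "new" directory
    pg_ctl start -D $PGDATA                       # restart the postmaster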
Also, if you're doing lots of writes, setting a higher commit_delay and
commit_siblings can help.
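For example (the delay value is illustrative, not measured -- you'd want to
benchmark it against your own write load):

    commit_delay = 10000     # microseconds to wait before flushing WAL
    commit_siblings = 5      # only delay if this many other xacts are active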
> #effective_cache_size = 1000 # default in 8k pages
If your machine has 512 megs of RAM, you want to see how much of it
(approximately) the OS is using as file cache/buffer. Divide that by 8k
and put that number into effective_cache_size.
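Worked example (the 300 megs is an assumed figure for illustration -- check
what top or free actually reports as cache/buffer on your box):

    # 300 megs of OS cache / 8k per page = 38400 pages
    effective_cache_size = 38400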
> #random_page_cost = 4
For machines with fast RAID subsystems, random_page_cost may need to be
lowered, somewhere between 1 and 2. If your whole dataset fits in memory,
set it to 1. I use 1.4 on my machine with 1.5 gigs.
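In postgresql.conf that's simply:

    random_page_cost = 1.4   # the value I use; benchmark your own workload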
> #cpu_tuple_cost = 0.01
> #cpu_index_tuple_cost = 0.001
> #cpu_operator_cost = 0.0025
SNIP
That's all I can think of. If you can afford more memory, that would be
your best upgrade right now.