Torsten Schulz wrote:
> Gaetano Mendola wrote:
>> Torsten Schulz wrote:
>>> Yes, I know: very difficult question, but I don't know what to do now.
>>> Our Server:
>>> Dual-CPU with 1.2 GHz
>>> 1.5 GB RAM
>>> Our problem: we run a community site. Between 19:00 and 21:00 we have
>>> >350 users online, but during that time the database is very slow, and
>>> we still have ~20-30% idle time per CPU.
>> May we know the Postgres version you are running and
>> see the queries that run slow?
> Postgres: 7.3.2
> Query: All queries
> max_connections = 1000 # Must be; if lower than 500 we get
> shared_buffers = 5000 # 2*max_connections, min 16
> max_fsm_relations = 1000 # min 10, fsm is free space map
> max_fsm_pages = 2000000 # min 1000, fsm is free space map
> max_locks_per_transaction = 64 # min 10
> wal_buffers = 2000 # min 4
> sort_mem = 32768 # min 32
> vacuum_mem = 32768 # min 1024
> fsync = false
> enable_seqscan = true
> enable_indexscan = true
> enable_tidscan = true
> enable_sort = true
> enable_nestloop = true
> enable_mergejoin = true
> enable_hashjoin = true
> effective_cache_size = 96000 # default in 8k pages
With 500 connections at the same time, 32MB for sort_mem can be too much.
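A back-of-the-envelope sketch (my arithmetic, not a measurement of the poster's server) of why those two settings clash: each backend can use up to sort_mem per sort, so at peak the worst case dwarfs the machine's 1.5 GB of RAM.

```python
# Worst-case sort memory vs. available RAM, using the posted settings.
sort_mem_kb = 32768      # sort_mem from the posted postgresql.conf (in KB)
max_connections = 1000   # max_connections from the same file
ram_mb = 1536            # the server's 1.5 GB of RAM

# If every backend runs just one sort at the same time:
worst_case_mb = sort_mem_kb * max_connections // 1024
print(worst_case_mb)           # 32000 MB
print(worst_case_mb > ram_mb)  # True: over 20x the physical RAM
```

Real concurrency is lower than max_connections, but even 50 simultaneous sorts would already exceed physical memory and push the box into swap.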
What do "iostat 1" and "vmstat 1" say?
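For readers unfamiliar with these tools, the CPU columns at the end of each vmstat line are the interesting part here. A small sketch that picks them out; the sample line is invented for illustration, and the layout assumes Linux procps vmstat, where us/sy/id/wa are the final four columns.

```python
# Pull the CPU columns (us, sy, id, wa) from one line of "vmstat 1" output.
# The sample line is made up for illustration, not from the poster's server.
sample = " 1  0      0  10312   8456 901234    0    0     5    42  312  498 35 12 28 25"

fields = sample.split()
# Final four columns on Linux procps vmstat: user, system, idle, I/O wait.
us, sy, idle, wa = (int(x) for x in fields[-4:])
print(f"user={us}% system={sy}% idle={idle}% iowait={wa}%")
```

A high `wa` alongside the reported 20-30% idle would point at the disks rather than the CPUs as the bottleneck.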
Try also reducing these costs:
random_page_cost = 2.5
cpu_tuple_cost = 0.005
cpu_index_tuple_cost = 0.0005
BTW, take a query and show us the output of EXPLAIN ANALYZE.