
Re: Performance problems with a higher number of clients

From: Alfranio Tavares Correia Junior <alfranio(at)lsd(dot)di(dot)uminho(dot)pt>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance problems with a higher number of clients
Date: 2003-12-12 02:35:16
Message-ID: 3FD92964.8040709@lsd.di.uminho.pt
Lists: pgsql-performance
Thanks for the advice,
The performance is a bit better now. Unfortunately, the machine cannot
handle more than roughly 200-250 users before it goes into swap hell.
I have to face the fact that I don't have enough memory.

I used the following configuration:

effective_cache_size = 65000 
shared_buffers = 10000         
random_page_cost = 2          
cpu_index_tuple_cost = 0.0005 
sort_mem = 512    -- I tested each query, as Jeff suggested, to see how
much memory its sorts need; nothing required more than this value
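
(For scale, assuming the default 8 kB page size: effective_cache_size =
65000 pages is about 508 MB of presumed kernel cache, shared_buffers =
10000 pages is about 78 MB, and sort_mem = 512 kB comes to roughly
125 MB in the worst case where all ~250 backends sort at once -- more if
a single query runs several sorts -- so the budget already approaches
the 1 GB of RAM in the box.)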

I tested the system with 100, 200, 300, 400, and 500 users, and finally
with 250. Up to ~250 users the system shows good response times and
there is almost no swapping.
During these experiments I also started psql and ran some queries by hand.
Unfortunately, even with ~250 users there is one query that takes too
long to finish; in fact, I canceled it after waiting five minutes
without seeing any result.

This is the query:

select count(distinct(s_i_id))
  from stock, order_line
 where ol_w_id = _xx_
   and ol_d_id = _xx_
   and ol_o_id between _xx_ and _xx_
   and s_w_id = ol_w_id
   and s_i_id = ol_i_id
   and s_quantity < _xx_;
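
For reference, the parameter values of the run shown below can be read
off the Index Cond and Filter lines of the plan; with them filled in,
the query was effectively:

select count(distinct(s_i_id))
  from stock, order_line
 where ol_w_id = 4
   and ol_d_id = 4
   and ol_o_id between 100 and 1000
   and s_w_id = ol_w_id
   and s_i_id = ol_i_id
   and s_quantity < 20;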

When the system has no load, after a full vacuum (vacuumdb -f), I can
execute the query, and the plan produced is as follows:
 Aggregate  (cost=49782.16..49782.16 rows=1 width=4) (actual time=52361.573..52361.574 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..49780.24 rows=768 width=4) (actual time=101.554..52328.913 rows=952 loops=1)
         ->  Index Scan using pk_order_line on order_line o  (cost=0.00..15779.32 rows=8432 width=4) (actual time=84.352..151.345 rows=8964 loops=1)
               Index Cond: ((ol_w_id = 4) AND (ol_d_id = 4) AND (ol_o_id >= 100) AND (ol_o_id <= 1000))
         ->  Index Scan using pk_stock on stock  (cost=0.00..4.02 rows=1 width=4) (actual time=5.814..5.814 rows=0 loops=8964)
               Index Cond: ((stock.s_w_id = 4) AND (stock.s_i_id = "outer".ol_i_id))
               Filter: (s_quantity < 20)
 Total runtime: 52403.673 ms
(8 rows)
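
The arithmetic in the plan shows where the time goes: the inner index
scan on stock runs once per outer row, 8964 times at about 5.814 ms per
probe, and 8964 x 5.814 ms ~= 52.1 s, i.e. essentially the whole 52.4 s
runtime. With ~5 million rows in stock, each probe presumably lands on
a cold page, so this looks like random I/O rather than CPU time.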

The tables are defined as follows:

--ROWS ~5000000
CREATE TABLE stock (
    s_i_id int NOT NULL ,
    s_w_id int NOT NULL ,
    s_quantity int NULL ,
    s_dist_01 char (24) NULL ,
    s_dist_02 char (24) NULL ,
    s_dist_03 char (24) NULL ,
    s_dist_04 char (24) NULL ,
    s_dist_05 char (24) NULL ,
    s_dist_06 char (24) NULL ,
    s_dist_07 char (24) NULL ,
    s_dist_08 char (24) NULL ,
    s_dist_09 char (24) NULL ,
    s_dist_10 char (24) NULL ,
    s_ytd int NULL ,
    s_order_cnt int NULL ,
    s_remote_cnt int NULL ,
    s_data char (50) NULL
);

--ROWS ~15196318
CREATE TABLE order_line (
    ol_o_id int NOT NULL ,
    ol_d_id int NOT NULL ,
    ol_w_id int NOT NULL ,
    ol_number int NOT NULL ,
    ol_i_id int NULL ,
    ol_supply_w_id int NULL ,
    ol_delivery_d timestamp NULL ,
    ol_quantity int NULL ,
    ol_amount numeric(6, 2) NULL ,
    ol_dist_info char (24) NULL
);

ALTER TABLE stock ADD
CONSTRAINT PK_stock PRIMARY KEY
        (
                s_w_id,
                s_i_id
        );
ALTER TABLE order_line  ADD
        CONSTRAINT PK_order_line PRIMARY KEY
        (
                ol_w_id,
                ol_d_id,
                ol_o_id,
                ol_number
        );
CREATE  INDEX IX_order_line ON order_line(ol_i_id);
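
As a rough check of whether the working set can fit in 1 GB of RAM, the
relation sizes can be estimated from the system catalogs. This is only
a sketch: relpages and reltuples are estimates refreshed by the last
VACUUM/ANALYZE, and the 8 kB page size is the default build setting.

-- relpages counts 8 kB blocks; both columns are estimates
-- maintained by VACUUM/ANALYZE.
SELECT relname, relpages, reltuples,
       relpages * 8 AS approx_kb
  FROM pg_class
 WHERE relname IN ('stock', 'order_line',
                   'pk_stock', 'pk_order_line', 'ix_order_line');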

Any suggestions?


Tom Lane wrote:

>Alfranio Correia Junior <alfranio(at)lsd(dot)di(dot)uminho(dot)pt> writes:
>>I am facing a problem trying to put 500 concurrent users accessing
>>a postgresql instance.
>
>I think you're going to need to buy more RAM.  1Gb of RAM means there
>is a maximum of 2Mb available per Postgres process before you start
>to go into swap hell --- in practice a lot less, since you have to allow
>for other things like the kernel and other applications.
>
>AFAIR TPC-C doesn't involve any complex queries, so it's possible you
>could run it with only 1Mb of workspace per process, but not when
>you've configured
>
>>sort_mem = 10240
>
>That's ten times more than your configuration can possibly support.
>(I don't recall whether TPC-C uses any queries that would sort, so
>it's possible this setting isn't affecting you; but if you are doing
>any sorts then it's killing you.)
>
>Bottom line is you probably need more RAM.
>
>			regards, tom lane


