
Re: LWLock contention: I think I understand the problem

From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org, jwbaker(at)acm(dot)org
Subject: Re: LWLock contention: I think I understand the problem
Date: 2002-01-03 19:55:10
Message-ID: 3C34B71E.2040207@tm.ee
Lists: pgsql-hackers, pgsql-odbc

Tom Lane wrote:

>>It would be interesting to test pgbench
>>using scaling factors that allowed most of the tables to sit in shared
>>memory buffers.  
>>
That's why I recommended testing on a RAM disk ;)

>>Then, we wouldn't be testing disk i/o and would be
>>testing more backend processing throughput.  (Tom, is that true?)
>>
>
>Unfortunately, at low scaling factors pgbench is guaranteed to look
>horrible because of contention for the "branches" rows.  
>
Not really! See the graph in my previous post - the database size affects
performance much more!

-s 1 is faster than -s 128 in all cases except 7.1.3, where it becomes
slower when the number of clients is > 16.

>I think that
>it'd be necessary to adjust the ratios of branches, tellers, and
>accounts rows to make it possible to build a small pgbench database
>that didn't show a lot of contention.
>
My understanding is that pgbench is meant to have some level of contention
and should be tested up to -c = 10 * -s, as each test client should
emulate a real "teller" and there are 10 tellers per unit of -s.
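For illustration, that rule of thumb could be sketched as below. The
10-tellers-per-scale-unit figure comes from pgbench's table setup; the
particular scale, client counts, and database name are mine, not from the
thread:

```shell
# Sketch: cap -c at 10 * -s so each simulated client maps to at most
# one "teller" row (pgbench creates 10 teller rows per unit of -s).
SCALE=4
MAX_CLIENTS=$((SCALE * 10))      # 40 clients for -s 4
for c in 1 2 4 8 16 32; do
  # Dry run: just print the pgbench invocations we would make.
  [ "$c" -le "$MAX_CLIENTS" ] && echo "pgbench -s $SCALE -c $c -t 100 bench"
done
```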

>BTW, I realized over the weekend that the reason performance tails off
>for more clients is that if you hold tx/client constant, more clients
>means more total updates executed, which means more dead rows, which
>means more time spent in unique-index duplicate checks. 
>
That's the point I tried to make by modifying Tatsuo's script to do what
you describe. I'm not smart enough to attribute it directly to index
lookups, but my gut feeling told me that dead tuples must be the culprit ;)

I first tried to counter the slowdown by running a concurrent new-type
vacuum process, but it made things 2x slower still (38 --> 20 tps for
-s 100 with the original value for -t).

> We know we want
>to change the way that works, but not for 7.2.  At the moment, the only
>way to make a pgbench run that accurately reflects the impact of
>multiple clients and not the inefficiency of dead index entries is to
>scale tx/client down as #clients increases, so that the total number of
>transactions is the same for all test runs.
>
Yes. My test also showed that the impact of per-client startup costs is
much smaller than the impact of the increased number of transactions.

I posted the modified script that does exactly that (512 total
transactions for 1-2-4-8-16-32-64-128 concurrent clients) about a week
ago, together with a graph of results.
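This is not the actual script I posted, but the idea can be sketched as
follows (the client counts come from the thread; the database name and
the echo-only dry run are illustrative):

```shell
# Sketch: hold the total transaction count at 512 and shrink -t
# (transactions per client) as -c (clients) grows, so every run does
# the same total amount of work regardless of client count.
TOTAL_TX=512
for c in 1 2 4 8 16 32 64 128; do
  t=$((TOTAL_TX / c))            # per-client transaction count
  echo "pgbench -c $c -t $t bench    # $((c * t)) transactions in total"
done
```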

------------------------
Hannu







