Tom Lane wrote:
>>It would be interesting to test pgbench
>>using scaling factors that allowed most of the tables to sit in shared
That's why I recommended testing on a RAM disk ;)
>>Then, we wouldn't be testing disk i/o and would be
>>testing more backend processing throughput. (Tom, is that true?)
>Unfortunately, at low scaling factors pgbench is guaranteed to look
>horrible because of contention for the "branches" rows.
Not really! See the graph in my previous post - the database size has a
much bigger effect!
-s 1 is faster than -s 128 in all cases except 7.1.3, where it becomes
slower once the number of clients is > 16.
>I think that
>it'd be necessary to adjust the ratios of branches, tellers, and
>accounts rows to make it possible to build a small pgbench database
>that didn't show a lot of contention.
My understanding is that pgbench is meant to have some level of
contention, and should be tested with up to -c = 10 times -s clients,
as each test client should emulate a real "teller" and there are 10
tellers per unit of -s.
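The "10 tellers per -s" ratio above can be sketched as a small shell
dry run (the helper name and the echo-only output are my own
illustration, not part of pgbench itself):

```shell
# Sketch: print the maximum recommended client count for a given
# pgbench scaling factor, following the 10-tellers-per-unit-of--s ratio.
# Echo-only dry run; the helper name is assumed for illustration.
max_clients() {
    s=$1
    echo $(( 10 * s ))
}

for s in 1 10 100; do
    echo "pgbench -s $s: test with up to -c $(max_clients $s)"
done
```

Going beyond that client count means more clients than simulated
tellers, so extra contention is expected by construction.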
>BTW, I realized over the weekend that the reason performance tails off
>for more clients is that if you hold tx/client constant, more clients
>means more total updates executed, which means more dead rows, which
>means more time spent in unique-index duplicate checks.
That's the point I tried to make by modifying Tatsuo's script. I'm not
smart enough to attribute it directly to index lookups, but my gut
feeling told me that dead tuples must be the culprit ;)
I first tried to counter the slowdown by running a concurrent new-type
vacuum, but it made things 2X slower still (38 --> 20 tps for -s 100
with the original number for -t).
> We know we want
>to change the way that works, but not for 7.2. At the moment, the only
>way to make a pgbench run that accurately reflects the impact of
>multiple clients and not the inefficiency of dead index entries is to
>scale tx/client down as #clients increases, so that the total number of
>transactions is the same for all test runs.
Yes. My test also showed that the impact of per-client startup costs is
smaller than the impact of the increased number of transactions.
I posted the modified script that does exactly that (512 total
transactions for 1-2-4-8-16-32-64-128 concurrent clients) about a week
ago, together with a graph of the results.
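The constant-total-transaction runs described above can be sketched as
a dry run that only prints the pgbench commands (the 512-transaction
budget matches the script mentioned; the exact flags and the echo-only
form are assumptions, and a real run needs an initialized pgbench
database):

```shell
# Sketch: hold the total transaction count at 512 while doubling the
# client count, so every run does the same total work and dead-tuple
# accumulation stays comparable across runs. Echo-only dry run.
TOTAL=512
for c in 1 2 4 8 16 32 64 128; do
    t=$(( TOTAL / c ))
    echo "pgbench -c $c -t $t   # $c clients x $t tx/client = $(( c * t )) total"
done
```

With -t scaled down this way, a tps drop at higher -c reflects client
concurrency rather than the growing pile of dead index entries that a
fixed tx/client count would produce.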
(Posted to pgsql-hackers, in response to Tom Lane)