From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pgbench: faster version of tpcb-like transaction
Date: 2017-08-26 23:59:14
Message-ID: CAMkU=1ytyy1yD0niHH0k0u-xd9EXs-Bi+Q2CgBT7LO4ZSrVqAQ@mail.gmail.com
Lists: pgsql-hackers
On Sat, Aug 26, 2017 at 4:28 PM, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> On Sat, Aug 26, 2017 at 3:53 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> > I get nearly a 3-fold speed up using the new transaction, from 9184 to
> > 26383 TPS, on an 8 CPU machine using scale 50 and:
> >
> > PGOPTIONS="-c synchronous_commit=off" pgbench -c32 -j32 -T60 -b tpcb-like
>
> What about with "-M prepared"? I think that most of us use that
> setting already, especially with CPU-bound workloads.
>
I still get a 2-fold improvement, from 13668 to 27036 TPS, when both
transactions are tested with -M prepared.
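For reference, the two runs being compared would look something like the
following (a sketch based on the command line quoted above; the file name
for the proposed faster script is hypothetical, since that script is what
this thread is discussing):

```shell
# Baseline: the stock built-in tpcb-like script, with prepared statements
PGOPTIONS="-c synchronous_commit=off" pgbench -c32 -j32 -T60 -M prepared -b tpcb-like

# Candidate: the proposed faster transaction, run as a custom script
# (the file name "tpcb-func.sql" is an assumption, not from the patch)
PGOPTIONS="-c synchronous_commit=off" pgbench -c32 -j32 -T60 -M prepared -f tpcb-func.sql
```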
I am surprised; I usually haven't seen that much difference for the default
queries between prepared and simple query mode, to the point that I got out
of the habit of testing with it. But back when I was testing with and
without it systematically, I did notice that the difference varied a lot
depending on hardware and concurrency. And of course, from version to
version, different bottlenecks come and go.
And thanks to Tom for letting me put -M at the end of the command line now.
Cheers,
Jeff