Re: Speedup twophase transactions

From: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Speedup twophase transactions
Date: 2015-12-10 12:41:39
Message-ID: D9CBF3A6-7CDA-467D-B462-E0BE05A5DC4B@postgrespro.ru
Lists: pgsql-hackers

Michael, Jeff, thanks for reviewing and testing.

> On 10 Dec 2015, at 02:16, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> wrote:
>
> This has better be InvalidXLogRecPtr if unused.

Yes, that’s better. Changed.

> On 10 Dec 2015, at 02:16, Michael Paquier <michael(dot)paquier(at)gmail(dot)com> wrote:

> + if (gxact->prepare_lsn)
> + {
> + XlogReadTwoPhaseData(gxact->prepare_xlogptr, &buf, NULL);
> + }
> Perhaps you mean prepare_xlogptr here?

Yes, my bad. Funnily enough, I made this mistake an even number of times: the code in CheckPointTwoPhase also uses prepare_lsn where it should use prepare_xlogptr, so the two errors cancelled each other out and the whole thing worked, which is why it survived my own tests and probably Jeff's as well.
I think the variable naming was misleading: in pg_xlogdump, for example, "lsn" points to the start of a record, but here the start was called xlogptr and the end was called lsn.
So I renamed both fields to prepare_start_lsn and prepare_end_lsn.

> On 10 Dec 2015, at 09:48, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> I've tested this through my testing harness which forces the database
> to go through endless runs of crash recovery and checks for
> consistency, and so far it has survived perfectly.

Cool! I think the patch is most vulnerable to the following type of workload: prepare a transaction, do a lot of work in the database to force checkpoints (or even recovery cycles), and then commit it.
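For reference, that pattern sketched in SQL (the table and gid names here are made up for illustration; WAL volume would be generated by real traffic rather than a bare CHECKPOINT):

```
BEGIN;
INSERT INTO test_table VALUES (1);
PREPARE TRANSACTION 'long_lived_gxact';
-- ...generate enough WAL here that several checkpoints,
-- or even a crash-recovery cycle, happen while the
-- transaction stays in the prepared state...
CHECKPOINT;
COMMIT PREPARED 'long_lived_gxact';
```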

> On 10 Dec 2015, at 09:48, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:

> Can you give the full command line? -j, -c, etc.

pgbench -h testhost -i && pgbench -h testhost -f 2pc.pgb -T 300 -P 1 -c 64 -j 16 -r

where 2pc.pgb is the same script as in the previous message.

Also, all of this applies to hosts with uniform memory. I tried running the patched postgres on a NUMA machine with 60 physical cores and the patch didn't change anything there. perf top shows that the main bottleneck is access to gxact, but on an ordinary host with one or two CPUs that access isn't even among the ten heaviest routines.

> On 10 Dec 2015, at 09:48, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:

> Why are you incrementing :scale ?

That's the funny part: overall 2PC speed depends on how you name your prepared transactions. Specifically, I tried using random numbers for the gids and it was slower than a monotonically increasing gid, probably because of the linear search by gid through the gxact array on commit. So I used :scale simply as a counter, because it is initialised at pgbench startup and a line like “\set scale :scale+1” works well. (Maybe there is a better way to do this in pgbench.)
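The script itself is in the previous message; as a rough sketch, a 2PC pgbench script using :scale as a gid counter could look something like this (the UPDATE and the gid format are made up for illustration, not the actual 2pc.pgb contents):

```
\set scale :scale+1
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :scale;
PREPARE TRANSACTION ':scale';
COMMIT PREPARED ':scale';
```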

> I very rapidly reach a point where most of the updates are against
> tuples that don't exist, and then get integer overflow problems.

Hmm, that's strange. Perhaps you set the scale to a value so big that 100000*:scale no longer fits in an int4? But I thought pgbench changed the aid columns to bigint when the scale is more than 20000.
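A quick sanity check on that arithmetic (the constants are the usual pgbench ones: 100000 pgbench_accounts rows per unit of scale, int4 max of 2^31-1):

```python
# pgbench creates 100000 pgbench_accounts rows per unit of scale,
# so the largest aid is 100000 * :scale.
INT4_MAX = 2**31 - 1      # 2147483647
ROWS_PER_SCALE = 100000

# First scale factor whose maximum aid no longer fits in int4:
overflow_scale = INT4_MAX // ROWS_PER_SCALE + 1
print(overflow_scale)     # 21475, above the 20000 bigint cutoff
```

So int4 overflow in aid shouldn't be reachable if the bigint switch really kicks in at scale 20000.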

Attachment Content-Type Size
2pc_xlog.v2.diff application/octet-stream 14.3 KB
unknown_filename text/plain 95 bytes
