From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Claudio Freire <klaussfreire(at)gmail(dot)com>, Юрий Соколов <funny(dot)falcon(at)gmail(dot)com>, Sokolov Yura <funny(dot)falcon(at)postgrespro(dot)ru>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Small improvement to compactify_tuples
I've been getting less and less excited about this patch, because I still
couldn't measure any above-the-noise performance improvement without
artificial exaggerations, and some cases actually seemed slower.
However, this morning I had an epiphany: why are we sorting at all?
There is no requirement that these functions preserve the physical
ordering of the tuples' data areas, only that the line-pointer ordering be
preserved. Indeed, reorganizing the data areas into an ordering matching
the line pointers is probably a good thing, because it should improve
locality of access in future scans of the page. This is trivial to
implement if we copy the data into a workspace area and back again, as
I was already proposing to do to avoid memmove. Moreover, at that point
there's little value in a separate compactify function at all: we can
integrate the data-copying logic into the line pointer scan loops in
PageRepairFragmentation and PageIndexMultiDelete, and get rid of the
costs of constructing the intermediate itemIdSortData arrays.
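The copy-through-workspace idea can be sketched roughly as follows. This is a simplified illustration, not the attached patch: LinePtr, PAGE_SIZE, and compactify_no_sort are hypothetical stand-ins for PostgreSQL's ItemIdData, BLCKSZ, and the loops in bufpage.c, and the page-header bookkeeping is omitted. The key point is that no sort is needed: we walk the line pointers in order, pack each tuple's data into a scratch buffer from the end of the page downward, and copy the packed area back in one memcpy, so the resulting physical order matches line-pointer order.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for ItemIdData and BLCKSZ */
typedef struct
{
    uint16_t lp_off;            /* offset of tuple data within the page */
    uint16_t lp_len;            /* length of tuple data (0 = unused) */
} LinePtr;

#define PAGE_SIZE 8192

static void
compactify_no_sort(char *page, LinePtr *lps, int nline)
{
    static char scratch[PAGE_SIZE];
    uint16_t    upper = PAGE_SIZE;

    /* Walk line pointers in order; no sort by physical offset needed */
    for (int i = 0; i < nline; i++)
    {
        if (lps[i].lp_len == 0)
            continue;           /* unused slot */
        upper -= lps[i].lp_len;
        /* scratch and page never overlap, so memcpy is safe */
        memcpy(scratch + upper, page + lps[i].lp_off, lps[i].lp_len);
        lps[i].lp_off = upper;  /* repoint the line pointer */
    }
    /* copy the packed data area back into the page in one memcpy */
    memcpy(page + upper, scratch + upper, PAGE_SIZE - upper);
}
```

In the real patch this logic is folded directly into the line-pointer scan loops of PageRepairFragmentation and PageIndexMultiDelete, which is what eliminates the itemIdSortData arrays.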
That led me to the attached patch, which is the first version of any
of this work that produces an above-the-noise performance win for me.
I'm seeing 10-20% gains on this modified version of Yura's original example:
psql -f test3setup.sql
pgbench -M prepared -c 3 -s 10000000 -T 300 -P 3 -n -f test3.sql
(sql scripts also attached below; I'm using 1GB shared_buffers and
fsync off, other parameters stock.)
However, there are a couple of objections that could be raised to this patch:
1. It's trading off per-byte work, in the form of an extra memcpy,
to save sorting work that has per-tuple costs. Therefore, the relatively
narrow tuples used in Yura's example offer a best-case scenario;
with wider tuples the performance might be worse.
2. On a platform with memmove not so much worse than memcpy as I'm
seeing on my RHEL6 server, trading memmove for memcpy might not be
such a win.
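The reason the workspace approach buys us memcpy in the first place is worth spelling out. In-place compaction slides tuple data within the same page, so source and destination can overlap, which requires memmove; copying through a separate scratch buffer guarantees disjoint regions, where plain memcpy is legal (and on some libc builds, such as the RHEL6 one mentioned above, noticeably faster). A minimal illustration, with hypothetical helper names:

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 8192

/* In-place slide: src and dst may overlap, so memmove is required */
static void
slide_in_place(char *page, size_t from, size_t to, size_t len)
{
    memmove(page + to, page + from, len);
}

/* Via scratch buffer: both copies have disjoint src/dst, memcpy is safe */
static void
slide_via_scratch(char *page, size_t from, size_t to, size_t len)
{
    static char scratch[PAGE_SIZE];

    memcpy(scratch, page + from, len);
    memcpy(page + to, scratch, len);
}
```

Both routines produce identical results; whether the memcpy pair beats the single memmove is exactly the platform-dependent question in point 2.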
To address point 1, I tried some measurements on the standard pgbench
scenario, which uses significantly wider tuples. In hopes of addressing
point 2, I also ran the measurements on a laptop running Fedora 25
(gcc 6.4.1, glibc 2.24); I haven't actually checked memmove vs memcpy
on that machine, but at least it's a reasonably late-model glibc.
What I'm getting from the standard pgbench measurements, on both machines,
is that this patch might be a couple percent slower than HEAD, but that is
barely above the noise floor so I'm not too sure about it.
So I think we should seriously consider the attached, but it'd be a
good idea to benchmark it on a wider variety of platforms and test
cases before committing.
regards, tom lane