| From: | Aleksander Alekseev <a(dot)alekseev(at)postgrespro(dot)ru> |
|---|---|
| To: | Andres Freund <andres(at)anarazel(dot)de> |
| Cc: | Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Patch: fix lock contention for HASHHDR.mutex |
| Date: | 2015-12-17 16:03:42 |
| Message-ID: | 20151217190342.07e4533a@fujitsu |
| Lists: | pgsql-hackers |
> I'd really like to see it being replaced by a queuing lock
> (i.e. lwlock) before we go there. And then maybe partition the
> freelist, and make nentries an atomic.
I believe I just implemented something like this (see attachment). The
idea is to manually partition the PROCLOCK hash table into
NUM_LOCK_PARTITIONS smaller, non-partitioned hash tables. Since these
tables are non-partitioned, no spinlock is used and there is no lock
contention.
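
To illustrate the idea outside the patch itself, here is a minimal, self-contained C sketch under simplified assumptions (the names PartitionedTable, table_for_hash, and insert_entry are hypothetical, and a pthread mutex stands in for the partition LWLock; the real patch works on dynahash tables in shared memory): each hash code is routed to one of NUM_LOCK_PARTITIONS independent tables, each with its own buckets and freelist, so no shared freelist spinlock like HASHHDR.mutex is needed.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define NUM_LOCK_PARTITIONS 16
#define BUCKETS_PER_TABLE   1024

typedef struct Entry
{
    uint32_t      key;
    struct Entry *next;
} Entry;

/* One small, non-partitioned table; the mutex plays the role of the
 * LWLock that already serializes access to this partition. */
typedef struct
{
    pthread_mutex_t lock;
    Entry          *buckets[BUCKETS_PER_TABLE];
    Entry          *freelist;   /* private freelist: no shared spinlock */
} PartitionedTable;

static PartitionedTable tables[NUM_LOCK_PARTITIONS];

static void
init_tables(void)
{
    for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
        pthread_mutex_init(&tables[i].lock, NULL);
}

/* Route a hash code to one of the independent tables. */
static PartitionedTable *
table_for_hash(uint32_t hashcode)
{
    return &tables[hashcode % NUM_LOCK_PARTITIONS];
}

/* Insert under the partition lock only; entries are recycled through
 * the table's own freelist, so there is no cross-partition contention. */
static void
insert_entry(uint32_t key, uint32_t hashcode)
{
    PartitionedTable *t = table_for_hash(hashcode);
    uint32_t bucket = hashcode % BUCKETS_PER_TABLE;
    Entry *e;

    pthread_mutex_lock(&t->lock);
    e = t->freelist;
    if (e != NULL)
        t->freelist = e->next;      /* reuse an entry freed in this table */
    else
        e = malloc(sizeof(Entry));  /* sketch only: the real code
                                     * preallocates entries in shmem */
    if (e != NULL)
    {
        e->key = key;
        e->next = t->buckets[bucket];
        t->buckets[bucket] = e;
    }
    pthread_mutex_unlock(&t->lock);
}

int
main(void)
{
    init_tables();
    insert_entry(42, 42 * 2654435761u); /* toy multiplicative hash */
    return 0;
}
```

Since an entry is always returned to the freelist of the table it came from, the per-partition lock that callers already hold is sufficient, and the single table-wide spinlock disappears.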
On a 60-core server we gain 3.5-4 times more TPS according to the
benchmark described above. As far as I understand, there is no
performance degradation in other cases (different CPUs, traditional
pgbench, etc.).
If this patch seems OK, I believe we could consider applying the same
change not only to the PROCLOCK hash table but to others as well.
| Attachment | Content-Type | Size |
|---|---|---|
| shard-proclock-hash-table.patch | text/x-patch | 13.5 KB |