|From:||"Jamison, Kirk" <k(dot)jamison(at)jp(dot)fujitsu(dot)com>|
|Subject:||[PATCH] Speedup truncates of relation forks|
Attached is a patch to speed up the truncation of relations.
This is also my first time contributing a patch of my own,
and I would gladly appreciate your feedback and advice.
Whenever we truncate a relation, the shared buffers are scanned
three times (once per fork), which can be time-consuming. This patch
improves the performance of relation truncation by first marking the
pages to be truncated in each relation fork, then truncating the forks
simultaneously, resulting in improved performance of VACUUM and
autovacuum operations and of their recovery.
B. Patch Details
The following functions were modified:
1. FreeSpaceMapTruncateRel() and visibilitymap_truncate()
a. CURRENT HEAD: These functions truncate the FSM pages and unused VM pages.
b. PATCH: Both functions now only mark the pages to truncate and return a block number.
- We used to call smgrtruncate() inside these functions; those calls are now moved into RelationTruncate() and smgr_redo().
- The tentative new names are MarkFreeSpaceMapTruncateRel() and visibilitymap_mark_truncate(). Feel free to suggest better ones.
2. RelationTruncate()
a. HEAD: Truncate the FSM and VM first, then write WAL, and lastly truncate the main fork.
b. PATCH: We now mark the FSM and VM pages first, write WAL, mark the MAIN fork pages, then truncate all forks (MAIN, FSM, VM) simultaneously.
3. smgr_redo()
a. HEAD: During XLOG replay, truncate the main fork and the relation, create a fake relcache entry for the FSM and VM, truncate the FSM, truncate the VM, then free the fake relcache entry.
b. PATCH: Mark the main fork's dirty buffers, create the fake relcache entry, mark the FSM and VM buffers, truncate the marked pages of the relation forks simultaneously, truncate the relation during XLOG replay, then free the fake relcache entry.
4. smgrtruncate(), DropRelFileNodeBuffers()
- The input arguments are changed to arrays of fork numbers and block numbers, plus int nforks (the size of the forkNum array).
- The pages of the relation forks are truncated simultaneously.
I also modified the function because it calls DropRelFileNodeBuffers(). However, it is dead code that could be removed;
I did not remove it for now because that is for the community, not me, to decide.
C. Performance Test
I set up synchronous streaming replication between a master and a standby.
autovacuum = off
wal_level = replica
max_wal_senders = 5
wal_keep_segments = 16
max_locks_per_transaction = 10000
shared_buffers = 128MB / 8GB / 24GB  (varied per run; see results below)
Objective: Measure VACUUM execution time while varying the shared_buffers size.
1. Create tables (e.g., 10,000 tables) and insert data into each.
2. DELETE FROM each table (e.g., all rows of the 10,000 tables).
3. psql -c "\timing on" (measures the execution time of each SQL statement)
4. VACUUM (whole database)
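The steps above can be sketched roughly as follows (database and table names are placeholders, shown for a single table; it assumes a cluster is already running with the settings listed earlier):

```shell
# Step 1: create a table and insert data (testdb/t1 are placeholder names)
psql -d testdb -c "CREATE TABLE t1 (id int, filler text);"
psql -d testdb -c "INSERT INTO t1
                   SELECT g, repeat('x', 100)
                   FROM generate_series(1, 100000) g;"

# Step 2: delete all rows so VACUUM has something to truncate
psql -d testdb -c "DELETE FROM t1;"

# Steps 3-4: time the whole-database VACUUM in one session
psql -d testdb <<'EOF'
\timing on
VACUUM;
EOF
```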
If you want to test with a large number of relations,
you may use the stored functions I used here:
[HEAD]
1) 128MB shared_buffers = 48.885 s
2) 8GB shared_buffers = 5 min 30.695 s
3) 24GB shared_buffers = 14 min 13.598 s
[PATCHED]
1) 128MB shared_buffers = 42.736 s
2) 8GB shared_buffers = 2 min 26.464 s
3) 24GB shared_buffers = 5 min 35.848 s
The performance significantly improved compared to HEAD,
especially for large shared buffers.
I would appreciate hearing your thoughts, comments, and advice.
Thank you in advance.