From: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Cc: Vladimir Leskov <vladimirlesk(at)yandex-team(dot)ru>
I was reviewing Paul Ramsey's TOAST patch and noticed that there is considerable room for improvement in the performance of pglz compression and decompression.
Vladimir and I started to investigate ways to speed up byte copying and eventually created a test suite to measure the performance of compression and decompression.
It is an extension with a single function, test_pglz(), which runs tests across different:
1. Data payloads
2. Compression implementations
3. Decompression implementations
Currently we test mostly decompression improvements, against two WALs and one data file taken from a pgbench-generated database. Any suggestions on more relevant data payloads are very welcome.
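For reference, the benchmark boils down to timing each (payload, implementation) pair; below is a minimal sketch of that shape, assuming POSIX clock_gettime. The names (decompressor, time_decompress) are illustrative, not the extension's actual code:

    #include <stdint.h>
    #include <time.h>

    /* One decompressor variant under test; signature is illustrative. */
    typedef int32_t (*decompressor) (const char *src, int32_t slen,
                                     char *dst, int32_t rawlen);

    /* Time `iterations` runs of one variant against one payload. */
    static double
    time_decompress(decompressor d, const char *src, int32_t slen,
                    char *dst, int32_t rawlen, int iterations)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < iterations; i++)
            d(src, slen, dst, rawlen);
        clock_gettime(CLOCK_MONOTONIC, &end);

        return (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9;
    }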
Tests on my laptop show that our decompression implementation can be from 15% to 50% faster.
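To illustrate what "boosting byte copying" can mean here: the stock pglz decompressor copies matched bytes one at a time, which is required when the match overlaps the output position but unnecessarily slow otherwise. Here is a minimal sketch of the idea, not the actual patch; copy_match is a hypothetical helper:

    #include <string.h>

    /*
     * Sketch of the match-copy step in LZ-style decompression.
     * dp points just past the output produced so far; the match
     * starts `off` bytes back and is `len` bytes long.
     */
    static inline void
    copy_match(unsigned char *dp, int off, int len)
    {
        if (off >= len)
        {
            /* Source and destination do not overlap: one bulk copy. */
            memcpy(dp, dp - off, len);
        }
        else
        {
            /*
             * Overlapping match (e.g. off == 1 replicates one byte).
             * A byte-at-a-time loop preserves the run-replication
             * semantics that a plain memcpy would break.
             */
            while (len-- > 0)
            {
                *dp = dp[-off];
                dp++;
            }
        }
    }

Whether the extra branch pays off depends on the typical match lengths and offsets in real payloads, which is exactly why relevant data sets matter.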
I've also noticed that compression is extremely slow: around 30 times slower than decompression. I believe we can do something about that.
We focus only on speeding up the existing codec, without considering other compression algorithms.
Any comments are much appreciated.
The most important questions are:
1. What are relevant data sets?
2. What are relevant CPUs? I only have Xeon-based servers and a few laptops/desktops with Intel CPUs.
3. If compression is 30 times slower, should we rather focus on compression instead of decompression?
Best regards, Andrey Borodin.