From: | Michail Nikolaev <michail(dot)nikolaev(at)gmail(dot)com> |
---|---|
To: | Michael Paquier <michael(at)paquier(dot)xyz> |
Cc: | Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tels <nospam-pg-abuse(at)bloodgate(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: [WIP PATCH] Index scan offset optimisation using visibility map |
Date: | 2018-10-02 06:55:45 |
Message-ID: | CANtu0og-AU-KtHCxmOPUfqpVWxFi6UmsHfrFMpTh_3t61FejFQ@mail.gmail.com |
Lists: | pgsql-hackers |
Hello.
> Okay, it has been more than a couple of days and the patch has not been
> updated, so I am marking as returned with feedback.
Yes, more than a couple of days have passed, but there has also been almost no
feedback since 20 Mar, after the patch design was changed :)
But seriously - I am still working on it and was digging into it just last night
( https://github.com/michail-nikolaev/postgres/commits/index_only_fetch )
The main issue currently is cost estimation. In the right case (a 10m-row relation,
0.5 index correlation, 0.1 selectivity for the filter) it works like a charm,
with a 200%-400% performance boost.
But the same case with 1.0 selectivity gives 96% compared to baseline. So,
to do correct cost estimation I need the correct selectivity of the filter
predicate.
Currently I am thinking of calculating it on the fly and switching to the new
method only if the selectivity turns out to be small. But it feels a little awkward.
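To illustrate the idea, here is a minimal sketch (not code from the patch; the
struct, function, and threshold values are hypothetical) of what an on-the-fly
selectivity check could look like: track how many index tuples have passed the
filter so far, and fall back to the baseline scan once the observed selectivity
exceeds a threshold, since at selectivity ~1.0 the optimisation was measured at
~96% of baseline:

```c
#include <stdbool.h>

/*
 * Hypothetical runtime tracker: decide at execution time whether to
 * keep using the visibility-map-based skip or fall back to the plain
 * index scan, based on observed filter selectivity so far.
 */
typedef struct SelectivityTracker
{
    long    rows_seen;      /* index tuples examined so far */
    long    rows_passed;    /* tuples that passed the filter */
    double  threshold;      /* fall back above this selectivity */
    long    min_sample;     /* don't decide before this many rows */
} SelectivityTracker;

static bool
keep_using_vm_skip(const SelectivityTracker *st)
{
    /* Not enough data yet: keep the optimistic strategy. */
    if (st->rows_seen < st->min_sample)
        return true;

    /* Observed selectivity below threshold: the skip still pays off. */
    return ((double) st->rows_passed / (double) st->rows_seen)
           < st->threshold;
}
```

The awkward part, as noted above, is that this makes the executor second-guess
the planner's choice mid-scan rather than getting the selectivity estimate
right up front.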
Thanks,
Michail.