Re: Proposal: Improve bitmap costing for lossy pages

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Improve bitmap costing for lossy pages
Date: 2017-06-08 14:44:05
Message-ID: CAFiTN-toFL3kN8hT1NDygqPU8H_dtDUugSy_CZqg9nSD2m=vFQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, May 18, 2017 at 8:07 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

Thanks for the feedback and sorry for the delayed response.

> You might need to adjust effective_cache_size.

You are right. But effective_cache_size affects the number of
pages_fetched only when the bitmap heap scan is used as a parameterized
path (i.e. on the inner side of a nested loop). In our case, where we
see the wrong number of pages being estimated (Q10), it is a
non-parameterized path. I have also tested with a high
effective_cache_size but did not observe any change.
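
To make that concrete, here is a small standalone sketch (my own
simplification, not the actual costsize.c code; the function and
parameter names are just for illustration) of how the single-scan
estimate is computed. Note that effective_cache_size does not appear
anywhere in this path:

#include <stdio.h>

/*
 * Simplified sketch of the loop_count = 1 (non-parameterized) estimate:
 * pages_fetched = 2*T*t / (2*T + t), capped at T.
 * T = heap pages in the relation, tuples_fetched = tuples the bitmap is
 * expected to visit.  effective_cache_size plays no role here.
 */
static double
bitmap_pages_nonparam(double T, double tuples_fetched)
{
    double pages_fetched;

    pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
    if (pages_fetched > T)
        pages_fetched = T;
    return pages_fetched;
}

int
main(void)
{
    /* e.g. a 100000-page heap where the bitmap is expected to hit 50000 tuples */
    printf("estimated pages: %.0f\n", bitmap_pages_nonparam(100000.0, 50000.0));
    return 0;
}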

> The Mackert and Lohman
> formula isn't exactly counting unique pages fetched.

Right.

> It will count
> the same page twice if it thinks the page will be evicted from the
> cache after the first fetch and before the second one.

That only happens when loop_count > 1; if loop_count = 1, it does not
seem to consider effective_cache_size at all. But, even then, multiple
tuples can fall on the same page.
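
For reference, here is a rough transcription of the Mackert and Lohman
approximation as I understand it from index_pages_fetched (again just a
sketch with made-up names; b stands for the cache size in pages derived
from effective_cache_size). It shows that pages start being counted
more than once only when the total fetches exceed what the cache is
assumed to hold:

#include <stdio.h>

/*
 * Rough sketch of the Mackert and Lohman approximation (not the exact
 * index_pages_fetched code).  T = pages in the relation, total_tuples =
 * tuples fetched across all loops, b = cache size in pages.
 */
static double
mackert_lohman_pages(double T, double total_tuples, double b)
{
    double pages_fetched;

    if (T <= b)
    {
        /* Whole table fits in cache: no page is ever fetched twice. */
        pages_fetched = (2.0 * T * total_tuples) / (2.0 * T + total_tuples);
        if (pages_fetched > T)
            pages_fetched = T;
    }
    else
    {
        double lim = (2.0 * T * b) / (2.0 * T - b);

        if (total_tuples <= lim)
            pages_fetched = (2.0 * T * total_tuples) / (2.0 * T + total_tuples);
        else
            /*
             * Beyond this point evicted pages get re-fetched, so the
             * result can exceed T.
             */
            pages_fetched = b + (total_tuples - lim) * (T - b) / T;
    }
    return pages_fetched;
}

int
main(void)
{
    /* 100000-page heap, 500000 total tuple fetches, 16384-page cache */
    printf("pages fetched: %.0f\n",
           mackert_lohman_pages(100000.0, 500000.0, 16384.0));
    return 0;
}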

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
