Re: Proposal: Improve bitmap costing for lossy pages

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Improve bitmap costing for lossy pages
Date: 2017-05-18 14:37:33
Message-ID: CA+TgmoaJOTG+eP5KYP+tK-1XW=6c+WzA_UgA2_P6MnWGTA04-A@mail.gmail.com
Lists: pgsql-hackers

On Thu, May 18, 2017 at 2:52 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> Most of the queries show decent improvement; however, Q14 shows a
> regression at work_mem = 4MB. On analysing this case, I found that the
> number of pages_fetched calculated by the Mackert and Lohman formula
> is very high (1112817) compared to the actual number of unique heap
> pages fetched (293314). Therefore, while costing the bitmap scan using
> 1112817 pages and 4MB of work_mem, we predicted that even after
> lossifying all the pages it cannot fit into work_mem, and hence a
> bitmap scan was not selected.

You might need to adjust effective_cache_size. The Mackert and Lohman
formula isn't exactly counting unique pages fetched. It will count
the same page twice if it thinks the page will be evicted from the
cache after the first fetch and before the second one.
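
For reference, the approximation the planner uses looks roughly like
this (a sketch only, with illustrative variable names; the real thing
is index_pages_fetched() in costsize.c):

#include <math.h>

/*
 * Sketch of the Mackert-Lohman approximation:
 *   T = number of heap pages in the relation
 *   N = number of tuples fetched (index entries visited)
 *   b = cache size available to this relation, in pages
 *       (derived from effective_cache_size)
 */
static double
ml_pages_fetched(double N, double T, double b)
{
    double      pages_fetched;

    if (T <= b)
    {
        /* Table fits in cache: no page is ever counted twice. */
        pages_fetched = (2.0 * T * N) / (2.0 * T + N);
        if (pages_fetched > T)
            pages_fetched = T;
    }
    else
    {
        /* Cache is smaller than the table: re-fetches are charged. */
        double      lim = (2.0 * T * b) / (2.0 * T - b);

        if (N <= lim)
            pages_fetched = (2.0 * T * N) / (2.0 * T + N);
        else
            pages_fetched = b + (N - lim) * (T - b) / T;
    }
    return ceil(pages_fetched);
}

Once T exceeds b, the formula starts charging for re-fetches of pages
it assumes have been evicted, so with a small effective_cache_size the
estimate can be far above the number of distinct heap pages; raising
effective_cache_size pushes the estimate back toward the distinct-page
count.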

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
