Re: Proposal: Improve bitmap costing for lossy pages

From: amul sul <sulamul(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Alexander Kuzmenkov <a(dot)kuzmenkov(at)postgrespro(dot)ru>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Improve bitmap costing for lossy pages
Date: 2017-11-09 08:55:24
Message-ID: CAAJ_b94+TAaBj4VeYGuEJUGdQSV1X4KfLhNR8pqrJJZ1Rc=aww@mail.gmail.com
Lists: pgsql-hackers

Hi Dilip,

v6 patch:
42 + /*
43 + * Estimate number of hashtable entries we can have within maxbytes. This
44 + * estimates the hash cost as sizeof(PagetableEntry).
45 + */
46 + nbuckets = maxbytes /
47 + (sizeof(PagetableEntry) + sizeof(Pointer) + sizeof(Pointer));

It took me a little while to understand this calculation. You have moved this
code from tbm_create(), but I think you should move the following
comment as well:

tidbitmap.c:
276 /*
277 * Estimate number of hashtable entries we can have within maxbytes. This
278 * estimates the hash cost as sizeof(PagetableEntry), which is good enough
279 * for our purpose. Also count an extra Pointer per entry for the arrays
280 * created during iteration readout.
281 */
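
Something like the following is what I have in mind -- just a sketch that
combines the two hunks above, so the moved code in v6 keeps the full
explanation of why an extra Pointer per entry is counted:

    /*
     * Estimate number of hashtable entries we can have within maxbytes. This
     * estimates the hash cost as sizeof(PagetableEntry), which is good enough
     * for our purpose. Also count an extra Pointer per entry for the arrays
     * created during iteration readout.
     */
    nbuckets = maxbytes /
        (sizeof(PagetableEntry) + sizeof(Pointer) + sizeof(Pointer));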

Regards,
Amul
