From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Rahila Syed <rahilasyed90(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Anastasia Lubennikova <lubennikovaav(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Rahila Syed <rahilasyed(dot)90(at)gmail(dot)com>
Subject: Re: Parallel Index Scans
Date: 2017-02-01 05:58:55
Message-ID: CAA4eK1JARdXXZpsaaSjnBDtzhfJW5RyrfRBujNesKNbbAJmHuQ@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jan 31, 2017 at 5:53 PM, Rahila Syed <rahilasyed90(at)gmail(dot)com> wrote:
> Hello,
>
>>Agreed that it makes sense to consider only the number of pages to
>>scan for the computation of parallel workers. I think for an index scan
>>we should consider both index and heap pages that need to be scanned
>>(the costing of an index scan considers both index and heap pages). I
>>think considering heap pages matters more when the finally selected
>>rows are scattered across heap pages or we need to apply a filter on
>>rows after fetching them from the heap. OTOH, we can consider just the
>>pages in the index, as that is where the parallelism mainly works.
> IMO, considering just index pages will give a better estimate of the
> work to be done in parallel, as the amount of work (the number of pages
> divided amongst workers) is independent of the number of heap pages
> scanned.
>
Yeah, I understand that point and I can see there is a strong argument
for doing it that way, but let's wait and see what others, including
Robert, have to say about this point.
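
To make the numbers concrete, here is a minimal sketch of the kind of
log-scaled computation being discussed, driven by index pages alone.
This is an illustration, not the patch itself; the function name
compute_index_scan_workers, the min_parallel_index_scan_size threshold,
and the tripling heuristic are all assumptions for the example:

/*
 * Hypothetical sketch: pick a worker count from the number of index
 * pages alone, adding one worker each time the page count triples
 * past a minimum threshold.  All names here are illustrative.
 */
static int
compute_index_scan_workers(BlockNumber index_pages, int max_workers)
{
	int			parallel_workers;
	uint64		threshold = min_parallel_index_scan_size;

	/* Too few index pages to justify starting any workers. */
	if (index_pages < threshold)
		return 0;

	/* One worker, plus one more per tripling of the page count. */
	parallel_workers = 1;
	while (index_pages >= threshold * 3)
	{
		parallel_workers++;
		threshold *= 3;
	}

	return Min(parallel_workers, max_workers);
}

For example, with a 1000-page index and a threshold of 8 pages, this
would pick 5 workers (one at 8 pages, plus one each at 24, 72, 216,
and 648 pages), subject to the max_workers cap. Note that heap pages
play no part in this version, which is exactly the point under
discussion.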
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com