Re: Gather Merge

From: Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Gather Merge
Date: 2017-01-17 11:49:57
Message-ID: CAGPqQf1OoZ-izrNKY1w78TN9EzgXmA2MrhyuBg4JkanV61w__A@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jan 13, 2017 at 10:52 AM, Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>
wrote:

>
>
> On Thu, Jan 12, 2017 at 8:50 AM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
>
>> On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
>> wrote:
>> > On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia <
>> rushabh(dot)lathia(at)gmail(dot)com>
>> > wrote:
>> >> PFA latest patch with fix as well as few cosmetic changes.
>> >
>> > Moved to next CF with "needs review" status.
>>
>> I spent quite a bit of time on this patch over the last couple of
>> days. I was hoping to commit it, but I think it's not quite ready for
>> that yet and I hit a few other issues along the way. Meanwhile,
>> here's an updated version with the following changes:
>>
>> * Adjusted cost_gather_merge because we don't need to worry about less
>> than 1 worker.
>> * Don't charge double maintenance cost of the heap per 34ca0905. This
>> was pointed out previously and Rushabh said it was fixed, but it wasn't
>> fixed in v5.
>> * cost_gather_merge claimed to charge a slightly higher IPC cost
>> because we have to block, but didn't. Fix it so it does.
>> * Move several hunks to more appropriate places in the file, near
>> related code or in a more logical position relative to surrounding
>> code.
>> * Fixed copyright dates for the new files. One said 2015, one said 2016.
>> * Removed unnecessary code from create_gather_merge_plan that tried to
>> handle an empty list of pathkeys (shouldn't happen).
>> * Make create_gather_merge_plan more consistent with
>> create_merge_append_plan. Remove make_gather_merge for the same
>> reason.
>> * Changed generate_gather_paths to generate gather merge paths. In
>> the previous coding, only the upper planner nodes ever tried to
>> generate gather merge nodes, but that seems unnecessarily limiting,
>> since it could be useful to generate a gathered path with pathkeys at
>> any point in the tree where we'd generate a gathered path with no
>> pathkeys.
>> * Rewrote generate_ordered_paths() logic to consider only the one
>> potentially-useful path not now covered by the new code in
>> generate_gather_paths().
>> * Reverted changes in generate_distinct_paths(). I think we should
>> add something here but the existing logic definitely isn't right
>> considering the change to generate_gather_paths().
>> * Assorted cosmetic cleanup in nodeGatherMerge.c.
>> * Documented the new GUC enable_gathermerge.
>> * Improved comments. Dropped one that seemed unnecessary.
>> * Fixed parts of the patch to be more pgindent-clean.
>>
>>
> Thanks Robert for hacking into this.
>
>
>> Testing this against the TPC-H queries at 10GB with
>> max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
>> random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
>> demoralizing results. Only Q17, Q4, and Q8 picked Gather Merge, and
>> of those only Q17 got faster. Investigating this led to me realizing
>> that join costing for parallel joins is all messed up: see
>> https://www.postgresql.org/message-id/CA+TgmoYt2pyk2CTyvYCtFySXN=jsorGh8_MJTTLoWU5qkJOkYQ(at)mail(dot)gmail(dot)com
>>
>> With that patch applied, in my testing, Gather Merge got picked for
>> Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
>> little slower instead of a little faster. Here are the timings --
>> these are with EXPLAIN ANALYZE, so take them with a grain of salt --
>> first number is without Gather Merge, second is with Gather Merge:
>>
>> Q3 16943.938 ms -> 18645.957 ms
>> Q4 3155.350 ms -> 4179.431 ms
>> Q5 13611.484 ms -> 13831.946 ms
>> Q6 9264.942 ms -> 8734.899 ms
>> Q7 9759.026 ms -> 10007.307 ms
>> Q8 2473.899 ms -> 2459.225 ms
>> Q10 13814.950 ms -> 12255.618 ms
>> Q17 49552.298 ms -> 46633.632 ms
>>
>>
> This is strange; I will re-run the tests and post the results soon.
>
>
Here are the benchmark numbers I got with the latest (v6) patch:

- max_worker_processes = DEFAULT (8)
- max_parallel_workers_per_gather = 4
- A cold-cache environment was ensured: before every query execution, the
server was stopped and the OS caches were dropped.
- The reported execution times (in ms) are the median of 3 executions.
- power2 machine with 512GB of RAM
- Default postgresql.conf otherwise
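
The per-query medians above could be computed with a tiny helper along these
lines (an illustrative sketch; the actual harness used for these runs is not
shown in this thread, and the run values below are hypothetical):

```python
# Illustrative sketch: report the median of 3 timed executions, as in
# the methodology above. The timings here are hypothetical examples.
from statistics import median

def median_of_runs(timings_ms):
    """Return the median execution time (ms) across repeated runs."""
    return median(timings_ms)

three_runs = [49850.1, 49888.3, 50012.7]  # hypothetical 3 cold-cache runs
print(median_of_runs(three_runs))  # -> 49888.3
```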

Timing with v6 patch on REL9_6_STABLE branch
(last commit: 8a70d8ae7501141d283e56b31e10c66697c986d5).

Query 3: 49888.300 -> 45914.426
Query 4: 8531.939 -> 7790.498
Query 5: 40668.920 -> 38403.658
Query 9: 90922.825 -> 50277.646
Query 10: 45776.445 -> 39189.086
Query 12: 21644.593 -> 21180.402
Query 15: 63889.061 -> 62027.351
Query 17: 142208.463 -> 118704.424
Query 18: 244051.155 -> 186498.456
Query 20: 212046.605 -> 159360.520

Timing with v6 patch on master branch:
(last commit: 0777f7a2e8e0a51f0f60cfe164d538bb459bf9f2)

Query 3: 45261.722 -> 43499.739
Query 4: 7444.630 -> 6363.999
Query 5: 37146.458 -> 37081.952
Query 9: 88874.243 -> 50232.088
Query 10: 43583.133 -> 38118.195
Query 12: 19918.149 -> 20414.114
Query 15: 62554.860 -> 61039.048
Query 17: 131369.235 -> 111587.287
Query 18: 246162.686 -> 195434.292
Query 20: 201221.952 -> 169093.834
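
To put the master-branch numbers in perspective, the relative change can be
computed like this (a quick sketch using a few of the timings reported above;
a positive percentage means faster with the patch):

```python
# Quick sketch: relative change for a few of the master-branch timings
# reported above (before -> after, in milliseconds).
timings_ms = {
    "Q3": (45261.722, 43499.739),
    "Q9": (88874.243, 50232.088),
    "Q12": (19918.149, 20414.114),
    "Q18": (246162.686, 195434.292),
}

def improvement_pct(before, after):
    """Percentage reduction in runtime; positive means faster with the patch."""
    return (before - after) / before * 100.0

for query, (before, after) in timings_ms.items():
    print(f"{query}: {improvement_pct(before, after):+.1f}%")
```

By this measure Q9 improves by roughly 43%, while Q12 regresses slightly.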

Looking at these results, it seems the patch is good to go ahead.
I did notice that in your TPC-H run, queries 9, 18 and 20 did not pick
a Gather Merge plan, and those are the queries that benefit the most
from Gather Merge. Another observation: if work_mem is set high, some
queries end up picking Hash Aggregate even though Gather Merge performs
better (I verified that manually by forcing Gather Merge).
I am still looking into this issue.
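
For anyone wanting to reproduce the forcing experiment, a session along these
lines should work (a sketch, assuming the v6 patch is applied; the patch adds
the enable_gathermerge GUC, and turning off enable_hashagg is one way to steer
the planner away from Hash Aggregate):

```sql
-- Sketch: force the planner toward a Gather Merge plan for comparison.
-- Assumes the v6 patch (which adds enable_gathermerge) is applied.
SET max_parallel_workers_per_gather = 4;
SET enable_gathermerge = on;   -- on by default; shown for clarity
SET enable_hashagg = off;      -- discourage the Hash Aggregate plan
EXPLAIN (ANALYZE, COSTS OFF) SELECT ...;  -- the TPC-H query under test
```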

Thanks,

--
Rushabh Lathia
www.EnterpriseDB.com
