Re: POC: GROUP BY optimization

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Teodor Sigaev <teodor(at)sigaev(dot)ru>, Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: GROUP BY optimization
Date: 2018-06-09 18:09:43
Message-ID: e33558a8-6528-f0c4-8f59-04ff7e2e529c@2ndquadrant.com
Lists: pgsql-hackers

On 06/07/2018 06:22 PM, Teodor Sigaev wrote:
>>> Yes. But again, this description is a bit short. First, it works after
>>> the first patch and might get some preordered leading pathkeys. Second,
>>> it tries to match the ORDER BY clause order if there are no preordered
>>> leading pathkeys from the first patch (this was introduced in v7). And
>>> third, if there is a tail of unmatched pathkeys left over from the
>>> previous stages, it will reorder that tail.
>>>
>>
>> OK, I haven't looked at v7 yet, but if I understand correctly it tries
>> to maintain the ordering as much as possible? Does that actually help? I
>> mean, the incremental sort patch allows the sorting to happen by pieces,
>> but here we still need to sort all the data, right?
>>
>> Can you give an example demonstrating the benefit?
>
> See tst.sql. Queries are marked with opt (optimization is on) and noopt.
>
> Query 1: select count(*) from btg group by v, r;
> Query 2: select count(*) from btg group by n, v, r order by n;
>
> For both queries it's possible to reorder the v and r columns; the n
> column has a single distinct value.
>
> On my laptop:
> Query 1opt vs 1noopt: 3177.500 ms vs 6604.493 ms
>       2opt vs 2noopt: 5800.307 ms vs 7486.967 ms
>
> So, what we see:
> 1) For query 1 the optimization gives 2x better performance, for query
> 2 only 30%. If column 'n' were unique, the times for queries 1 and 2
> should be the same. We could add a check for preordered pathkeys in
> get_cheapest_group_keys_order(), and if estimate_num_groups(reordered
> pathkeys) is close to 1, skip reordering the tail of the pathkeys.
>

OK, those are nice improvements, although the examples are somewhat
extreme (only 1 or 2 distinct values). So yes, in some cases this
reordering clearly makes a big difference, but I still think we need to
improve the heuristics to minimize regressions.

I see v9 is now calling estimate_num_groups(), so it already benefits
from extended statistics. Nice! I think the algorithm is more or less
the greedy option I proposed in one of the earlier messages - I don't
know if it's sufficient or not, but the only other idea I have is
essentially an exhaustive search through all possible permutations.

So that's a nice improvement, although I think we should also consider
non-uniform distributions (using the per-column MCV lists).

> 2) Planning cost is the same for all queries. So, cost_sort() doesn't
> even take the number of columns into account.
>

Yeah. As I wrote before, I think we'll need to come up with a model for
comparison_cost depending on the column order, which should make costs
for those paths different.

>> FWIW I think it would be useful to have "development GUC" that would
>> allow us to enable/disable these options during development, because
>> that makes experiments much easier. But then remove them before commit.
> Added in v9: debug_enable_group_by_match_order_by and
> debug_enable_cheapest_group_by. I also checked compatibility with the
> incremental sort patch, and everything works except a small merge
> conflict, which could be resolved right before committing.
>

OK. I think we should also have a debug GUC for the first patch (i.e.
one that would allow reordering group_pathkeys to match the input path).

Regarding the two GUCs introduced in v9, I'm not sure I 100% understand
what they are doing. Let me try explaining what I think they do:

1) debug_cheapest_group_by - Reorder the GROUP BY clauses using ndistinct
stats for the columns, placing columns with more distinct values first (so
that we don't need to compare the following columns as often).

2) debug_group_by_match_order_by - Try to reorder the GROUP BY clauses
to match the ORDER BY, so that the group aggregate produces results in
the desired output (and we don't need an explicit Sort).

FWIW I have doubts about the debug_group_by_match_order_by optimization. The
trouble is that it's only based on comparing the pathkeys lists, and
entirely ignores that the two result sets may operate on very different
numbers of rows.

Consider for example "SELECT * FROM t GROUP BY a,b,c,d ORDER BY c,d"
where table "t" has 1B rows, but there are only ~10k rows in the result
(which is fairly common in fact tables in star-schema DBs). IIUC the
optimization will ensure "c,d" is placed first in the GROUP BY, and then
we reorder "a,b" using ndistinct. But this may be much less efficient
than simply reordering (a,b,c,d) per ndistinct and then adding explicit
Sort on top of that (because on 10k rows that's going to be cheap).

So I think this somewhat contradicts the order-by-ndistinct optimization
and may easily undo its benefits.

The other "issue" I have with get_cheapest_group_keys_order() is how it
interacts with group_keys_reorder_by_pathkeys(). I mean, we first call

n_preordered = group_keys_reorder_by_pathkeys(path->pathkeys, ...);

so the reordering tries to maintain ordering of the input path. Then if
(n_preordered == 0) we do this:

group_keys_reorder_by_pathkeys(root->sort_pathkeys, ...);

Doesn't that mean that if we happen to have input path with partially
matching ordering (so still requiring explicit Sort before grouping) we
may end up ignoring the ORDER BY entirely? Instead of using Sort that
would match the ORDER BY? I might have misunderstood this, though.

I'm not sure why the first group_keys_reorder_by_pathkeys() call does
not simply return 0 when the input path ordering is not sufficient for
the grouping. Is there some reasoning behind that (e.g. that sorting
partially sorted data is faster)?

Overall I think this patch introduces four different optimizations that
are indeed very interesting:

1) consider additional sorted paths for grouping

2) reorder GROUP BY clauses per ndistinct to minimize comparisons

3) when adding Sort for grouping, maintain partial input ordering

4) when adding Sort for grouping, try producing the right output order
(if the ORDER BY was specified)

But at this point it's all mashed together in a rather ad-hoc manner,
using very simple heuristics / assumptions. We'll need to figure out how
to combine those optimizations.

BTW I get compiler warnings that n_preordered_groups may be used
uninitialized in add_paths_to_grouping_rel, because it's only set in an
if/else branch, and then used further down.

> Next, I had a look at cost_incremental_sort() provided by the
> incremental sort patch and, it's a pity, it doesn't solve our problem
> with the impact of the cost of the per-column comparison function and
> the number of its calls.

It currently doesn't, but that might be more of an issue in the
incremental sort patch, and we may need to fix it there. Clearly the
costing in that patch needs more work in general, and I do recall Tom
pointing out that the current heuristics (estimating the number of sort
groups using ndistinct) seem rather weak.

It may not be exactly the same problem, but it seems kinda similar to
what this patch does. I have a hunch that those patches will end up
inventing something fairly similar.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
