From: Teodor Sigaev <teodor(at)sigaev(dot)ru>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: GROUP BY optimization
> I see v9 is now calling estimate_num_groups(), so it already benefits
> from extended statistics. Nice! I think the algorithm is more or less
> the greedy option I proposed in one of the earlier messages - I don't
> know if it's sufficient or not, but the only other idea I have is
> essentially an exhaustive search through all possible permutations.
Searching all possible permutations could be slow with an extremely large GROUP BY
clause, so we would need some stop limit, like join_collapse_limit. I don't
particularly like that idea, though.
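To make the tradeoff concrete, here is a toy Python sketch (not PostgreSQL code; all names and the cost function are illustrative stand-ins) of the idea: exhaustive search below a collapse limit, greedy descending-ndistinct ordering above it.

```python
from itertools import permutations

def choose_group_by_order(cols, ndistinct, collapse_limit=8):
    """Pick an ordering for GROUP BY columns.

    Below collapse_limit, try all permutations; above it, fall back to a
    greedy descending-ndistinct order. 'ndistinct' maps column name to its
    estimated distinct count; cost() is a stand-in, not cost_sort().
    """
    def cost(order):
        # Toy cost: a later column is compared only when every earlier
        # column ties, roughly 1/ndistinct of the time per earlier column.
        total, tie_prob = 0.0, 1.0
        for c in order:
            total += tie_prob
            tie_prob /= max(ndistinct[c], 1)
        return total

    if len(cols) <= collapse_limit:
        return min(permutations(cols), key=cost)
    # Greedy fallback, analogous in spirit to join_collapse_limit.
    return tuple(sorted(cols, key=lambda c: -ndistinct[c]))

order = choose_group_by_order(['a', 'b', 'c'], {'a': 10, 'b': 1000, 'c': 2})
```

With three columns the exhaustive branch runs and puts the most distinct column first; the greedy fallback would give the same answer here, but with correlated columns and a real estimator the two can diverge.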
> So that's a nice improvement, although I think we should also consider
> non-uniform distributions (using the per-column MCV lists).
Could you clarify how to do that?
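One possible reading of the suggestion (an assumption on my part, not necessarily what Tomas meant): under a uniform assumption the chance that two rows tie on a column is 1/ndistinct, but with a per-column MCV list the tie probability can be estimated as the sum of squared value frequencies, which can be far higher for skewed columns. A sketch, with all names illustrative:

```python
def tie_probability(mcv_freqs, ndistinct):
    """Probability that two random rows have equal values in a column.

    mcv_freqs: frequencies of the most common values (as in pg_stats).
    The remaining mass is assumed uniform over the non-MCV distinct values.
    """
    mcv_mass = sum(mcv_freqs)
    rest_mass = 1.0 - mcv_mass
    rest_ndistinct = max(ndistinct - len(mcv_freqs), 1)
    # P(tie) = sum over values v of P(v)^2; the uniform remainder
    # contributes rest_mass^2 / rest_ndistinct.
    return sum(f * f for f in mcv_freqs) + rest_mass * rest_mass / rest_ndistinct

# A heavily skewed column ties far more often than 1/ndistinct suggests:
skewed = tie_probability([0.9], ndistinct=101)   # 0.81 + 0.1^2/100 = 0.8101
uniform_guess = 1 / 101                          # ~0.0099
```

A column whose ndistinct looks high but whose distribution is dominated by one value behaves, for comparison purposes, almost like a constant column, so it should be ordered later than the uniform model would place it.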
>> 2) Planing cost is the same for all queries. So, cost_sort() doesn't
>> take into account even number of columns.
> Yeah. As I wrote before, I think we'll need to come up with a model for
> comparison_cost depending on the column order, which should make costs
> for those paths different.
Yeah. But I still think that should be a separate patch, one that would improve
cost estimation and plan choice in other parts of the optimizer too.
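A rough sketch of what such a comparison_cost model could look like (a toy, under my own assumptions, not a proposal for the actual cost_sort() formula): a tuple comparison touches column i+1 only when all earlier columns tie, so the expected number of column comparisons per tuple comparison depends on the column order.

```python
import math

def sort_cost(ntuples, tie_probs, cpu_operator_cost=0.0025):
    """Rough cost_sort()-style estimate that is sensitive to column order.

    tie_probs[i] is the probability that two tuples are equal on column i
    (columns listed in sort order). Expected column comparisons per tuple
    comparison: 1 + p0 + p0*p1 + ...
    """
    per_cmp, tie = 0.0, 1.0
    for p in tie_probs:
        per_cmp += tie
        tie *= p
    comparisons = ntuples * math.log2(max(ntuples, 2))
    return comparisons * per_cmp * cpu_operator_cost

# Putting the high-ndistinct column first makes each comparison cheaper:
cheap = sort_cost(1_000_000, [0.001, 0.5])  # distinct column first
dear  = sort_cost(1_000_000, [0.5, 0.001])  # low-ndistinct column first
```

Under this model the two orderings of the same sort keys get different costs, which is exactly what is needed for the planner to prefer one GROUP BY ordering over another.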
> OK. I think we should also have a debug GUC for the first patch (i.e.
> one that would allow reordering group_pathkeys to match the input path).
Added to 0002-opt_group_by_index_and_order-v10.patch .
> Regarding the two GUCs introduced in v9, I'm not sure I 100% understand
> what they are doing. Let me try explaining what I think they do:
> 1) debug_cheapest_group_by - Reorder GROUP BY clauses to using ndistinct
> stats for columns, placing columns with more distinct values first (so
> that we don't need to compare the following columns as often).
> 2) debug_group_by_match_order_by - Try to reorder the GROUP BY clauses
> to match the ORDER BY, so that the group aggregate produces results in
> the desired output (and we don't need an explicit Sort).
> FWIW I doubt about the debug_group_by_match_order_by optimization. The
> trouble is that it's only based on comparing the pathkeys lists, and
> entirely ignores that the two result sets may operate on very different
> numbers of rows.
> Consider for example "SELECT * FROM t GROUP BY a,b,c,d ORDER BY c,d"
> where table "t" has 1B rows, but there are only ~10k rows in the result
> (which is fairly common in fact tables in star-schema DBs). IIUC the
> optimization will ensure "c,d" is placed first in the GROUP BY, and then
> we reorder "a,b" using ndistinct. But this may be much less efficient
> than simply reordering (a,b,c,d) per ndistinct and then adding explicit
> Sort on top of that (because on 10k rows that's going to be cheap).
As you pointed out in your next email, this optimization is already implemented in
preprocess_groupclause(). A correct resolution of this issue will only be possible
once cost_sort() is improved to account for comparison cost, the number of
distinct values in each column, and how often the second and subsequent columns
actually need to be compared.
> The other "issue" I have with get_cheapest_group_keys_order() is how it
> interacts with group_keys_reorder_by_pathkeys(). I mean, we first call
> n_preordered = group_keys_reorder_by_pathkeys(path->pathkeys, ...);
> so the reordering tries to maintain ordering of the input path. Then if
> (n_preordered == 0) we do this:
> group_keys_reorder_by_pathkeys(root->sort_pathkeys, ...);
> Doesn't that mean that if we happen to have input path with partially
> matching ordering (so still requiring explicit Sort before grouping) we
> may end up ignoring the ORDER BY entirely? Instead of using Sort that
> would match the ORDER BY? I might have misunderstood this, though.
Hm. I believe sorting the input of the GROUP BY clause is always more expensive,
simply because the output of GROUP BY should have fewer rows than its input,
which makes sorting the output cheaper. So the code here uses the ORDER BY
pathkeys only when no group pathkeys are matched by the path's pathkeys.
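The row counts from Tomas's star-schema example make the point numerically; with a comparison sort costing roughly n*log2(n) comparisons, sorting the 10k grouped rows is several orders of magnitude cheaper than sorting the 1B input rows.

```python
import math

def ncomparisons(n):
    # Comparison count for a comparison sort, roughly n * log2(n).
    return n * math.log2(n)

rows_in, groups_out = 1_000_000_000, 10_000
ratio = ncomparisons(rows_in) / ncomparisons(groups_out)  # > 200,000x
```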
> I'm not sure why the first group_keys_reorder_by_pathkeys() call does
> not simply return 0 when the input path ordering is not sufficient for
> the grouping? Is there some reasoning behind that (e.g. that sorting
> partially sorted is faster)?
Hm, what do you mean by "sufficient"? group_keys_reorder_by_pathkeys() always
tries to order the GROUP BY columns by the pre-existing pathkeys. But of course,
incremental sort would add more benefit here. I'd like to hope the incremental
sort patch is close to being committed.
> Overall I think this patch introduces four different optimizations that
> are indeed very interesting:
> 1) consider additional sorted paths for grouping
> 2) reorder GROUP BY clauses per ndistinct to minimize comparisons
Agreed (with the help of estimate_num_groups(); the patch tries to maximize the
number of groups at each step)
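The "maximize the number of groups at each step" loop can be sketched as follows (a toy in Python, with an illustrative independence-based estimator standing in for PostgreSQL's estimate_num_groups()):

```python
import math

def greedy_group_order(cols, estimate_num_groups):
    """Greedy ordering: at each step, pick the column that maximizes the
    estimated number of groups for the prefix chosen so far."""
    order, remaining = [], list(cols)
    while remaining:
        best = max(remaining, key=lambda c: estimate_num_groups(order + [c]))
        order.append(best)
        remaining.remove(best)
    return order

# Toy estimator: independent columns, so groups multiply. A real estimator
# (extended ndistinct statistics) would account for column correlation.
nd = {'a': 10, 'b': 1000, 'c': 2}

def est(prefix):
    return math.prod(nd[c] for c in prefix) if prefix else 1

order = greedy_group_order(['a', 'b', 'c'], est)
```

With independent columns this degenerates to sorting by per-column ndistinct; the greedy loop only pays off when the estimator knows about multi-column correlation, which is where extended statistics come in.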
> 3) when adding Sort for grouping, maintain partial input ordering
Agreed - and the incremental sort patch will be helpful here.
> 4) when adding Sort for grouping, try producing the right output order
> (if the ORDER BY was specified)
> But at this point it's kinda mashed together in ad-hoc manner, using
> very simple heuristics / assumptions. We'll need to figure out how to
> combine those optimizations.
The heuristics used here are:
1) if the input already has some order, use it;
2) if there is a required output order, match that order to avoid an extra Sort node.
The incremental sort patch will improve the cases where the order matches only partially.
> BTW I get compiler warnings that n_preordered_groups may be used
> uninitialized in add_paths_to_grouping_rel, because it's only set in an
> if/else branch, and then used further down.
Fixed, although I believe the compiler is simply not smart enough here.
Teodor Sigaev E-mail: teodor(at)sigaev(dot)ru