From: Jeevan Chalke <jeevan(dot)chalke(at)enterprisedb(dot)com>
To: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, Rajkumar Raghuwanshi <rajkumar(dot)raghuwanshi(at)enterprisedb(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Partition-wise aggregation/grouping
On Wed, Mar 28, 2018 at 7:21 PM, Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com> wrote:
> On Tue, Mar 27, 2018 at 2:43 PM, Jeevan Chalke
> <jeevan(dot)chalke(at)enterprisedb(dot)com> wrote:
> > I am not sure what we should Assert here. Note that we end up here
> > when doing grouping, and thus I don't think we need any Assert here.
> > Let me know if I missed anything.
> Since we are just checking whether it's an upper relation and directly
> using root->upper_targets[UPPERREL_GROUP_AGG], I thought we could add
> an assert to verify that it's really the grouping rel we are dealing
> with. But I guess we can't really check that from the given relation.
> However, for a grouped rel we can get its target from its RelOptInfo,
> so we shouldn't need to use root->upper_targets[UPPERREL_GROUP_AGG] at
> all. Am I missing something? For other upper relations we do not set
> the target yet, but then we could assert that there exists one in the
> grouped relation.
Yes. We fetch the target from the grouped_rel itself.
Added an Assert() per our off-list discussion.
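To illustrate the idea in plain C (a toy model, not the real planner structs; RelOptKind, PathTarget, and get_grouping_target here are simplified stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the planner structs under discussion. */
typedef enum { RELOPT_BASEREL, RELOPT_JOINREL, RELOPT_UPPER_REL } RelOptKind;

typedef struct PathTarget { int dummy; } PathTarget;

typedef struct RelOptInfo
{
    RelOptKind  reloptkind;
    PathTarget *reltarget;      /* the grouping target lives on the rel itself */
} RelOptInfo;

/*
 * Fetch the target from the grouped rel itself, asserting that we were
 * really handed an upper relation, instead of reaching into
 * root->upper_targets[UPPERREL_GROUP_AGG].
 */
static PathTarget *
get_grouping_target(RelOptInfo *grouped_rel)
{
    assert(grouped_rel->reloptkind == RELOPT_UPPER_REL);
    return grouped_rel->reltarget;
}
```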
> >> - get_agg_clause_costs(root, (Node *)
> >> root->parse->havingQual,
> >> + get_agg_clause_costs(root, fpinfo->havingQual,
> >> AGGSPLIT_SIMPLE, &aggcosts);
> >> }
> >> Should we pass agg costs as well through GroupPathExtraData to avoid
> >> calculating it again in this function?
> > Adding an extra member in GroupPathExtraData just for FDW use does not
> > look good to me.
> > But yes, if we do that, then we can save this calculation.
> > Let me know if it's OK to have an extra member for just the FDW's use;
> > I will prepare a separate patch for that.
> I think that should be fine. A separate patch would be good, so that a
> committer can decide whether or not to include it.
Attached patch 0003 for this.
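A minimal sketch of what caching the costs buys, in plain C (toy types only; get_agg_clause_costs_stub and the agg_costs member are stand-ins for the real API, and the counter just demonstrates that the expensive walk runs once):

```c
#include <assert.h>

static int cost_calls = 0;          /* counts the expensive cost computations */

typedef struct AggClauseCosts { double transCost; } AggClauseCosts;

typedef struct GroupPathExtraData
{
    AggClauseCosts agg_costs;       /* hypothetical cached member for FDW use */
} GroupPathExtraData;

/* Stand-in for get_agg_clause_costs(): pretend this tree walk is expensive. */
static AggClauseCosts
get_agg_clause_costs_stub(void)
{
    AggClauseCosts c = { 42.0 };

    cost_calls++;
    return c;
}

/* The core planner computes the costs once and stashes them... */
static void
plan_grouping(GroupPathExtraData *extra)
{
    extra->agg_costs = get_agg_clause_costs_stub();
}

/* ...and the FDW reads the cached copy instead of recomputing it. */
static double
fdw_estimate(const GroupPathExtraData *extra)
{
    return extra->agg_costs.transCost;
}
```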
> >> + Node *havingQual;
> >> I am wondering whether we could use the remote_conds member for storing this.
> > This havingQual is later checked for shippability and classified into
> > pushable and non-pushable quals, which are stored in remote_conds and
> > local_conds respectively.
> > Storing it directly in remote_conds and then splitting it does not
> > look good to me.
> > Also, remote_conds is a list of RestrictInfo nodes whereas havingQual
> > is not, so using that for storing havingQual does not make sense. So
> > it is better to have a separate member in PgFdwRelationInfo.
> Ah sorry, I was wrong about remote_conds. remote_conds and local_conds
> are basically the conditions on the relation being pushed down.
> havingQuals are conditions on a grouped relation, so treating them like
> baserestrictinfo or join conditions looks more straightforward than
> having a separate member in PgFdwRelationInfo. So, remote havingQuals
> go into remote_conds and local havingQuals go into local_conds.
In this version, I have not added anything to PgFdwRelationInfo.
The HAVING qual is needed at two places: (1) in foreign_grouping_ok(), to
check shippability, so I passed the translated HAVING qual as a parameter
to it; and (2) when estimating aggregate costs in estimate_path_cost_size(),
where we can use the havingQual from root itself, as the costs won't change
between parent and child.
Thus there is no need to store a havingQual in PgFdwRelationInfo.
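The classification Ashutosh describes can be sketched in plain C (a toy model: the quals are plain ints, and is_foreign_expr_stub is a stand-in predicate for the real shippability check):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of classifying HAVING quals: shippable quals join the
 * relation's remote_conds, the rest go to local_conds.
 */
typedef struct QualLists
{
    int remote_conds[8];
    int n_remote;
    int local_conds[8];
    int n_local;
} QualLists;

/* Stand-in for the real shippability test (is_foreign_expr). */
static bool
is_foreign_expr_stub(int qual)
{
    return qual % 2 == 0;       /* pretend even quals are shippable */
}

static void
classify_having_quals(const int *quals, int n, QualLists *out)
{
    out->n_remote = out->n_local = 0;
    for (int i = 0; i < n; i++)
    {
        if (is_foreign_expr_stub(quals[i]))
            out->remote_conds[out->n_remote++] = quals[i];
        else
            out->local_conds[out->n_local++] = quals[i];
    }
}
```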
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
Jeevan Chalke
Technical Architect, Product Development
The Enterprise PostgreSQL Company