From: John Naylor <john(dot)naylor(at)enterprisedb(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PoC: Using Count-Min Sketch for join cardinality estimation
Date: 2021-06-18 17:03:05
Message-ID: CAFBsxsGPTffwGMaRk_C9OHbc7b0GBfVXcerP-hTFQ1VgK4o5bw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jun 16, 2021 at 8:23 PM Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
wrote:

> Not really, but to be fair even for the histograms it's only really
> supported by "seems to work in practice" :-(

Hmm, we cite a theoretical result in analyze.c, but I don't know if there
is something better out there:

* The following choice of minrows is based on the paper
* "Random sampling for histogram construction: how much is enough?"
* by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in

What is more troubling to me is that we cap the number of MCVs at the same limit as the number of histogram buckets. Since b5db1d93d2a6 we have had a pretty good method for finding the MCVs that are justified. When that first went in, I experimented with removing the MCV limit and found it easy to create value distributions that led to thousands of MCVs. I guess the best justification now for the limit is plan time, but if we also have a sketch, we can choose one or the other based on a plan-time speed vs. accuracy tradeoff (another use for a planner-effort GUC). In that scenario, for tables with many MCVs we would use them only for restriction clauses.

--
John Naylor
EDB: http://www.enterprisedb.com
