Re: DBT-3 with SF=20 got failed

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-09-24 16:55:52
Message-ID: 56042B18.9000008@2ndquadrant.com
Lists: pgsql-hackers

On 09/24/2015 05:18 PM, Tom Lane wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Of course, if we can postpone sizing the hash table until after the
>> input size is known, as you suggest, then that would be better still
>> (but not back-patch material).
>
> AFAICS, it works that way today as long as the hash fits in memory
> (ie, single-batch). We load into a possibly seriously undersized hash
> table, but that won't matter for performance until we start probing it.
> At the conclusion of loading, MultiExecHash will call
> ExecHashIncreaseNumBuckets which will re-hash into a better-sized hash
> table. I doubt this can be improved on much.
>
> It would be good if we could adjust the numbuckets choice at the
> conclusion of the input phase for the multi-batch case as well.
> The code appears to believe that wouldn't work, but I'm not sure if
> it's right about that, or how hard it'd be to fix if so.

So you're suggesting we use a small hash table even when we expect batching?

That would be rather difficult, because of the way we derive bucket and
batch numbers from the hash value - the bits used for each must not
overlap. The current code simply assumes that once we start batching,
the number of bits needed for the buckets does not change anymore.
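For context, a simplified sketch of that scheme - not the exact code in
nodeHash.c, and the names are illustrative - is below: the low bits of the
hash pick the bucket, the next bits pick the batch.

    #include <stdint.h>

    /*
     * Simplified sketch of how the 32-bit hash value is split into a bucket
     * number and a batch number (the real logic lives in
     * ExecHashGetBucketAndBatch). Both nbuckets and nbatch are powers of two,
     * so the low log2(nbuckets) bits select the bucket and the next
     * log2(nbatch) bits select the batch.
     */
    static void
    sketch_get_bucket_and_batch(uint32_t hashvalue,
                                int nbuckets, int log2_nbuckets, int nbatch,
                                int *bucketno, int *batchno)
    {
        *bucketno = hashvalue & (nbuckets - 1);

        if (nbatch > 1)
            *batchno = (hashvalue >> log2_nbuckets) & (nbatch - 1);
        else
            *batchno = 0;
    }

If we grew nbuckets after tuples have already been written to batch files,
log2_nbuckets would change and so would the bits selecting the batch, so
tuples already routed to a batch file could map to a different batch when
reloaded. That's why the bucket bits are frozen once batching starts.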

It's possible to rework this, of course - the initial version of the patch
actually did just that (although it was broken in other ways).

But I think the real problem here is the batching itself - if we
overestimate and start batching (while we could actually run with a
single batch), we've already lost.

But what about computing the expected number of batches, yet always
starting execution with no batching? Only if we actually fill work_mem
would we start batching, using the expected number of batches.

I.e.

1) estimate nbatches, but use nbatches=1

2) run until exhausting work_mem

3) start batching, with the initially estimated number of batches
   (sketched below)
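
A rough sketch of what I have in mind - this is not actual PostgreSQL code,
all names (SketchHashTable, load_inner_side, switch_to_batching, ...) are
made up for illustration:

    typedef struct SketchHashTable
    {
        int     nbatch;         /* current number of batches, starts at 1 */
        long    space_used;     /* bytes used by the in-memory hash table */
        long    space_allowed;  /* work_mem limit, in bytes */
    } SketchHashTable;

    /* Hypothetical helpers, for illustration only. */
    extern void *fetch_next_inner_tuple(void);
    extern void  insert_tuple(SketchHashTable *ht, void *tuple);
    extern void  switch_to_batching(SketchHashTable *ht, int nbatch);

    static void
    load_inner_side(SketchHashTable *ht, int estimated_nbatch)
    {
        /* 1) we computed the estimate, but start with a single batch */
        ht->nbatch = 1;

        for (;;)
        {
            void   *tuple = fetch_next_inner_tuple();

            if (tuple == NULL)
                break;

            /* assumed to account the tuple's size in space_used */
            insert_tuple(ht, tuple);

            /* 2) run until we actually exhaust work_mem ... */
            if (ht->nbatch == 1 && ht->space_used > ht->space_allowed)
            {
                /* 3) ... only then start batching, using the estimate */
                switch_to_batching(ht, estimated_nbatch);
            }
        }
    }

The difference from what we do now is that the estimated number of batches
would only matter if the single-batch attempt actually overflows work_mem.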

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
