Re: DBT-3 with SF=20 got failed

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-09-24 17:15:26
Message-ID: 56042FAE.5000603@2ndquadrant.com
Lists: pgsql-hackers

On 09/24/2015 07:04 PM, Tom Lane wrote:
> Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
>> But what about computing the expected number of batches, but always
>> starting execution assuming no batching? And only if we actually fill
>> work_mem do we start batching, using the expected number of batches?
>
> Hmm. You would likely be doing the initial data load with a "too
> small" numbuckets for single-batch behavior, but if you successfully
> loaded all the data then you could resize the table at little
> penalty. So yeah, that sounds like a promising approach for cases
> where the initial rowcount estimate is far above reality.

I don't understand the comment about "too small" numbuckets - isn't
doing that the whole point of using the proposed limit? The batching is
merely a consequence of how bad the over-estimate is.

> But I kinda thought we did this already, actually.

I don't think so - I believe we haven't modified this aspect at all. It
may not have been as pressing in the past, thanks to NTUP_PER_BUCKET=10.
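
For illustration, here is a minimal standalone sketch of the idea as I
understand it (toy names and structures only - ToyHashState,
toy_choose_params etc. are made up, nothing from the actual nodeHash.c
code): plan nbatch from the estimate, load assuming a single batch,
switch to the planned nbatch only once work_mem actually fills, and if
it never fills, resize the bucket array to the real tuple count at the
end of the load:

#include <stdio.h>
#include <stdbool.h>

#define NTUP_PER_BUCKET 1
#define TUPLE_SIZE      64          /* pretend every tuple costs 64 bytes */

typedef struct ToyHashState
{
    size_t  work_mem;               /* memory budget in bytes */
    int     nbuckets;               /* current bucket count */
    int     planned_nbatch;         /* nbatch computed from the estimate */
    int     nbatch;                 /* stays 1 until we actually overflow */
    long    ntuples;                /* tuples loaded so far */
    size_t  space_used;             /* bytes consumed so far */
} ToyHashState;

static int
next_pow2(long n)
{
    int v = 1;

    while (v < n)
        v <<= 1;
    return v;
}

/* Plan phase: use the (possibly wildly wrong) row estimate. */
static void
toy_choose_params(ToyHashState *hs, double est_rows, size_t work_mem)
{
    size_t  est_bytes = (size_t) (est_rows * TUPLE_SIZE);

    hs->work_mem = work_mem;
    hs->planned_nbatch = next_pow2((est_bytes + work_mem - 1) / work_mem);
    /* size buckets for the per-batch share of the estimate */
    hs->nbuckets = next_pow2((long) (est_rows / hs->planned_nbatch));
    hs->nbatch = 1;                 /* but start optimistic: one batch */
    hs->ntuples = 0;
    hs->space_used = 0;
}

/* Load phase: returns true once we are forced to start batching. */
static bool
toy_insert_tuple(ToyHashState *hs)
{
    hs->ntuples++;
    hs->space_used += TUPLE_SIZE;

    if (hs->nbatch == 1 && hs->space_used > hs->work_mem)
    {
        /* work_mem really filled up: fall back to the planned nbatch */
        hs->nbatch = (hs->planned_nbatch > 1) ? hs->planned_nbatch : 2;
        return true;
    }
    return false;
}

/* End of load: if we stayed in one batch, fix up the bucket count. */
static void
toy_finish_load(ToyHashState *hs)
{
    if (hs->nbatch == 1)
        hs->nbuckets = next_pow2(hs->ntuples / NTUP_PER_BUCKET + 1);
}

int
main(void)
{
    ToyHashState hs;

    /* the estimate says 1M rows, reality delivers 10k */
    toy_choose_params(&hs, 1000000.0, 4 * 1024 * 1024);
    for (long i = 0; i < 10000; i++)
        (void) toy_insert_tuple(&hs);
    toy_finish_load(&hs);

    printf("planned nbatch=%d, actual nbatch=%d, final nbuckets=%d\n",
           hs.planned_nbatch, hs.nbatch, hs.nbuckets);
    return 0;
}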

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
