Re: anti-join chosen even when slower than old plan

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "Robert Haas" <robertmhaas(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: anti-join chosen even when slower than old plan
Date: 2010-11-10 23:07:15
Message-ID: 12595.1289430435@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Thread:
Lists: pgsql-performance

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Unfortunately, to know how much data we're going to grovel
>> through, we need to know the plan; and to decide on the right
>> plan, we need to know how much data we're going to grovel through.

> And that's where they've been ending.

> The only half-sane answer I've thought of is to apply a different
> cost to full-table or full-index scans based on the ratio with
> effective cache size.

This might have some connection to some rather half-baked ideas I've
been having in connection with the generalized-inner-indexscan problem.
I don't have anything in the way of a coherent presentation to make yet,
but the thing I'm being forced to realize is that sane modeling of a
complex subplan that's on the inside of a nestloop join requires
treating *every* scan type as having different costs "the first time"
versus "during rescan". If the total amount of data touched in the
query is less than effective_cache_size, it's not unreasonable to
suppose that I/O costs during rescan might be zero, even for a seqscan or
a non-parameterized indexscan. In fact, only parameterized indexscans
would be able to touch pages they'd not touched the first time, and so
they ought to have higher not lower rescan costs in this environment.
But once the total data volume exceeds effective_cache_size, you have to
do something different since you shouldn't any longer assume the data is
all cached from the first scan. (This isn't quite as hard as the case
you're talking about, since I think the relevant data volume is the sum
of the sizes of the tables used in the query; which is easy to
estimate at the start of planning, unlike the portion of the tables
that actually gets touched.)

An idea that isn't even half-baked yet is that once we had a cost model
like that, we might be able to produce plans that are well-tuned for a
heavily cached environment by applying the "rescan" cost model even to
the first scan for a particular query. So that might lead to some sort
of "assume_cached" GUC parameter, and perhaps Kevin could tune his
reporting queries by turning that off instead of messing with individual
cost constants. Or maybe we could be smarter if we could extract an
estimate for the amount of data touched in the query ... but like you,
I don't see a good way to get that number soon enough.

regards, tom lane

In response to

Responses

Browse pgsql-performance by date

  From Date Subject
Next Message Robert Haas 2010-11-11 03:47:21 Re: anti-join chosen even when slower than old plan
Previous Message Kevin Grittner 2010-11-10 22:43:43 Re: anti-join chosen even when slower than old plan