Re: Erroneous cost estimation for nested loop join

From: Bruce Momjian <bruce(at)momjian(dot)us>
To: KAWAMICHI Ryoji <kawamichi(at)tkl(dot)iis(dot)u-tokyo(dot)ac(dot)jp>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: Erroneous cost estimation for nested loop join
Date: 2015-12-03 01:42:10
Message-ID: 20151203014210.GA12766@momjian.us
Lists: pgsql-hackers

On Mon, Nov 30, 2015 at 04:29:43PM +0900, KAWAMICHI Ryoji wrote:
>
>
> Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> >
> > - If we're sequential scanning a small table, let's say less than 1/4
> > of shared_buffers, which is the point where synchronized scans kick
> > in, then assume the data is coming from shared_buffers.
> > - If we're scanning a medium-sized table, let's say less than
> > effective_cache_size, then assume the data is coming from the OS
> > cache. Maybe this is the same cost as the previous case, or maybe
> > it's slightly more.
> > - Otherwise, assume that the first effective_cache_size pages are
> > coming from cache and the rest has to be read from disk. This is
> > perhaps unrealistic, but we don't want the cost curve to be
> > discontinuous.
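
In rough standalone C, the three-tier rule quoted above works out to something
like the sketch below. Every name and parameter in it is illustrative only --
this is not actual costsize.c code, and the real settings are the
shared_buffers, effective_cache_size, seq_page_cost and random_page_cost GUCs:

    /*
     * Illustrative sketch of the three tiers described above -- not actual
     * costsize.c code; all names and parameters are hypothetical.
     * Sizes are expressed in pages, costs in per-page cost units.
     */
    double
    per_page_cost(double table_pages,      /* relation size, in pages */
                  double shared_buf_pages, /* shared_buffers, in pages */
                  double eff_cache_pages,  /* effective_cache_size, in pages */
                  double cached_cost,      /* cost of a page found in cache */
                  double disk_cost)        /* cost of a page read from disk */
    {
        if (table_pages < shared_buf_pages / 4.0)
            return cached_cost;     /* small: assume it is in shared_buffers */

        if (table_pages < eff_cache_pages)
            return cached_cost;     /* medium: assume OS cache (or slightly more) */

        /* large: first eff_cache_pages pages cached, the rest from disk */
        {
            double cached_frac = eff_cache_pages / table_pages;

            return cached_frac * cached_cost +
                   (1.0 - cached_frac) * disk_cost;
        }
    }

Because the third branch blends the cached and on-disk costs by the cached
fraction, the estimate stays continuous at the effective_cache_size boundary.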
>
> I think this improvement is quite reasonable, and I expect it will be merged
> into the current optimizer code.
>
>
> > A problem with this sort of thing, of course, is that it's really hard
> > to test a proposed change broadly enough to be certain how it will
> > play out in the real world.
>
> That’s the problem we’re really interested in and trying to tackle.
>
> For example, with extensive experiments, I’m really sure my modification of
> the cost model is effective for our environment, but I can’t tell whether it
> is also effective, or unfortunately harmful, in other environments.
>
> And I think that, in the Postgres community, there must be (perhaps buried)
> knowledge on how to judge the effectiveness of cost model modifications,
> because someone must have considered something like that for each commit.
> I’m interested in that knowledge, and hope to contribute to finding
> a better way to improve the optimizer through cost model refinement.

No one mentioned the random_page_cost docs, so I will quote them here:

http://www.postgresql.org/docs/9.5/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS

    Random access to mechanical disk storage is normally much more expensive
    than four times sequential access. However, a lower default is used
    (4.0) because the majority of random accesses to disk, such as indexed
    reads, are assumed to be in cache. The default value can be thought of
    as modeling random access as 40 times slower than sequential, while
    expecting 90% of random reads to be cached.

    If you believe a 90% cache rate is an incorrect assumption for your
    workload, you can increase random_page_cost to better reflect the true
    cost of random storage reads. Correspondingly, if your data is likely to
    be completely in cache, such as when the database is smaller than the
    total server memory, decreasing random_page_cost can be appropriate.
    Storage that has a low random read cost relative to sequential, e.g.
    solid-state drives, might also be better modeled with a lower value for
    random_page_cost.
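
Spelled out, those two assumptions reproduce the default: if 90% of random
reads are assumed cached (essentially free) and the remaining 10% cost 40
times a sequential read, the expected cost of a random page fetch is
0.10 * 40 = 4.0 times the sequential cost.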

What we don't have is a way to know how much is in the cache, not only
at planning time, but also at execution time. (Those times are often
different for prepared queries.) I think that is the crux of what has
to be addressed here.

--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +
