Re: Bitmap scan is undercosted?

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Vitaliy Garnashevich <vgarnashevich(at)gmail(dot)com>
Cc: Justin Pryzby <pryzby(at)telsasoft(dot)com>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Bitmap scan is undercosted?
Date: 2017-12-07 03:48:32
Message-ID: CAMkU=1z0ZD2idZ3cBpXj=crftsc_FtfD8KFR+er7oOfhEV1rOA@mail.gmail.com
Lists: pgsql-hackers pgsql-performance

On Sun, Dec 3, 2017 at 1:15 PM, Vitaliy Garnashevich <
vgarnashevich(at)gmail(dot)com> wrote:

> On 02/12/2017 23:17, Jeff Janes wrote:
>
>> Right, so there is a cpu costing problem (which could only be fixed by
>> hacking postgresql and recompiling it), but it is a much smaller problem
>> than the IO cost not being accurate due to the high hit rate. Fixing the
>> CPU costing problem is unlikely to make a difference to your real query.
>> If you set the page costs to zero, what happens to your real query?
>
> I can't reproduce the exact issue on the real database any more. The query
> started to use the slow bitmap scan recently, and had been doing so for
> some time lately, but now it's switched back to use the index scan. The
> table involved in the query gets modified a lot. It has hundreds of
> millions of rows. Lots of new rows are appended to it every day, the oldest
> rows are sometimes removed. The table is analyzed at least daily. It's
> possible that the statistics were updated and that caused the query to run
> differently. But I still would like to understand why that issue happened,
> and how to properly fix it, in case the issue returns.
>

While your test case displays some cost estimation issues, there is really
no reason to think that they are the same issues your real query shows.
Particularly since you said the difference was a factor of 30 in the real
case, rather than 3. Any chance you can show EXPLAIN ANALYZE output for
the real query, both when it is acting up and when it is not? Something in
the plans might stand out to us as the obvious problem. On the other hand,
maybe nothing will stand out without having a replicable test case. The
only way to know is to try.
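
Something along these lines would be most useful (the table and filter
below are just stand-ins for your real query):

    -- run once while the query is misbehaving, and once while it is not
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM your_big_table WHERE col1 = 1 AND col2 = 1;

The BUFFERS option reports shared_buffers hits versus reads, which would
show how much of the difference between the two plans is IO rather than
CPU.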

>>> But I doubt that the settings seq_page_cost = random_page_cost = 0.0
>>> should actually be used.
>>
>> Why not? If your production server really has everything in memory during
>> normal operation, that is the correct course of action. If you ever
>> restart the server, then you could have some unpleasant time getting it
>> back up to speed again, but pg_prewarm could help with that.
>
> In the real database, not everything is in memory. There are 200GB+ of
> RAM, but the DB is 500GB+. The table involved in the query itself is 60GB+ of
> data and 100GB+ of indexes. I'm running the test case in a way where all
> reads are done from RAM, only to make it easier to reproduce and to avoid
> unrelated effects.
>

Is everything that the particular query in question needs in memory, even
if other queries need things from disk? Or does the problematic query also
need things from disk? If the query does need to read things from disk,
the bitmap actually should be faster. Which reinforces the idea that maybe
the issue brought up by your test case is not the same as the issue brought
up by your real case, even if they both point in the same direction.
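
One way to check, if you can install contrib modules, is pg_buffercache
(the table name below is a placeholder, and note this only sees
shared_buffers, not the OS cache):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- count this table's pages currently resident in shared_buffers
    SELECT count(*) AS cached_pages
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('your_big_table')
      AND reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database());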

> As far as I know, costs in Postgres were designed to be relative to
> seq_page_cost, which for that reason is usually defined as 1.0. Even if
> everything were in RAM, accesses to the pages would still not have zero
> cost. Setting 0.0 just seems too extreme, as all other non-zero costs would
> become infinitely bigger.
>

When exploring, 0.0 certainly helps to simplify things. Yes, 0.05 or
something similar might be better for a completely cached database. The
problem is that it is very context dependent. Reading a page from
shared_buffers when there is no contention from other processes for the
same page is probably less than 0.01. If it is not in shared_buffers but
is in effective_cache_size, it is probably a few multiples of 0.01. If
there is contention either for that specific page, or for available buffers
into which to read pages, then it could be substantially higher yet.
However, none of those are things the planner is aware of.
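
If you do experiment with lower page costs, they can be set for just the
session doing the testing, so nothing has to change cluster-wide (the
0.05 below is only illustrative):

    SET seq_page_cost = 0.05;      -- affects only the current session
    SET random_page_cost = 0.05;
    -- re-run the problem query here, then put things back:
    RESET seq_page_cost;
    RESET random_page_cost;

There is also ALTER TABLESPACE ... SET (seq_page_cost = ...) if only some
tablespaces are fully cached.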

>> If you really want to target the plan with the BitmapAnd, you should
>> increase cpu_index_tuple_cost and/or cpu_operator_cost but not increase
>> cpu_tuple_cost. That is because the unselective bitmap index scan does
>> not incur any cpu_tuple_cost, but does incur index_tuple and operator
>> costs. Unfortunately all other index scans in the system will also be
>> skewed by such a change if you make the change system-wide.
>
> Exactly. I'd like to understand why the worse plan is being chosen, and 1)
> if it's fixable by tuning costs, to figure out the right settings which
> could be used in production, 2) if there is a bug in Postgres optimizer,
> then to bring some attention to it, so that it's eventually fixed in one of
> future releases, 3) if Postgres is supposed to work this way, then at least
> I (and people who ever read this thread) would understand it better.
>

I would argue that it is a planner "bug" (quotes because it doesn't give
wrong answers, just sub-optimal plans), but one that is very hard to pin
down, and also depends on the hardware you are running on. Also, people
have made some optimizations to the machinery behind the bitmap code
recently, as well as the costing of the bitmap code, so if it is a bug, its
size changes with the version you are using. If your aim is to
improve the planner (rather than simply tuning the planner that currently
exists) then you should probably 1) make your test case use random number
generators, rather than modulus, to avoid cross-column correlation and
other such issues, 2) run it against 11dev code, which is where
improvements to PostgreSQL are targeted, rather than against production
versions, and 3) post to pgsql-hackers, rather than performance.
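
For the random-number version, something like this is what I have in mind
(table and column names are made up; adjust the row count and
selectivities to match your case):

    -- columns generated independently, so no hidden cross-column correlation
    CREATE TABLE aaa AS
      SELECT (random() * 100)::int AS num,
             (random() < 0.001)    AS flag
      FROM generate_series(1, 10000000) id;
    CREATE INDEX aaa_num ON aaa (num);
    CREATE INDEX aaa_flag ON aaa (flag);
    VACUUM ANALYZE aaa;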

Cheers,

Jeff
