Re: JIT compiling with LLVM v12

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Noah Misch <noah(at)leadboat(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: JIT compiling with LLVM v12
Date: 2018-08-26 01:34:22
Message-ID: CA+Tgmob6SuFL_iXWyWwZw1-1R5wDcMvSZtyqf7SNajqKnPEX9Q@mail.gmail.com
Lists: pgsql-hackers

On Wed, Aug 22, 2018 at 6:43 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Now you can say that'd be solved by bumping the cost up, sure. But
> obviously the row / cost model is pretty much out of whack here, I don't
> see how we can make reasonable decisions in a trivial query that has a
> misestimation by five orders of magnitude.

Before JIT, it didn't matter whether the costing was wrong, provided
that the path with the lowest estimated cost was actually the cheapest
path to execute (or at least close enough to the cheapest not to bother
anyone). Now it does. If the intended path is chosen but its estimated
cost is higher than it should be, JIT will erroneously activate. If you
had designed this in
such a way that we added separate paths for the JIT and non-JIT
versions and the JIT version had a bigger startup cost but a reduced
runtime cost, then you probably would not have run into this issue, or
at least not to the same degree. But as it is, JIT activates when the
plan looks expensive, regardless of whether activating JIT will do
anything to make it cheaper. As a blindingly obvious example, turning
on JIT to mitigate the effects of disable_cost is senseless, but as
you point out, that's exactly what happens right now.
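To put it concretely, here's a simplified standalone sketch of the two
models (not the real planner code; the names and numbers are just
illustrative stand-ins for things like jit_above_cost and the path cost
fields):

/* Simplified sketch, not actual PostgreSQL code: the struct and the
 * constants are illustrative stand-ins for the planner's Path costs
 * and the jit_above_cost threshold. */
#include <stdbool.h>
#include <stdio.h>

typedef struct Path
{
    double      startup_cost;   /* estimated cost before the first tuple */
    double      total_cost;     /* estimated cost to return all tuples */
} Path;

/* Model A: roughly today's behavior.  After the cheapest plan has
 * already been picked, JIT is switched on if the estimated total cost
 * crosses a threshold, whether or not JIT makes this plan any cheaper. */
static bool
jit_by_threshold(const Path *chosen, double jit_above_cost)
{
    return chosen->total_cost > jit_above_cost;
}

/* Model B: the alternative described above.  Build a JIT variant of the
 * path with extra startup cost (compilation) but a reduced run cost
 * (cheaper per-tuple execution), and let ordinary cost comparison
 * decide whether it wins. */
static Path
jit_as_path(const Path *base, double compile_cost, double runtime_factor)
{
    Path        jit;

    jit.startup_cost = base->startup_cost + compile_cost;
    jit.total_cost = jit.startup_cost +
        (base->total_cost - base->startup_cost) * runtime_factor;
    return jit;
}

int
main(void)
{
    /* A trivial query whose cost is badly overestimated. */
    Path        cheap_plan = {.startup_cost = 0.0, .total_cost = 150000.0};

    /* Model A: the threshold fires on the inflated estimate. */
    printf("threshold model: use JIT = %d\n",
           jit_by_threshold(&cheap_plan, 100000.0));

    /* Model B: the JIT path is only chosen if its cost actually wins. */
    Path        jit_plan = jit_as_path(&cheap_plan, 50000.0, 0.9);

    printf("path model: use JIT = %d\n",
           jit_plan.total_cost < cheap_plan.total_cost);
    return 0;
}

With numbers like the ones in main(), the threshold check fires on the
mis-estimated plan even though the JIT variant never costs out cheaper
than the plain one.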

I'd guess that, as you read this, you're thinking, well, but if I'd
added JIT and non-JIT paths for every option, it would have doubled
the number of paths, and that would have slowed the planner down way
too much. That's certainly true, but my point is just that the
problem is probably not as simple as "the defaults are too low". I
think the problem is more fundamentally that the model you've chosen
is kinda broken. I'm not saying I know how you could have done any
better, but I do think we're going to have to try to figure out
something to do about it, because saying, "check-pg_upgrade is 4x
slower, but that's just because of all those bad estimates" is not
going to fly. Those bad estimates were harmlessly bad before, and now
they are harmfully bad, and similar bad estimates are going to exist
in real-world queries, and those are going to be harmful now too.

Blaming the bad costing is a red herring. The problem is that you've
made the costing matter in a way that it previously didn't.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
