| From: | David Rowley <dgrowleyml(at)gmail(dot)com> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Cc: | Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: scale parallel_tuple_cost by tuple width |
| Date: | 2026-03-30 22:51:35 |
| Message-ID: | CAApHDvpOPs-Ywcze5=eyi4s5hO1NM9RA8No20Q=s+0L3LiorHw@mail.gmail.com |
| Lists: | pgsql-hackers |
On Tue, 31 Mar 2026 at 03:17, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> > While investigating a performance issue, I found that it was extremely
> > difficult to get a parallel plan in some cases due to the fixed
> > parallel_tuple_cost. But this cost is not really fixed - it's going to
> > be larger for larger tuples. So this proposal adjusts the cost used
> > according to how large we expect the results to be.
>
> Unfortunately, I'm afraid that this is going to produce mostly
> "garbage in, garbage out" estimates, because our opinion of how wide
> tuples-in-flight are is pretty shaky. (See get_expr_width and
> particularly get_typavgwidth, and note that we only have good
> statistics-based numbers for plain Vars not function outputs.)
> I agree that it could be useful to have some kind of adjustment here,
> but I fear that linear scaling is putting way too much faith in the
> quality of the data.

(I suspect you're saying this because of the "Benchmark 2" in the text
file, which contains aggregates returning a varlena type, whose width
we won't estimate very well...)

Sure, it's certainly true that there are cases where we don't get the
width estimate right, but that's not stopped us from using those
estimates before. So why is this case so much more critical? We now
also have GROUP BY before join abilities in the planner, which I
suspect must be placing trust in the very same estimates. Also,
varlena-returning Aggrefs aren't going to be in the Gather/Gather
Merge targetlist, so why avoid making improvements in this area just
because we're not great at estimating one of the things that could be
in the targetlist?

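To make the scaling being discussed concrete, here is a minimal sketch of a width-scaled parallel tuple cost. This is illustrative only, not the patch's actual code; it assumes the patch simply scales parallel_tuple_cost linearly by the estimated tuple width over the 100-byte reference width mentioned later in this message, and the function name is hypothetical:

```python
# Illustrative sketch only; NOT the actual patch code.  Assumes the
# patch scales parallel_tuple_cost linearly against a 100-byte
# reference width (PARALLEL_TUPLE_COST_REF_WIDTH, quoted below).

PARALLEL_TUPLE_COST = 0.1            # default value of the GUC
PARALLEL_TUPLE_COST_REF_WIDTH = 100  # reference width from the patch


def parallel_tuple_transfer_cost(ntuples, width):
    """Estimated cost of pushing ntuples of the given width through a
    Gather/Gather Merge node, scaled linearly by tuple width."""
    scale = width / PARALLEL_TUPLE_COST_REF_WIDTH
    return ntuples * PARALLEL_TUPLE_COST * scale


# Under linear scaling, a 200-byte tuple costs twice as much to
# transfer as a 100-byte one:
assert parallel_tuple_transfer_cost(1000, 200) == \
       2 * parallel_tuple_transfer_cost(1000, 100)
```

A 100-byte tuple reproduces the unscaled cost, so plans with tuples near the reference width would cost the same as today; wider tuples become more expensive to funnel through the Gather.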
For the patch and the analysis: this reminds me of [1], where some
reverse-engineering of costs from query run-times was done, which
ended up determining what we set APPEND_CPU_COST_MULTIPLIER to. Doing
the same for this case would require running tests with various tuple
widths and checking that, with the patched version, the costs scale
linearly with the run-time of the query. Of course, the test query
would have to have perfect width estimates, but that should be easy
enough to arrange: populate a text-typed GROUP BY column with values
all of the same width for a given test, increase the width for the
next test, and use a fixed-width aggregate each time, e.g. count(*).

The "#define PARALLEL_TUPLE_COST_REF_WIDTH 100" does seem to be quite
a round number. It would be good to know how close it is to reality.
Ideally, it would be good to see results from an Apple M<something>
chip and a recent x86. In my experience, these produce very different
performance results, so it might be nice to find a value somewhere in
the middle of what we get from those machines. I suspect having the
GROUP BY column with text widths from 8 to 1024, increasing in powers
of two, would provide enough data points.
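The reverse-engineering step could be as simple as an ordinary least-squares fit of per-tuple run time against tuple width. A sketch of that fit, using fabricated (exactly linear) timings purely for illustration; real numbers would come from the benchmark runs described above:

```python
# Sketch of the reverse-engineering approach: fit per-tuple run time
# against tuple width to check whether the cost really scales linearly
# and to back out a reference width.  Timings below are fabricated and
# exactly linear, purely for illustration.

def least_squares(xs, ys):
    """Return (slope, intercept) of the best-fit line y = slope*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx


# Tuple widths 8..1024 bytes, increasing in powers of two as suggested.
widths = [2 ** i for i in range(3, 11)]
# Hypothetical per-tuple times in microseconds: fixed cost + per-byte cost.
times = [0.50 + 0.004 * w for w in widths]

slope, intercept = least_squares(widths, times)
# slope is the per-byte transfer cost, intercept the fixed per-tuple
# cost.  intercept/slope gives the width at which the two components
# are equal, which is one way to pick PARALLEL_TUPLE_COST_REF_WIDTH.
print(round(intercept / slope))  # 125 with these made-up numbers
```

If the fitted residuals are small across the width range, that would support linear scaling; if run-time flattens out or jumps at some width, a simple linear multiplier would be the wrong model.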
David
[1] https://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com