Re: scale parallel_tuple_cost by tuple width

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: scale parallel_tuple_cost by tuple width
Date: 2026-03-30 14:17:33
Message-ID: 2005009.1774880253@sss.pgh.pa.us
Lists: pgsql-hackers

Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> While investigating a performance issue, I found that it was extremely
> difficult to get a parallel plan in some cases due to the fixed
> parallel_tuple_cost. But this cost is not really fixed - it's going to
> be larger for larger tuples. So this proposal adjusts the cost used
> according to how large we expect the results to be.

Unfortunately, I'm afraid that this is going to produce mostly
"garbage in, garbage out" estimates, because our opinion of how wide
tuples-in-flight are is pretty shaky. (See get_expr_width and
particularly get_typavgwidth, and note that we only have good
statistics-based numbers for plain Vars, not function outputs.)
I agree that it could be useful to have some kind of adjustment here,
but I fear that linear scaling puts far too much faith in the
quality of the data.

How many cases have you checked with this modified code? Did it
make the plan worse in any cases?

regards, tom lane
