Improving planner's checks for parallel-unsafety

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Improving planner's checks for parallel-unsafety
Date: 2016-08-18 16:39:47
Message-ID: 3740.1471538387@sss.pgh.pa.us
Lists: pgsql-hackers

Attached is a patch I'd fooled around with back in July but not submitted.
The idea is that, if our initial scan of the query tree found only
parallel-safe functions, there is no need to rescan subsets of the tree
looking for parallel-restricted functions. We can mechanize that by
saving the "maximum unsafety" level in PlannerGlobal and looking aside
at that value before conducting a check of a subset of the tree.
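To illustrate the shape of that shortcut, here is a self-contained sketch.
The type and field names (Hazard, PlannerGlobalLike, max_hazard, and so on)
are illustrative stand-ins, not code from the attached patch:

#include <stdbool.h>
#include <stdio.h>

/* Hazard levels, worst last (mirrors proparallel 's'/'r'/'u'). */
typedef enum { HAZARD_SAFE, HAZARD_RESTRICTED, HAZARD_UNSAFE } Hazard;

/* A toy expression node: a function call with children. */
typedef struct Node
{
	Hazard		 func_hazard;	/* proparallel marking of this function */
	int			 nchildren;
	struct Node **children;
} Node;

/* Planner-global state: remembers the worst hazard seen in the initial
 * whole-tree scan, so later subtree checks can consult it. */
typedef struct
{
	Hazard		max_hazard;
} PlannerGlobalLike;

/* Walk a subtree and return the worst hazard it contains. */
static Hazard
max_hazard_walker(const Node *node)
{
	Hazard		worst = node->func_hazard;

	for (int i = 0; i < node->nchildren; i++)
	{
		Hazard		h = max_hazard_walker(node->children[i]);

		if (h > worst)
			worst = h;
	}
	return worst;
}

/* Subtree check with the shortcut: if the initial scan saw only
 * parallel-safe functions, no subtree can contain anything worse,
 * so we skip the walk entirely. */
static bool
is_parallel_safe_subtree(const PlannerGlobalLike *glob, const Node *subtree)
{
	if (glob->max_hazard == HAZARD_SAFE)
		return true;			/* fast path: no rescan needed */
	return max_hazard_walker(subtree) == HAZARD_SAFE;
}

int
main(void)
{
	Node		leaf1 = {HAZARD_SAFE, 0, NULL};
	Node		leaf2 = {HAZARD_SAFE, 0, NULL};
	Node	   *kids[] = {&leaf1, &leaf2};
	Node		root = {HAZARD_SAFE, 2, kids};
	PlannerGlobalLike glob;

	/* One up-front scan of the whole tree ... */
	glob.max_hazard = max_hazard_walker(&root);

	/* ... and per-subtree checks become O(1) when it was all safe. */
	printf("subtree safe: %d\n", is_parallel_safe_subtree(&glob, &leaf1));
	return 0;
}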

This is not a huge win, but it's measurable. I see about 3% overall TPS
improvement in pgbench on repeated execution of this test query:

select
  abs(unique1) + abs(unique1),
  abs(unique2) + abs(unique2),
  abs(two) + abs(two),
  abs(four) + abs(four),
  abs(ten) + abs(ten),
  abs(twenty) + abs(twenty),
  abs(hundred) + abs(hundred),
  abs(thousand) + abs(thousand),
  abs(twothousand) + abs(twothousand),
  abs(fivethous) + abs(fivethous),
  abs(tenthous) + abs(tenthous),
  abs(odd) + abs(odd),
  abs(even) + abs(even)
from tenk1 limit 1;
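(For reproduction: save the query above to a file and drive it with a
custom pgbench script, along the lines of

pgbench -n -f parsafe.sql -T 60 regression

The file name and run length are arbitrary, and "regression" assumes the
database left behind by the core regression tests, which is where tenk1
lives.)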

This test case is admittedly a bit contrived, in that the number of
function calls that have to be checked is high relative to both the
planning cost and execution cost of the query. Still, the fact that
the difference is above the noise floor even in an end-to-end test
says that the current method of checking functions twice is pretty
inefficient.

I'll put this in the commitfest queue.

regards, tom lane

Attachment Content-Type Size
better-planner-proparallel-check-1.patch text/x-diff 24.6 KB
