dynamic SQL - possible performance regression in 9.2

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Dong Ye <yed(at)vmware(dot)com>
Subject: dynamic SQL - possible performance regression in 9.2
Date: 2013-01-06 00:17:59
Message-ID: CAMkU=1xjSpoqM1+OmD65TONpCNadZYK5B_=upWtb8zD0TYh9Nw@mail.gmail.com
Lists: pgsql-hackers

On Wednesday, January 2, 2013, Tom Lane wrote:

> Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
> > Using a RULE-based partitioning instead with row by row insertion, the
> > plancache changes slowed it down by 300%, and this patch doesn't change
> > that. But that seems to be down to the insertion getting planned
> > repeatedly, because it decides the custom plan is cheaper than the
> generic
> > plan. Whatever savings the custom plan may have are clearly less than
> the
> > cost of doing the planning repeatedly.
>
> That scenario doesn't sound like it has anything to do with the one being
> discussed in this thread. But what do you mean by "rule-based
> partitioning" exactly? A rule per se wouldn't result in a cached plan
> at all, let alone one with parameters, which would be necessary to
> trigger any use of the custom-cached-plan code path.
>

Right, it is not related to the dynamic SQL, but it is related to the plancache.

> Test cases are way more interesting than hand-wavy complaints.
>

Sorry; when exiled to the hinterlands I have more time to test various
things but not good enough connectivity to describe them well. I'm
attaching the test case, which loads 1e5 rows into a very skinny table with
100 partitions using rules.
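
For anyone who doesn't want to open the attachment, the setup is roughly
along these lines (column names here are made up; the exact DDL is in the
attached rules_test.sql):

CREATE TABLE foo (partition int, value int);

CREATE TABLE foo_1 () INHERITS (foo);
CREATE RULE foo_insert_1 AS ON INSERT TO foo
    WHERE (new.partition = 1)
    DO INSTEAD INSERT INTO foo_1 VALUES (new.*);
-- ...and likewise for foo_2 through foo_100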

"origin" is from a few days ago, "origin_reduce_copies" is Heikki's patch,
and "origin_one_shot" is your now-committed patch. (unshown are
e6faf910d75027 and e6faf910d75027_prev, but that is where the regression
was introduced)

JJ /usr/local/pgsql_REL9_1_7/
Time: 64252.6907920837 ms
JJ origin/
Time: 186657.824039459 ms
JJ origin_reduce_copies/
Time: 185370.236873627 ms
JJ origin_one_shot/
Time: 189104.484081268 ms

The root problem is that it thinks the generic plan costs about 50% more
than the custom one. I don't know why it thinks that, or how much it is
worth chasing down.
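
For what it's worth, the two estimates can be eyeballed by hand with a
prepared statement (column list made up to match the sketch above; I
haven't checked whether EXPLAIN EXECUTE counts toward the plancache's
five-custom-plans threshold, so a few real EXECUTEs may be needed before
it will consider the generic plan):

PREPARE ins(int, int) AS INSERT INTO foo VALUES ($1, $2);
EXPLAIN EXECUTE ins(1, 1);  -- custom plan, built with the parameter values known
-- once the plancache has seen enough executions it compares the average
-- custom-plan cost against a generic plan; running EXPLAIN EXECUTE again
-- then shows whichever plan it settled on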

On the other hand, your patch does fix almost all of the 9.2.[012]
regression seen when using the following dynamic SQL trigger (instead of
RULES) to load the same test case.

CREATE OR REPLACE FUNCTION foo_insert_trigger()
RETURNS trigger AS $$
DECLARE
    tablename varchar(24);
BEGIN
    tablename := 'foo_' || new.partition;
    EXECUTE 'INSERT INTO ' || tablename || ' VALUES (($1).*)' USING NEW;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_insert_trigger
BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE foo_insert_trigger();
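
With that in place the routing is driven entirely by the partition column;
something like this (again with made-up column values) ends up in foo_42,
and the RETURN NULL keeps the parent table itself empty:

INSERT INTO foo VALUES (42, 1);  -- trigger redirects the row into foo_42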

Cheers,

Jeff

Attachment Content-Type Size
insert.pl text/x-perl 1.6 KB
rules_test.sql text/x-sql 19.0 KB
