Re: Performance problem in PLPgSQL

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marc Cousin <cousinmarc(at)gmail(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Fábio Telles Rodriguez <fabio(dot)telles(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance problem in PLPgSQL
Date: 2013-08-24 19:16:53
Message-ID: 7518.1377371813@sss.pgh.pa.us
Lists: pgsql-hackers

Marc Cousin <cousinmarc(at)gmail(dot)com> writes:
> On 23/08/2013 23:55, Tom Lane wrote:
>> My previous suggestion was to estimate planning cost as
>> 10 * (length(plan->rangetable) + 1)
>> but on reflection it ought to be scaled by one of the cpu cost constants,
>> so perhaps
>> 1000 * cpu_operator_cost * (length(plan->rangetable) + 1)
>> which'd mean a custom plan has to be estimated to save a minimum of
>> about 5 cost units (more if more than 1 table is used) before it'll
>> be chosen. I'm tempted to make the multiplier be 10000 not 1000,
>> but it seems better to be conservative about changing the behavior
>> until we see how well this works in practice.
>>
>> Objections, better ideas?

> No better idea as far as I'm concerned, of course :)

> But it is a bit tricky to understand what is going on when you get
> hit by it, and using even a rough approximation of the planning
> cost seems the most logical approach to me. So I'm all for this solution.

I've pushed a patch along these lines. I verified that it fixes your
original example, but perhaps you could try it on your real application?
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=005f583ba4e6d4d19b62959ef8e70a3da4d188a5

regards, tom lane
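
For concreteness, the heuristic under discussion can be sketched in C as
follows. This is a minimal standalone illustration of the proposed formula,
not the committed plancache.c code: the function name is hypothetical, and
cpu_operator_cost is hard-coded at its default value of 0.0025 rather than
read from the GUC machinery.

#include <stdio.h>

/* Default value of PostgreSQL's cpu_operator_cost GUC (assumed here). */
static const double cpu_operator_cost = 0.0025;

/*
 * Rough charge for the cost of planning a query, per the proposal above:
 * scale by the number of range-table entries so that queries touching
 * more relations are assumed to cost more to plan.
 */
static double
planning_cost_estimate(int rangetable_length)
{
    return 1000.0 * cpu_operator_cost * (rangetable_length + 1);
}

int
main(void)
{
    /* One table:    1000 * 0.0025 * (1 + 1) = 5 cost units.  */
    printf("1 table:  %.1f\n", planning_cost_estimate(1));
    /* Three tables: 1000 * 0.0025 * (3 + 1) = 10 cost units. */
    printf("3 tables: %.1f\n", planning_cost_estimate(3));
    return 0;
}

At the default cpu_operator_cost, a single-table query is thus charged
about 5 cost units of planning overhead, which is the minimum saving a
custom plan must be estimated to deliver before it is chosen over the
cached generic plan.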
