From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: extend pgbench expressions with functions
Date: 2016-02-16 06:55:14
Message-ID: CAB7nPqTYotNAQOOdwbBbQip1JjzjQVsjfsaVVoUogTe6ihR8Tg@mail.gmail.com
Lists: pgsql-hackers
On Tue, Feb 16, 2016 at 9:18 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I experimented with trying to do this and ran into a problem: where
> exactly would you store the evaluated arguments when you don't know
> how many of them there will be? And even if you did know how many of
> them there will be, wouldn't that mean that evalFunc or evaluateExpr
> would have to palloc a buffer of the correct size for each invocation?
> That's far more heavyweight than the current implementation, and
> minimizing CPU usage inside pgbench is a concern. It would be
> interesting to do some pgbench runs with this patch, or the final
> patch, and see what effect it has on the TPS numbers, if any, and I
> think we should. But the first concern is to minimize any negative
> impact, so let's talk about how to do that.
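For what it's worth, the fixed-cost alternative Robert alludes to can be sketched in isolation: evaluate each argument into a fixed-size on-stack array capped at some maximum arity, so no per-invocation allocation is needed. Everything below (Expr, evaluateExpr, MAX_FARGS, the function ids) is an illustrative stand-in, not pgbench's actual code:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_FARGS 16            /* hypothetical cap on function arity */

/* Illustrative expression node: a constant, or a function call. */
typedef struct Expr Expr;
struct Expr
{
    int     func;               /* -1 for a constant, else a function id */
    long    value;              /* constant value when func == -1 */
    int     nargs;
    Expr   *args[MAX_FARGS];
};

enum { FUNC_ADD = 0, FUNC_MAX = 1 };

/*
 * Evaluate recursively, storing argument values in a stack buffer of
 * MAX_FARGS slots instead of allocating one per invocation.
 * Returns 1 on success, 0 on failure.
 */
static int
evaluateExpr(const Expr *e, long *result)
{
    long    argv[MAX_FARGS];    /* fixed storage for evaluated arguments */
    int     i;

    if (e->func < 0)
    {
        *result = e->value;
        return 1;
    }

    if (e->nargs > MAX_FARGS)
        return 0;               /* too many arguments for the buffer */

    for (i = 0; i < e->nargs; i++)
        if (!evaluateExpr(e->args[i], &argv[i]))
            return 0;

    switch (e->func)
    {
        case FUNC_ADD:
        {
            long    sum = 0;

            for (i = 0; i < e->nargs; i++)
                sum += argv[i];
            *result = sum;
            return 1;
        }
        case FUNC_MAX:
        {
            long    m = argv[0];

            for (i = 1; i < e->nargs; i++)
                if (argv[i] > m)
                    m = argv[i];
            *result = m;
            return 1;
        }
    }
    return 0;
}
```

The trade-off is a hard cap on arity, but the evaluation cost per call stays flat, which matters when the goal is to keep pgbench's own CPU usage from distorting TPS numbers.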
Good point. One simple idea here would be to use a custom pgbench
script that contains no SQL commands and only calculates the values of
some variables, run for a fixed number of transactions, so as to
measure the impact of expression evaluation without depending on the
backend.
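Such a script could look like the following (a sketch assuming the
function syntax added by the patch under discussion; the file name and
variable names are arbitrary):

```
\set aid random(1, 100000)
\set delta abs(:aid - 50000) + 1
```

Then something like `pgbench -n -f expr-bench.sql -t 1000000` with and
without the patch applied, comparing the resulting TPS, would show the
overhead of the new evaluation path in isolation.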
--
Michael