| From: | Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> |
|---|---|
| To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
| Cc: | Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: extend pgbench expressions with functions |
| Date: | 2016-02-16 10:18:39 |
| Message-ID: | alpine.DEB.2.10.1602161052560.31368@sto |
| Lists: | pgsql-hackers |
Hello Robert,
>> Good point. One simple idea here would be to use a custom pgbench
>> script that has no SQL commands and just calculates the values of some
>> parameters to measure the impact without depending on the backend,
>> with a fixed number of transactions.
>
> Sure, we could do that. But whether it materially changes pgbench -S
> results, say, is a lot more important.
Indeed. Several runs on my laptop:

~ 400000-540000 tps with master, using:

    \set naccounts 100000 * :scale
    \setrandom aid 1 :naccounts

~ 430000-530000 tps with the full function patch, using the same script:

    \set naccounts 100000 * :scale
    \setrandom aid 1 :naccounts

~ 730000-890000 tps with the full function patch, using:

    \set aid random(1, 100000 * :scale)
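For reference, runs of this kind contain no SQL at all: the script file holds just the \set lines, and pgbench drives it with a fixed number of transactions, e.g. (file name and transaction count are illustrative):

    pgbench -n -f set-only.sql -t 2000000

With no SQL command in the script, pgbench exercises only its own variable and expression handling, independently of the backend.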
Performance is pretty similar with the same script. The real cost is
variable management; avoiding some of it is a win.
However, as you suggest, the tps impact even with -M prepared -S is
negligible, because the internal scripting time in pgbench is much smaller
than the time spent on the actual connections and queries.
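The select-only comparison is the builtin test, i.e. something like the following (the duration is illustrative):

    pgbench -M prepared -S -T 30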
--
Fabien.