From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: extend pgbench expressions with functions
Date: 2016-02-16 12:48:10
Message-ID: CA+TgmoZHKnMhy5LkdaheCUs3C5a-Y-DjNtgwHSn+UWE4hdx9Ng@mail.gmail.com
Lists: pgsql-hackers
On Tue, Feb 16, 2016 at 5:18 AM, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr> wrote:
>>> Good point. One simple idea here would be to use a custom pgbench
>>> script that has no SQL commands and just calculates the values of some
>>> parameters to measure the impact without depending on the backend,
>>> with a fixed number of transactions.
>>
>> Sure, we could do that. But whether it materially changes pgbench -S
>> results, say, is a lot more important.
>
>
> Indeed. Several runs on my laptop:
>
> ~ 400000-540000 tps with master using:
> \set naccounts 100000 * :scale
> \setrandom aid 1 :naccounts
>
> ~ 430000-530000 tps with full function patch using:
> \set naccounts 100000 * :scale
> \setrandom aid 1 :naccounts
>
> ~ 730000-890000 tps with full function patch using:
> \set aid random(1, 100000 * :scale)
>
> The performance is pretty similar on the same script. The real pain is
> variable management; avoiding some of it is a win.
Wow, that's pretty nice.
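For anyone wanting to repeat this, the approach Michael suggested upthread can be sketched roughly as follows: a custom script with no SQL commands at all, so pgbench measures only expression evaluation, run with a fixed number of transactions. This is a sketch, not the exact script Fabien ran; the file name is illustrative, and the random() syntax assumes the function patch is applied.

```shell
# Sketch: a pgbench custom script with no SQL commands, so only
# client-side expression evaluation is measured (no backend dependency).
# The random() function syntax assumes the pgbench function patch.
cat > set-only.sql <<'EOF'
\set aid random(1, 100000 * :scale)
EOF

# Run with a fixed number of transactions, e.g.:
#   pgbench -n -f set-only.sql -t 1000000 -s 100
```

With -t fixing the transaction count rather than -T fixing a duration, runs are directly comparable across patched and unpatched builds.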
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company