From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, Mark Wong <mark(at)2ndQuadrant(dot)com>
Cc: Alvaro Hernandez <aht(at)ongres(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgbench - allow to specify scale as a size
Date: 2018-03-03 19:37:03
Message-ID: e046f3a3-dda7-164c-f0bd-75eacedea83f@2ndquadrant.com
Lists: pgsql-hackers
On 2/20/18 05:06, Fabien COELHO wrote:
>> Now the overhead is really 60-65%. Although the specification is
>> unambiguous, we still need some maths to know whether it fits in
>> buffers or memory... The point of Karel's regression is to take this
>> into account.
>>
>> Also, whether this option would be more admissible to Tom is still an open
>> question. Tom?
>
> Here is a version with this approach: the documentation talks about
> "actual data size, without overheads", and points out that storage
> overheads are typically an additional 65%.
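(For illustration only: the back-of-the-envelope conversion implied above, from a target on-disk size to a scale factor, might be sketched as below. The 10 MB of raw data per scale unit is a hypothetical placeholder, not a figure from the thread; only the ~65% overhead comes from the discussion.)

```python
import math

DATA_MB_PER_SCALE = 10.0   # hypothetical raw data per scale unit (assumption)
OVERHEAD_FACTOR = 1.65     # storage overhead of ~65%, as discussed above

def scale_for_target(target_mb: float) -> int:
    """Smallest scale whose estimated on-disk size reaches target_mb."""
    mb_per_scale = DATA_MB_PER_SCALE * OVERHEAD_FACTOR
    return max(1, math.ceil(target_mb / mb_per_scale))

print(scale_for_target(1024))  # scale estimate for a ~1 GB target
```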
I think when deciding on a size for a test database for benchmarking,
you want to size it relative to RAM or other storage layers. So a
feature that lets you ask for a database of size N but actually creates
one nowhere near N seems pretty useless for that.
(Also, we have, for better or worse, settled on a convention for byte
unit prefixes in guc.c. Let's not introduce another one.)
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services