pgbench — run a benchmark test on PostgreSQL
pgbench is a simple program for running benchmark tests on PostgreSQL. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second). By default, pgbench tests a scenario that is loosely based on TPC-B, involving five SELECT, UPDATE, and INSERT commands per transaction. However, it is easy to test other cases by writing your own transaction script files.
Typical output from pgbench looks like:
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
tps = 85.184871 (including connections establishing)
tps = 85.296346 (excluding connections establishing)
The first six lines report some of the most important parameter settings. The next line reports the number of transactions completed and intended (the latter being just the product of number of clients and number of transactions per client); these will be equal unless the run failed before completion. (In
-T mode, only the actual number of transactions is printed.) The last two lines report the number of transactions per second, figured with and without counting the time to start database sessions.
The default TPC-B-like transaction test requires specific tables to be set up beforehand. pgbench should be invoked with the
-i (initialize) option to create and populate these tables. (When you are testing a custom script, you don't need this step, but will instead need to do whatever setup your test needs.) Initialization looks like:
pgbench -i [ other-options ] dbname
where dbname is the name of the already-created database to test in. (You may also need -h, -p, and/or -U options to specify how to connect to the database server.)
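For example, a hypothetical initialization against a remote server might look like this (host, user, and database names are placeholders):

# Initialize the pgbench tables in database "mydb" on a remote server.
# Host, port, user, and database names are illustrative placeholders.
pgbench -i -h db.example.com -p 5432 -U benchuser mydb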
pgbench -i creates four tables pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers, destroying any existing tables of these names. Be very careful to use another database if you have tables having these names!
At the default “scale factor” of 1, the tables initially contain this many rows:
table                   # of rows
---------------------------------
pgbench_branches        1
pgbench_tellers         10
pgbench_accounts        100000
pgbench_history         0
You can (and, for most purposes, probably should) increase the number of rows by using the
-s (scale factor) option. The
-F (fillfactor) option might also be used at this point.
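For instance, a hypothetical initialization at one hundred times the default size, with a looser fillfactor (the database name is a placeholder):

# Scale factor 100 creates 10,000,000 rows in pgbench_accounts;
# fillfactor 90 leaves 10% free space in each table page.
pgbench -i -s 100 -F 90 mydb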
Once you have done the necessary setup, you can run your benchmark with a command that doesn't include -i, that is

pgbench [ options ] dbname
In nearly all cases, you'll need some options to make a useful test. The most important options are
-c (number of clients),
-t (number of transactions),
-T (time limit), and
-f (specify a custom script file). See below for a full list.
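As an illustration, a run with ten clients spread over two threads for sixty seconds could be started like this (the database name is a placeholder):

# 10 concurrent clients, 2 worker threads, 60-second time limit.
pgbench -c 10 -j 2 -T 60 mydb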
The following is divided into three subsections. Different options are used during database initialization and while running benchmarks, but some options are useful in both cases.
pgbench accepts the following command-line initialization arguments:
-i
--initialize
Required to invoke initialization mode.
-I init_steps
--init-steps=init_steps
Perform just a selected set of the normal initialization steps. init_steps specifies the initialization steps to be performed, using one character per step. Each step is invoked in the specified order. The default is dtgvp. The available steps are (see the example after this list):
d (Drop)
Drop any existing pgbench tables.
t (create Tables)
Create the tables used by the standard pgbench scenario, namely pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.
g (Generate data)
Generate data and load it into the standard tables, replacing any data already present.
v (Vacuum)
Invoke VACUUM on the standard tables.
p (create Primary keys)
Create primary key indexes on the standard tables.
f (create Foreign keys)
Create foreign key constraints between the standard tables. (Note that this step is not performed by default.)
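For example, this hypothetical invocation drops and recreates the tables, loads data, and builds primary and foreign keys, but skips the vacuum step:

# Steps: d(rop), t(ables), g(enerate), p(rimary keys), f(oreign keys); no "v".
pgbench -i -I dtgpf mydb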
-F fillfactor
--fillfactor=fillfactor
Create the pgbench_accounts, pgbench_tellers and pgbench_branches tables with the given fillfactor. Default is 100.
-n
--no-vacuum
Perform no vacuuming during initialization. (This option suppresses the v initialization step, even if it was specified in -I.)
-q
--quiet
Switch logging to quiet mode, producing only one progress message per 5 seconds. The default logging prints one message every 100,000 rows, which often outputs many lines per second (especially on good hardware).
-s scale_factor
--scale=scale_factor
Multiply the number of rows generated by the scale factor. For example, -s 100 will create 10,000,000 rows in the pgbench_accounts table. Default is 1. When the scale is 20,000 or larger, the columns used to hold account identifiers (aid columns) will switch to using larger integers (bigint), in order to be big enough to hold the range of account identifiers.
--foreign-keys
Create foreign key constraints between the standard tables. (This option adds the f step to the initialization step sequence, if it is not already present.)
--index-tablespace=index_tablespace
Create indexes in the specified tablespace, rather than the default tablespace.
--tablespace=tablespace
Create tables in the specified tablespace, rather than the default tablespace.
--unlogged-tables
Create all tables as unlogged tables, rather than permanent tables.
pgbench accepts the following command-line benchmarking arguments:
-b scriptname[@weight]
--builtin=scriptname[@weight]
Add the specified built-in script to the list of scripts to be executed. Available built-in scripts are: tpcb-like, simple-update and select-only. Unambiguous prefixes of built-in names are accepted. With the special name list, show the list of built-in scripts and exit immediately.
Optionally, write an integer weight after
@ to adjust the probability of selecting this script versus other ones. The default weight is 1. See below for details.
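For instance, to see the available built-in scripts without running anything:

# Print the list of built-in scripts and exit.
pgbench -b list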
-c clients
--client=clients
Number of clients simulated, that is, number of concurrent database sessions. Default is 1.
-C
--connect
Establish a new connection for each transaction, rather than doing it just once per client session. This is useful to measure the connection overhead.
-d
--debug
Print debugging output.
-D varname=value
--define=varname=value
Define a variable for use by a custom script (see below). Multiple -D options are allowed.
-f filename[@weight]
--file=filename[@weight]
Add a transaction script read from filename to the list of scripts to be executed.
Optionally, write an integer weight after
@ to adjust the probability of selecting this script versus other ones. The default weight is 1. (To use a script file name that includes an
@ character, append a weight so that there is no ambiguity, for example
filen@me@1.) See below for details.
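As an example, this hypothetical command runs a custom script for about 10% of transactions and the built-in select-only script for the remaining 90% (script and database names are placeholders):

# custom.sql has weight 1, select-only has weight 9: roughly a 10%/90% mix.
pgbench -f custom.sql@1 -b select-only@9 -c 4 -T 30 mydb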
-j threads
--jobs=threads
Number of worker threads within pgbench. Using more than one thread can be helpful on multi-CPU machines. Clients are distributed as evenly as possible among available threads. Default is 1.
-l
--log
Write information about each transaction to a log file. See below for details.
-L limit
--latency-limit=limit
Transactions that last more than limit milliseconds are counted and reported separately, as late.
When throttling is used (
--rate=...), transactions that lag behind schedule by more than
limit ms, and thus have no hope of meeting the latency limit, are not sent to the server at all. They are counted and reported separately as skipped.
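Putting the two together, a hypothetical throttled run that skips hopelessly late transactions could look like:

# Target 200 transactions/second; count transactions over 10 ms as late,
# and skip those already more than 10 ms behind schedule.
pgbench -R 200 -L 10 -c 8 -T 120 mydb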
-M querymode
--protocol=querymode
Protocol to use for submitting queries to the server:
simple: use simple query protocol.
extended: use extended query protocol.
prepared: use extended query protocol with prepared statements.
The default is simple query protocol. (See Chapter 53 for more information.)
-n
--no-vacuum
Perform no vacuuming before running the test. This option is necessary if you are running a custom test scenario that does not include the standard tables pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.
-N
--skip-some-updates
Run built-in simple-update script. Shorthand for -b simple-update.
-P sec
--progress=sec
Show progress report every
sec seconds. The report includes the time since the beginning of the run, the TPS since the last report, and the transaction latency average and standard deviation since the last report. Under throttling (
-R), the latency is computed with respect to the transaction scheduled start time, not the actual transaction beginning time, thus it also includes the average schedule lag time.
-r
--report-latencies
Report the average per-statement latency (execution time from the perspective of the client) of each command after the benchmark finishes. See below for details.
-R rate
--rate=rate
Execute transactions targeting the specified rate instead of running as fast as possible (the default). The rate is given in transactions per second. If the targeted rate is above the maximum possible rate, the rate limit won't impact the results.
The rate is targeted by starting transactions along a Poisson-distributed schedule time line. The expected start time schedule moves forward based on when the client first started, not when the previous transaction ended. That approach means that when transactions go past their original scheduled end time, it is possible for later ones to catch up again.
When throttling is active, the transaction latency reported at the end of the run is calculated from the scheduled start times, so it includes the time each transaction had to wait for the previous transaction to finish. The wait time is called the schedule lag time, and its average and maximum are also reported separately. The transaction latency with respect to the actual transaction start time, i.e., the time spent executing the transaction in the database, can be computed by subtracting the schedule lag time from the reported latency.
When --latency-limit is used together with
--rate, a transaction can lag behind so much that it is already over the latency limit when the previous transaction ends, because the latency is calculated from the scheduled start time. Such transactions are not sent to the server, but are skipped altogether and counted separately.
A high schedule lag time is an indication that the system cannot process transactions at the specified rate, with the chosen number of clients and threads. When the average transaction execution time is longer than the scheduled interval between each transaction, each successive transaction will fall further behind, and the schedule lag time will keep increasing the longer the test run is. When that happens, you will have to reduce the specified transaction rate.
-s scale_factor
--scale=scale_factor
Report the specified scale factor in pgbench's output. With the built-in tests, this is not necessary; the correct scale factor will be detected by counting the number of rows in the
pgbench_branches table. However, when testing only custom benchmarks (
-f option), the scale factor will be reported as 1 unless this option is used.
-S
--select-only
Run built-in select-only script. Shorthand for -b select-only.
-t transactions
--transactions=transactions
Number of transactions each client runs. Default is 10.
-T seconds
--time=seconds
Run the test for this many seconds, rather than a fixed number of transactions per client. -t and -T are mutually exclusive.
-v
--vacuum-all
Vacuum all four standard tables before running the test. With neither -n nor -v, pgbench will vacuum the pgbench_tellers and pgbench_branches tables, and will truncate pgbench_history.
--aggregate-interval=seconds
Length of aggregation interval (in seconds). May be used only with the -l option. With this option, the log contains per-interval summary data, as described below.
--log-prefix=prefix
Set the filename prefix for the log files created by --log. The default is pgbench_log.
--progress-timestamp
When showing progress (option
-P), use a timestamp (Unix epoch) instead of the number of seconds since the beginning of the run. The unit is in seconds, with millisecond precision after the dot. This helps compare logs generated by various tools.
--random-seed=SEED
Set random generator seed. Seeds the system random number generator, which then produces a sequence of initial generator states, one for each thread. Values for
SEED may be:
time (the default, the seed is based on the current time),
rand (use a strong random source, failing if none is available), or an unsigned decimal integer value. The random generator is invoked explicitly from a pgbench script (
random... functions) or implicitly (for instance option
--rate uses it to schedule transactions). When explicitly set, the value used for seeding is shown on the terminal. Any value allowed for
SEED may also be provided through the environment variable
PGBENCH_RANDOM_SEED. To ensure that the provided seed impacts all possible uses, put this option first or use the environment variable.
Setting the seed explicitly allows one to reproduce a pgbench run exactly, as far as random numbers are concerned. As the random state is managed per thread, this means the exact same pgbench run for an identical invocation if there is one client per thread and there are no external or data dependencies. From a statistical viewpoint reproducing runs exactly is a bad idea because it can hide the performance variability or improve performance unduly, e.g., by hitting the same pages as a previous run. However, it may also be of great help for debugging, for instance re-running a tricky case which leads to an error. Use wisely.
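For example, the following hypothetical pair of invocations (one client per thread, fixed seed) should draw exactly the same random values in both runs:

# Same seed, one client per thread: the random sequences are reproducible.
pgbench --random-seed=12345 -c 1 -j 1 -t 1000 mydb
pgbench --random-seed=12345 -c 1 -j 1 -t 1000 mydb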
--sampling-rate=rate
Sampling rate, used when writing data into the log, to reduce the amount of log generated. If this option is given, only the specified fraction of transactions are logged. 1.0 means all transactions will be logged, 0.05 means only 5% of the transactions will be logged.
Remember to take the sampling rate into account when processing the log file. For example, when computing TPS values, you need to multiply the numbers accordingly (e.g., with 0.01 sample rate, you'll only get 1/100 of the actual TPS).
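For instance, a hypothetical run logging a 1% sample, whose per-transaction counts must then be multiplied by 100 when estimating TPS:

# Log only ~1% of transactions; scale counts derived from the log by 100.
pgbench -l --sampling-rate=0.01 -c 10 -T 300 mydb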
pgbench accepts the following command-line common arguments:
-h hostname
--host=hostname
The database server's host name

-p port
--port=port
The database server's port number

-U login
--username=login
The user name to connect as
-V
--version
Print the pgbench version and exit.
-?
--help
Show help about pgbench command line arguments, and exit.
pgbench executes test scripts chosen randomly from a specified list. The scripts may include built-in scripts specified with
-b and user-provided scripts specified with
-f. Each script may be given a relative weight specified after an
@ so as to change its selection probability. The default weight is
1. Scripts with a weight of
0 are ignored.
The default built-in transaction script (also invoked with
-b tpcb-like) issues seven commands per transaction over randomly chosen aid, tid, bid and delta. The scenario is inspired by the TPC-B benchmark, but is not actually TPC-B, hence the name.
1. BEGIN;
2. UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
3. SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
4. UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
5. UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
6. INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
7. END;
If you select the
simple-update built-in (also
-N), steps 4 and 5 aren't included in the transaction. This will avoid update contention on these tables, but it makes the test case even less like TPC-B.
If you select the
select-only built-in (also
-S), only the
SELECT is issued.
pgbench has support for running custom benchmark scenarios by replacing the default transaction script (described above) with a transaction script read from a file (
-f option). In this case a “transaction” counts as one execution of a script file.
A script file contains one or more SQL commands terminated by semicolons. Empty lines and lines beginning with
-- are ignored. Script files can also contain “meta commands”, which are interpreted by pgbench itself, as described below.
Before PostgreSQL 9.6, SQL commands in script files were terminated by newlines, and so they could not be continued across lines. Now a semicolon is required to separate consecutive SQL commands (though a SQL command does not need one if it is followed by a meta command). If you need to create a script file that works with both old and new versions of pgbench, be sure to write each SQL command on a single line ending with a semicolon.
There is a simple variable-substitution facility for script files. Variable names must consist of letters (including non-Latin letters), digits, and underscores, with the first character not being a digit. Variables can be set by the command-line
-D option, explained above, or by the meta commands explained below. In addition to any variables preset by
-D command-line options, there are a few variables that are preset automatically, listed in Table 242. A value specified for these variables using
-D takes precedence over the automatic presets. Once set, a variable's value can be inserted into a SQL command by writing :variablename. When running more than one client session, each session has its own set of variables.
Table 242. Automatic Variables

Variable      Description
------------------------------------------------------------------
client_id     unique number identifying the client session (starts from zero)
default_seed  seed used in hash functions by default
random_seed   random generator seed (unless overwritten with -D)
scale         current scale factor
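As a hypothetical illustration, the following custom script (call it accounts.sql; the file name and the :range variable are placeholders) relies on a variable supplied with -D:

-- accounts.sql: look up a random account; :range is supplied via -D.
\set aid random(1, :range)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

It could then be run with, for example, pgbench -f accounts.sql -D range=100000 mydb.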
Script file meta commands begin with a backslash (
\) and normally extend to the end of the line, although they can be continued to additional lines by writing backslash-return. Arguments to a meta command are separated by white space. These meta commands are supported:
\if expression
\elif expression
\else
\endif
This group of commands implements nestable conditional blocks, similarly to psql's \if expression. Conditional expressions are identical to those with \set, with non-zero values interpreted as true.
\set varname expression
Sets variable varname to a value calculated from expression. The expression may contain the NULL constant, Boolean constants TRUE and FALSE, integer constants such as 5432, double constants such as 3.14159, references to variables :variablename, operators with their usual SQL precedence and associativity, function calls, SQL CASE generic conditional expressions and parentheses.
Functions and most operators return NULL on NULL input. For conditional purposes, non-zero numerical values are TRUE, zero numerical values and NULL are FALSE. When no final ELSE clause is provided to a CASE, the default value is NULL.
Examples:

\set ntellers 10 * :scale
\set aid (1021 * random(1, 100000 * :scale)) % \
         (100000 * :scale) + 1
\set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END
\sleep number [ us | ms | s ]
Causes script execution to sleep for the specified duration in microseconds (us), milliseconds (ms) or seconds (s). If the unit is omitted then seconds are the default. number can be either an integer constant or a :variablename reference to a variable having an integer value.

Example:
\sleep 10 ms
\setshell varname command [ argument ... ]
Sets variable varname to the result of the shell command command with the given argument(s). The command must return an integer value through its standard output. command and each argument can be either a text constant or a :variablename reference to a variable. If you want to use an argument starting with a colon, write an additional colon at the beginning of argument.

Example:

\setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon
\shell command [ argument ... ]
Same as \setshell, but the result of the command is discarded.

Example:

\shell command literal_argument :variable ::literal_starting_with_colon
The arithmetic, bitwise, comparison and logical operators listed in Table 243 are built into pgbench and may be used in expressions appearing in \set.
Table 243. pgbench Operators (by increasing precedence)

Operator   Description
-------------------------------------------------
<>         is not equal
!=         is not equal
<=         lower or equal
>=         greater or equal
|          integer bitwise OR
#          integer bitwise XOR
&          integer bitwise AND
~          integer bitwise NOT
<<         integer bitwise shift left
>>         integer bitwise shift right
/          division (integer truncates the results)
The functions listed in Table 244 are built into pgbench and may be used in expressions appearing in \set.
Table 244. pgbench Functions

Function                               Return Type                  Description
-------------------------------------------------------------------------------------------------
double(i)                              double                       cast to double
greatest(a [, ... ])                   double if any a is double,   largest value among arguments
                                       else integer
int(x)                                 integer                      cast to int
least(a [, ... ])                      double if any a is double,   smallest value among arguments
                                       else integer
pi()                                   double                       value of the constant PI
random(lb, ub)                         integer                      uniformly-distributed random integer in [lb, ub]
random_exponential(lb, ub, parameter)  integer                      exponentially-distributed random integer in [lb, ub], see below
random_gaussian(lb, ub, parameter)     integer                      Gaussian-distributed random integer in [lb, ub], see below
random_zipfian(lb, ub, parameter)      integer                      Zipfian-distributed random integer in [lb, ub], see below
The random function generates values using a uniform distribution, that is all the values are drawn within the specified range with equal probability. The random_exponential, random_gaussian and random_zipfian functions require an additional double parameter which determines the precise shape of the distribution.
For an exponential distribution, parameter controls the distribution by truncating a quickly-decreasing exponential distribution at parameter, and then projecting onto integers between the bounds. To be precise, with

f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter))

then value i between min and max inclusive is drawn with probability: f(i) - f(i + 1).
Intuitively, the larger the
parameter, the more frequently values close to
min are accessed, and the less frequently values close to
max are accessed. The closer to 0
parameter is, the flatter (more uniform) the access distribution. A crude approximation of the distribution is that the most frequent 1% values in the range, close to
min, are drawn
parameter% of the time. The
parameter value must be strictly positive.
For a Gaussian distribution, the interval is mapped onto a standard normal distribution (the classical bell-shaped Gaussian curve) truncated at
-parameter on the left and
+parameter on the right. Values in the middle of the interval are more likely to be drawn. To be precise, if PHI(x) is the cumulative distribution function of the standard normal distribution, with mean mu defined as (max + min) / 2.0, with

f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
       (2.0 * PHI(parameter) - 1)

then value i between min and max inclusive is drawn with probability: f(i + 0.5) - f(i - 0.5). Intuitively, the larger the
parameter, the more frequently values close to the middle of the interval are drawn, and the less frequently values close to the
max bounds. About 67% of values are drawn from the middle
1.0 / parameter, that is a relative
0.5 / parameter around the mean, and 95% in the middle
2.0 / parameter, that is a relative
1.0 / parameter around the mean; for instance, if
parameter is 4.0, 67% of values are drawn from the middle quarter (1.0 / 4.0) of the interval (i.e., from
3.0 / 8.0 to
5.0 / 8.0) and 95% from the middle half (
2.0 / 4.0) of the interval (second and third quartiles). The minimum
parameter is 2.0 for performance of the Box-Muller transform.
random_zipfian generates an approximated bounded Zipfian distribution. For
parameter in (0, 1), an approximated algorithm is taken from "Quickly Generating Billion-Record Synthetic Databases", Jim Gray et al, SIGMOD 1994. For
parameter in (1, 1000), a rejection method is used, based on "Non-Uniform Random Variate Generation", Luc Devroye, p. 550-551, Springer 1986. The distribution is not defined when the parameter's value is 1.0. The function's performance is poor for parameter values close to and above 1.0 and on a small range.
parameter defines how skewed the distribution is. The larger the
parameter, the more frequently values closer to the beginning of the interval are drawn. The closer to 0
parameter is, the flatter (more uniform) the output distribution. The distribution is such that, assuming the range starts from 1, the ratio of the probability of drawing
k versus drawing k+1 is ((k+1)/k)**parameter. For example, random_zipfian(1, ..., 2.5) produces the value 1 about (2/1)**2.5 = 5.66 times more frequently than 2, which itself is produced (3/2)**2.5 = 2.76 times more frequently than 3, and so on.
Hash functions hash, hash_murmur2 and hash_fnv1a accept an input value and an optional seed parameter. In case the seed isn't provided the value of
:default_seed is used, which is initialized randomly unless set by the command-line
-D option. Hash functions can be used to scatter the distribution of random functions such as
random_exponential. For instance, the following pgbench script simulates possible real world workload typical for social media and blogging platforms where few accounts generate excessive load:
\set r random_zipfian(0, 100000000, 1.07)
\set k abs(hash(:r)) % 1000000
In some cases several distinct distributions are needed which don't correlate with each other, and this is when the explicit seed parameter comes in handy:
\set k1 abs(hash(:r, :default_seed + 123)) % 1000000
\set k2 abs(hash(:r, :default_seed + 321)) % 1000000
As an example, the full definition of the built-in TPC-B-like transaction is:
\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
This script allows each iteration of the transaction to reference different, randomly-chosen rows. (This example also shows why it's important for each client session to have its own variables — otherwise they'd not be independently touching different rows.)
With the -l option (but without the --aggregate-interval option), pgbench writes information about each transaction to a log file. The log file will be named prefix.nnn, where prefix defaults to pgbench_log, and nnn is the PID of the pgbench process. The prefix can be changed by using the
--log-prefix option. If the
-j option is 2 or higher, so that there are multiple worker threads, each will have its own log file. The first worker will use the same name for its log file as in the standard single worker case. The additional log files for the other workers will be named prefix.nnn.mmm, where mmm is a sequential number for each worker starting with 1.
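For example, a hypothetical run with -j 3 and a pgbench PID of 1234 would produce log files named:

pgbench_log.1234
pgbench_log.1234.1
pgbench_log.1234.2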
The format of the log is:

client_id transaction_no time script_no time_epoch time_us [ schedule_lag ]

where
client_id indicates which client session ran the transaction,
transaction_no counts how many transactions have been run by that session,
time is the total elapsed transaction time in microseconds,
script_no identifies which script file was used (useful when multiple scripts were specified with -f or -b), and time_epoch/time_us are a Unix-epoch time stamp and an offset in microseconds (suitable for creating an ISO 8601 time stamp with fractional seconds) showing when the transaction completed. The
schedule_lag field is the difference between the transaction's scheduled start time, and the time it actually started, in microseconds. It is only present when the
--rate option is used. When both --rate and --latency-limit are used, the time for a skipped transaction will be reported as skipped.
Here is a snippet of a log file generated in a single-client run:
0 199 2241 0 1175850568 995598
0 200 2465 0 1175850568 998079
0 201 2513 0 1175850569 608
0 202 2038 0 1175850569 2663
Another example with --rate=100 and --latency-limit=5 (note the additional schedule_lag column):
0 81 4621 0 1412881037 912698 3005
0 82 6173 0 1412881037 914578 4304
0 83 skipped 0 1412881037 914578 5217
0 83 skipped 0 1412881037 914578 5099
0 83 4722 0 1412881037 916203 3108
0 84 4142 0 1412881037 918023 2333
0 85 2465 0 1412881037 919759 740
In this example, transaction 82 was late, because its latency (6.173 ms) was over the 5 ms limit. The next two transactions were skipped, because they were already late before they were even started.
When running a long test on hardware that can handle a lot of transactions, the log files can become very large. The
--sampling-rate option can be used to log only a random sample of transactions.
With the --aggregate-interval option, a different format is used for the log files:

interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency [ sum_lag sum_lag_2 min_lag max_lag [ skipped ] ]

where
interval_start is the start of the interval (as a Unix epoch time stamp),
num_transactions is the number of transactions within the interval,
sum_latency is the sum of the transaction latencies within the interval,
sum_latency_2 is the sum of squares of the transaction latencies within the interval,
min_latency is the minimum latency within the interval, and
max_latency is the maximum latency within the interval. The next fields, sum_lag, sum_lag_2, min_lag, and max_lag, are only present if the
--rate option is used. They provide statistics about the time each transaction had to wait for the previous one to finish, i.e., the difference between each transaction's scheduled start time and the time it actually started. The very last field,
skipped, is only present if the
--latency-limit option is used, too. It counts the number of transactions skipped because they would have started too late. Each transaction is counted in the interval when it was committed.
Here is some example output:
1345828501 5601 1542744 483552416 61 2573
1345828503 7884 1979812 565806736 60 1479
1345828505 7208 1979422 567277552 59 1391
1345828507 7685 1980268 569784714 60 1398
1345828509 7073 1979779 573489941 236 1411
Notice that while the plain (unaggregated) log file shows which script was used for each transaction, the aggregated log does not. Therefore if you need per-script data, you need to aggregate the data on your own.
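As a rough sketch (assuming the aggregated format without the optional lag and skipped fields, and a hypothetical log file name), the average latency per interval in milliseconds can be computed like this:

# Average latency per interval = sum_latency / num_transactions,
# converted from microseconds to milliseconds.
awk '{ printf "%s %.3f ms\n", $1, ($3 / $2) / 1000 }' pgbench_log.1234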
With the -r option, pgbench collects the elapsed transaction time of each statement executed by every client. It then reports an average of those values, referred to as the latency for each statement, after the benchmark has finished.
For the default script, the output will look similar to this:
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
latency average = 15.844 ms
latency stddev = 2.715 ms
tps = 618.764555 (including connections establishing)
tps = 622.977698 (excluding connections establishing)
statement latencies in milliseconds:
        0.002  \set aid random(1, 100000 * :scale)
        0.005  \set bid random(1, 1 * :scale)
        0.002  \set tid random(1, 10 * :scale)
        0.001  \set delta random(-5000, 5000)
        0.326  BEGIN;
        0.603  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
        0.454  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
        5.528  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
        7.335  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
        0.371  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
        1.212  END;
If multiple script files are specified, the averages are reported separately for each script file.
Note that collecting the additional timing information needed for per-statement latency computation adds some overhead. This will slow average execution speed and lower the computed TPS. The amount of slowdown varies significantly depending on platform and hardware. Comparing average TPS values with and without latency reporting enabled is a good way to measure if the timing overhead is significant.
It is very easy to use pgbench to produce completely meaningless numbers. Here are some guidelines to help you get useful results.
In the first place, never believe any test that runs for only a few seconds. Use the
-T option to make the run last at least a few minutes, so as to average out noise. In some cases you could need hours to get numbers that are reproducible. It's a good idea to try the test run a few times, to find out if your numbers are reproducible or not.
For the default TPC-B-like test scenario, the initialization scale factor (
-s) should be at least as large as the largest number of clients you intend to test (
-c); else you'll mostly be measuring update contention. There are only
-s rows in the
pgbench_branches table, and every transaction wants to update one of them, so
-c values in excess of
-s will undoubtedly result in lots of transactions blocked waiting for other transactions.
The default test scenario is also quite sensitive to how long it's been since the tables were initialized: accumulation of dead rows and dead space in the tables changes the results. To understand the results you must keep track of the total number of updates and when vacuuming happens. If autovacuum is enabled it can result in unpredictable changes in measured performance.
A limitation of pgbench is that it can itself become the bottleneck when trying to test a large number of client sessions. This can be alleviated by running pgbench on a different machine from the database server, although low network latency will be essential. It might even be useful to run several pgbench instances concurrently, on several client machines, against the same database server.
If untrusted users have access to a database that has not adopted a secure schema usage pattern, do not run pgbench in that database. pgbench uses unqualified names and does not manipulate the search path.