Re: Idea to enhance pgbench by more modes to generate data (multi-TXNs, UNNEST, COPY BINARY)

From: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>
To: Boris Mironov <boris_mironov(at)outlook(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Idea to enhance pgbench by more modes to generate data (multi-TXNs, UNNEST, COPY BINARY)
Date: 2025-11-13 10:17:15
Message-ID: CAExHW5vtdtd5QnFobxOhbVeh5jk3_61zYRxLqCXEZpO3jOeFDg@mail.gmail.com
Lists: pgsql-hackers

Hi Boris,

On Wed, Nov 12, 2025 at 3:25 AM Boris Mironov <boris_mironov(at)outlook(dot)com> wrote:
>
> Hello hackers,
>
> For some of my specific hardware tests I needed to generate databases well beyond RAM size, so I turned to the pgbench tool and its two default modes (client- and server-side generation) for TPC-B-like tests. When I use a "scale" factor in the range of a few thousand (e.g. 3000 - 5000), the data generation phase takes quite some time. I saw this as an opportunity to prove or disprove a few hypotheses:
>
> - will INSERT mode work faster if we commit once per "scale" unit and turn the single INSERT into a "for" loop, committing the 3 tables at the end of each iteration
> - will "INSERT .. SELECT FROM unnest" be faster than "INSERT .. SELECT FROM generate_series"
> - will BINARY mode work faster than TEXT mode even though we send much more data
> and so on
>
> As a result of my experiments I produced a sizeable patch for the pgbench utility and thought that it might be of interest to more people than just me. Therefore I'm sending a draft version of it as a diff against the current development tree on GitHub. As of November 11, 2025 it merges cleanly with the main branch of the project on GitHub.
>
> Spoiler alert: "COPY FROM BINARY" is significantly faster than the current "COPY FROM TEXT".
>
> I would be happy to polish it if there is interest in such a change.
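
To make sure I'm reading your first bullet correctly, here is a rough libpq-level
sketch (mine, not taken from your patch) of chunked server-side generation with one
commit per "scale" unit. The table and column names are pgbench's own and the 100000
accounts per scale unit matches pgbench's naccounts; the empty connection string,
chunk layout and error handling are just illustrative assumptions:

/*
 * A rough sketch, not taken from the posted patch: server-side
 * generation of pgbench_accounts in chunks of one "scale" unit
 * (100000 rows), committing after each chunk instead of loading
 * everything in a single transaction.
 */
#include <stdio.h>
#include <libpq-fe.h>

static int
exec_or_report(PGconn *conn, const char *sql)
{
    PGresult   *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        return 0;
    }
    PQclear(res);
    return 1;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");  /* connection settings from environment */
    int         scale = 3000;            /* e.g. one of the scales mentioned above */
    char        sql[512];

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    for (int s = 0; s < scale; s++)
    {
        /* one transaction per scale unit; bid is constant within the chunk */
        snprintf(sql, sizeof(sql),
                 "BEGIN;"
                 "INSERT INTO pgbench_accounts(aid, bid, abalance, filler)"
                 "  SELECT aid, %d, 0, ''"
                 "  FROM generate_series(%d, %d) AS aid;"
                 "COMMIT;",
                 s + 1, s * 100000 + 1, (s + 1) * 100000);
        if (!exec_or_report(conn, sql))
            break;
    }

    PQfinish(conn);
    return 0;
}

The unnest variant from your second bullet would, I assume, bind a prebuilt array and
select from unnest() instead of generate_series(); pgbench_branches and
pgbench_tellers are omitted here for brevity.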

Making pgbench data initialization faster at higher scales is
desirable, and the community might be willing to accept such a change;
running very large benchmarks is becoming common these days. However,
it's not clear what exactly you are proposing and what the performance
improvement is. Answering the following questions may help: Does your
patch implement all of the above methods? Do all of them provide a
performance improvement? If each of them performs better under certain
conditions, what are those conditions? Is there one method which
performs better than all the others? If so, which is it, and why not
implement just that method? What performance numbers are we looking
at? Can the methods which use batch commits also run those batches in
parallel?
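
And for reference on your third bullet, a minimal sketch (again mine, not from your
patch) of what a binary-format COPY of a single pgbench_accounts row has to put on
the wire with libpq: a fixed 19-byte header, then per tuple a 16-bit field count
followed by (length, value) pairs in network byte order, and a 16-bit -1 trailer.
The per-row framing is larger than the text format's, so the speedup you report
presumably comes from skipping the text input functions on the server side:

/*
 * A minimal sketch, not taken from the posted patch: send one
 * pgbench_accounts row through COPY ... FROM STDIN (FORMAT binary).
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static void
put_int32_field(char **p, int32_t value)
{
    uint32_t    len = htonl(4);
    uint32_t    val = htonl((uint32_t) value);

    memcpy(*p, &len, 4); *p += 4;
    memcpy(*p, &val, 4); *p += 4;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");  /* connection settings from environment */
    PGresult   *res;
    char        buf[256];
    char       *p = buf;
    uint32_t    zero = 0;
    uint16_t    nfields = htons(4);
    uint16_t    trailer = htons(0xFFFF); /* end-of-data marker: -1 */

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    res = PQexec(conn, "COPY pgbench_accounts FROM STDIN (FORMAT binary)");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY did not start: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* 11-byte signature, 4-byte flags, 4-byte header extension length */
    memcpy(p, "PGCOPY\n\377\r\n\0", 11); p += 11;
    memcpy(p, &zero, 4); p += 4;
    memcpy(p, &zero, 4); p += 4;

    /* one tuple: aid, bid, abalance as int4, filler as an empty value */
    memcpy(p, &nfields, 2); p += 2;
    put_int32_field(&p, 1);              /* aid */
    put_int32_field(&p, 1);              /* bid */
    put_int32_field(&p, 0);              /* abalance */
    memcpy(p, &zero, 4); p += 4;         /* filler: length 0, no data */

    memcpy(p, &trailer, 2); p += 2;

    if (PQputCopyData(conn, buf, (int) (p - buf)) != 1 ||
        PQputCopyEnd(conn, NULL) != 1)
        fprintf(stderr, "sending COPY data failed: %s", PQerrorMessage(conn));

    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}

In a real initializer the tuple section would of course be repeated many times per
PQputCopyData() call rather than sent one row at a time.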

--
Best Wishes,
Ashutosh Bapat
