| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | "Michael J(dot) Baars" <mjbaars1977(dot)pgsql-hackers(at)cyberfiber(dot)eu> |
| Cc: | pgsql-hackers(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Postgresql network transmission overhead |
| Date: | 2021-02-26 15:11:57 |
| Message-ID: | 249934.1614352317@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
"Michael J. Baars" <mjbaars1977(dot)pgsql-hackers(at)cyberfiber(dot)eu> writes:
> In the logfile you can see that the effective user data being written is only 913kb, while the actual amount transmitted over the network is 7946kb when writing
> one row at a time. That is an overhead of 770%!
So ... don't write one row at a time.
You haven't shown any details, but I imagine that most of the overhead
comes from per-query stuff like the RowDescription metadata. The intended
usage pattern for bulk operations is that there's only one RowDescription
message for a whole lot of data rows. There might be reasons you want to
work a row at a time, but if your concern is to minimize network traffic,
don't do that.
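As a minimal sketch of that bulk pattern with libpq (a hypothetical table my_table(id int, val text) and a made-up connection string are assumed here, not taken from the thread), a single COPY carries the per-query protocol messages once for the whole batch rather than once per row:

```c
/*
 * Illustrative sketch only: stream many rows through one COPY FROM STDIN
 * instead of issuing one INSERT per row, so per-query protocol traffic
 * is paid once for the batch rather than once per row.
 *
 * Assumes:  CREATE TABLE my_table (id int, val text);
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* connection string is made up */
    PGresult *res;
    char      line[64];
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* One COPY command for the entire batch */
    res = PQexec(conn, "COPY my_table (id, val) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* Stream all rows inside the single COPY; no per-row query overhead */
    for (i = 0; i < 10000; i++)
    {
        int len = snprintf(line, sizeof(line), "%d\trow %d\n", i, i);

        if (PQputCopyData(conn, line, len) != 1)
        {
            fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(conn));
            break;
        }
    }

    /* Finish the COPY and check the final command status */
    if (PQputCopyEnd(conn, NULL) == 1)
    {
        res = PQgetResult(conn);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY did not complete: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

A multi-row INSERT ... VALUES (...), (...), ... statement, or fetching results in large batches instead of one row per query, gets much the same effect without COPY.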
regards, tom lane