Re: [PATCH] pg_dump: lock tables in batches

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Aleksander Alekseev <aleksander(at)timescale(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [PATCH] pg_dump: lock tables in batches
Date: 2022-12-07 17:28:03
Message-ID: 4040032.1670434083@sss.pgh.pa.us
Lists: pgsql-hackers

Andres Freund <andres(at)anarazel(dot)de> writes:
> On 2022-12-07 10:44:33 -0500, Tom Lane wrote:
>> I have a strong sense of deja vu here. I'm pretty sure I experimented
>> with this idea last year and gave up on it. I don't recall exactly
>> why, but either it didn't show any meaningful performance improvement
>> for me or there was some actual downside (that I'm not remembering
>> right now).

> IIRC the cases we were looking at around 989596152 were CPU-bound workloads,
> rather than latency-bound workloads. It'd not be surprising to have cases
> where batching LOCKs helps latency-bound workloads, but not CPU-bound ones.

Yeah, perhaps. Anyway my main point is that I don't want to just assume
this is a win; I want to see some actual performance tests.
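[For concreteness, the batching under discussion amounts to something like the
following untested sketch; the function name, table list, and batch size are
illustrative, not taken from the patch:]

    /*
     * Sketch only: build one command string containing many LOCK TABLE
     * statements and send it in a single round trip, instead of one
     * PQexec() per table.  Assumes an already-open connection/transaction.
     */
    #include <stdio.h>
    #include <string.h>
    #include "libpq-fe.h"
    #include "pqexpbuffer.h"

    static void
    lock_tables_in_batches(PGconn *conn, const char *const *tables,
                           int ntables, int batch_size)
    {
        PQExpBuffer query = createPQExpBuffer();

        for (int i = 0; i < ntables; i += batch_size)
        {
            int     end = (i + batch_size < ntables) ? i + batch_size : ntables;

            resetPQExpBuffer(query);
            for (int j = i; j < end; j++)
            {
                char   *ident = PQescapeIdentifier(conn, tables[j],
                                                   strlen(tables[j]));

                appendPQExpBuffer(query,
                                  "LOCK TABLE %s IN ACCESS SHARE MODE;\n",
                                  ident);
                PQfreemem(ident);
            }

            /* One round trip locks up to batch_size tables. */
            PGresult   *res = PQexec(conn, query->data);

            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                fprintf(stderr, "LOCK TABLE failed: %s",
                        PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            PQclear(res);
        }

        destroyPQExpBuffer(query);
    }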

> I wonder if "manual" batching is the best answer. Aleksander, have you
> considered using libpq-level pipelining?

I'd be a bit nervous about how well that works with older servers.
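[The pipelining alternative Andres mentions would look roughly like this
untested sketch; it needs libpq from PostgreSQL 14 or later, and the function
name and error handling here are illustrative, not from any posted patch:]

    /*
     * Sketch only: queue one LOCK TABLE per table without waiting for the
     * previous reply, then collect all results at a single sync point.
     */
    #include <stdio.h>
    #include <string.h>
    #include "libpq-fe.h"

    static int
    lock_tables_pipelined(PGconn *conn, const char *const *tables, int ntables)
    {
        if (PQenterPipelineMode(conn) != 1)
            return -1;

        for (int i = 0; i < ntables; i++)
        {
            char    query[1024];
            char   *ident = PQescapeIdentifier(conn, tables[i],
                                               strlen(tables[i]));

            snprintf(query, sizeof(query),
                     "LOCK TABLE %s IN ACCESS SHARE MODE", ident);
            PQfreemem(ident);

            /* Queue the command; no round trip happens yet. */
            if (PQsendQueryParams(conn, query, 0,
                                  NULL, NULL, NULL, NULL, 0) != 1)
                return -1;
        }

        if (PQpipelineSync(conn) != 1)
            return -1;

        /* Drain the results: one per queued command, then the sync marker. */
        for (;;)
        {
            PGresult   *res = PQgetResult(conn);

            if (res == NULL)
                continue;           /* end of one command's results */
            if (PQresultStatus(res) == PGRES_PIPELINE_SYNC)
            {
                PQclear(res);
                break;              /* all queued commands are done */
            }
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "LOCK TABLE failed: %s",
                        PQerrorMessage(conn));
            PQclear(res);
        }

        return (PQexitPipelineMode(conn) == 1) ? 0 : -1;
    }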

regards, tom lane
