Re: Should we update the random_page_cost default value?

From: Tomas Vondra <tomas(at)vondra(dot)me>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we update the random_page_cost default value?
Date: 2025-10-06 08:35:26
Message-ID: d87b4af0-36c9-4636-8183-87461734a0d0@vondra.me
Lists: pgsql-hackers

On 10/6/25 07:26, David Rowley wrote:
> On Mon, 6 Oct 2025 at 13:59, Tomas Vondra <tomas(at)vondra(dot)me> wrote:
>> Unless I made some silly mistake, these results suggest the current 4.0
>> value is a bit too low, and something like ~20 would be better even on
>> SSDs. This is not the first time a higher default has been suggested -
>> see this 2008 post [3]. Of course, that's from before SSDs became a
>> thing; it discusses the evolution of hard disks and of our code.
>
> Thanks for going to all that effort to calculate that. It was an
> interesting read, and also very interesting that you found the opposite
> of the advice that people typically provide.
>
> I don't have any HDDs around to run the script to check the results. I
> do have a machine with 2 SSDs, one PCIe3 and one PCIe4. I'll see if I
> can get you some results from that.
>

Yeah, that'd be interesting. I don't think it'll be massively different
from the data I collected, but more data is good.

FWIW I suggest modifying the script to use "debug_io_direct = data" and
smaller shared_buffers. That allows using much smaller data sets (and
thus faster runs).
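
Roughly something like this in postgresql.conf (the shared_buffers value
here is just an illustration, pick whatever keeps the data set much
larger than the cache):

    shared_buffers = 512MB        # much smaller than the data set
    debug_io_direct = data        # bypass the OS page cache for data files
    # both settings only take effect at server start
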

> It would be interesting to see how your calculated values fare when
> given a more realistic workload, say TPC-H or the Join Order Benchmark.
> I recall that TPC-H has both a power and a throughput result, one for a
> single query stream and one to test concurrency. I wonder if the same
> random_page_cost setting would be preferred in both scenarios. I can
> imagine that it might be more useful to have more index pages in
> shared buffers when there's strong contention for buffers. It would be
> interesting to run some pg_buffercache queries with GROUP BY relkind
> to see how much of an effect changing random_page_cost has on the
> number of buffers per relkind after each query.
>

Good idea, I'll give TPC-H a try soon. My concern is that for complex
queries it's much harder to pinpoint the problem, and an estimation
error may sometimes compensate for (or amplify) an earlier one. Still,
worth a try.

As for the concurrency, I don't have a great answer. But perhaps it's
related to Tom's point about AIO. I mean, AIO also turns serial I/Os
into concurrent ones, so maybe it's similar to multiple concurrent
queries?
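
As for the pg_buffercache breakdown you mention, I assume you mean a
query along these lines (just a sketch, joining to pg_class through the
relfilenode as in the pg_buffercache docs):

    SELECT c.relkind, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c
        ON b.relfilenode = pg_relation_filenode(c.oid)
       AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                  WHERE datname = current_database()))
     GROUP BY c.relkind
     ORDER BY buffers DESC;

Running that after each query with different random_page_cost values
should show how the heap/index buffer mix shifts.
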

> I wonder if the OtterTune people collected any "phone-home"
> information about what the software picked for GUCs. It would be
> interesting to know if there was a trend in what the best
> random_page_cost setting was, or if the best setting varied based on
> the server and workload.
>

No idea, and given OtterTune is gone, I doubt we'd get any data.

regards

--
Tomas Vondra
