Re: Should we update the random_page_cost default value?

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: wenhui qiu <qiuwenhuifx(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we update the random_page_cost default value?
Date: 2025-10-06 05:25:13
Message-ID: CAFj8pRCR1DEbpM9ijJAYW6ix8iL4QbGACUXfoSrJdXAdc0SoEg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Oct 6, 2025 at 6:46 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:

> On Mon, 6 Oct 2025 at 17:19, wenhui qiu <qiuwenhuifx(at)gmail(dot)com> wrote:
> > I really can't agree more. Many default values are just too
> > conservative, and the documentation doesn't provide best practices. I
> > think we should reduce it to 1.x, or add a tip to the documentation
> > providing recommended values for different SSDs.
>
> Did you read Tomas's email or just the subject line? I think if
> you're going to propose to move it in the opposite direction as to
> what Tomas found to be the more useful direction, then that at least
> warrants providing some evidence to the contrary of what Tomas has
> shown or stating that you think his methodology for his calculation is
> flawed because...
>
> I suspect all you've done here is propagate the typical advice people
> give out around here. It appears to me that Tomas went to great
> lengths to not do that.
>

+1

The problem will be in estimating the effect of the cache. It can cover a
pretty wide range.

I have access to a not-too-small e-shop in the Czech Republic (though it is
not extra big). It uses today's classic stack: Java (ORM), Elasticsearch,
Postgres. The database size is about 1.9 TB, shared_buffers is 32 GB (it
handles about 10-20K logged-in users at a time).

The buffer cache hit ratio is 98.42%. The code is well optimized. This
ratio does not include the file system cache.

I believe that for different applications (OLAP) or less well optimized
ones, the cache hit ratio can be much, much worse.
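To illustrate why the cache effect matters so much, here is a toy
back-of-the-envelope blend (a hypothetical model for illustration only, not
PostgreSQL's actual cost machinery; the `cached_cost` and `disk_cost`
values are assumptions):

```python
def effective_random_page_cost(hit_ratio, cached_cost=0.1, disk_cost=4.0):
    """Weighted average of a cheap cached page access and an expensive
    on-disk random page access, weighted by the buffer cache hit ratio."""
    return hit_ratio * cached_cost + (1.0 - hit_ratio) * disk_cost

# Well-optimized OLTP workload (like the e-shop above) vs. a poor hit ratio:
print(effective_random_page_cost(0.9842))  # close to the cached cost
print(effective_random_page_cost(0.50))    # dominated by the disk cost
```

The same formula with a 98% hit ratio versus a 50% one gives effective
costs that differ by more than an order of magnitude, which is why one
static default is hard to pick.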

Last year I had experience with customers that had Postgres in clouds, and
common (not extra expensive) disks do not have great parameters today. It
is a question whether a single ratio like random_page_cost / seq_page_cost
can describe well the dynamic throttling (or the dynamic behavior of
current cloud I/O) where customers frequently hit their limits.

Regards

Pavel

> David