From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(at)vondra(dot)me>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we update the random_page_cost default value?
Date: 2025-10-07 14:35:45
Message-ID: wc7mgalaplotpetwcackcbrm4lwdkvyajdcsi2gsslhknfavzi@t5jo47nnyppa
Lists: pgsql-hackers
Hi,
On 2025-10-07 14:08:27 +0200, Tomas Vondra wrote:
> On 10/7/25 01:56, Andres Freund wrote:
> > A correlated index scan today will not do IO combining, despite being
> > accounted as seq_page_cost. So just doing individual 8kB IOs actually seems to
> > be the appropriate comparison. Even with table fetches in index scans
> > doing IO combining as part of your work, the reads of the index data
> > itself won't be
> > combined. And I'm sure other things won't be either.
> >
>
> But that's the point. If sequential reads do I/O combining and index
> scans don't (and I don't think that will change anytime soon), then
> sequential I/O is much more efficient / cheaper, and we'd better
> reflect that in the costs somehow. Maybe increasing random_page_cost
> is not the right/best solution? That's possible.
The table fetch portion of an index scan uses seq_page_cost too, with the
degree to which it is used determined by the correlation (cf. cost_index()).
Given that we use random_page_cost and seq_page_cost for both index scan and
non-index scan related costs, I just don't see how it can make sense to
include index-related overheads in random_page_cost but not in seq_page_cost.
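For illustration, a rough sketch of that blending in cost_index(): the planner
interpolates between a fully-correlated I/O cost (heap pages effectively
priced at seq_page_cost) and an uncorrelated one (priced at random_page_cost),
weighting by the square of the index correlation. The names below are mine,
not PostgreSQL's, and the real code computes min/max I/O costs with more
nuance than this:

```python
def interpolate_io_cost(min_io_cost, max_io_cost, correlation):
    """Blend between the best case (perfectly correlated index, heap
    fetches priced sequentially) and the worst case (uncorrelated,
    priced randomly), weighted by correlation squared, roughly as
    cost_index() does in costsize.c."""
    csquared = correlation * correlation
    return max_io_cost + csquared * (min_io_cost - max_io_cost)

# Toy numbers with the default cost settings, fetching 1000 heap pages:
seq_page_cost, random_page_cost = 1.0, 4.0
pages = 1000
min_io = pages * seq_page_cost      # fully correlated: sequential pricing
max_io = pages * random_page_cost   # uncorrelated: random pricing

print(interpolate_io_cost(min_io, max_io, 1.0))  # 1000.0 (pure seq pricing)
print(interpolate_io_cost(min_io, max_io, 0.0))  # 4000.0 (pure random pricing)
```

So at high correlation the heap-fetch side of an index scan is already being
charged at seq_page_cost, which is the point above: the two GUCs are used on
both sides of the index/non-index divide.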
Greetings,
Andres Freund