From: Andres Freund <andres(at)anarazel(dot)de>
To: neto brpr <netobrpr(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Cost Model
Date: 2017-12-20 19:34:58
Message-ID: 20171220193458.mrzqs3qjxt3j4omy@alap3.anarazel.de
Lists: pgsql-hackers
On 2017-12-20 17:13:31 -0200, neto brpr wrote:
> Just to explain it better. The idea of differentiating read and write
> parameters (sequential and random) is exactly so that the access plans can
> be better chosen by the optimizer. But for this, the Hash join, merge join,
> sorting and other algorithms should also be changed to consider these new
> parameters.
I'm doubtful that there's that much benefit. Mergejoin doesn't write,
hashjoins commonly don't write, and when they do there usually aren't
many alternatives to batched hashjoins. It's similar with sorts,
although sometimes those can instead be done using ordered index scans.
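The trade-off here can be made concrete with a toy sketch. This is not PostgreSQL's actual cost model; the function, its parameters, and the formula are all invented simplifications, meant only to show how a hypothetical separate sequential write cost would inflate the estimate for a spilling (batched) hash join relative to today's single seq_page_cost:

```python
# Toy illustration (NOT PostgreSQL's real costing): a hypothetical
# split of sequential page cost into read and write components, applied
# to a batched hash join that spills its input to batch files.

def batched_hashjoin_cost(pages, seq_read_cost=1.0, seq_write_cost=1.0):
    """Hypothetical cost: initial scan, plus one write and one
    re-read of every page spilled to batch files."""
    scan = pages * seq_read_cost
    spill = pages * seq_write_cost + pages * seq_read_cost
    return scan + spill

# Symmetric costs, roughly what a single seq_page_cost implies today:
symmetric = batched_hashjoin_cost(1000)
# Writes costed higher, e.g. for write-limited storage:
asymmetric = batched_hashjoin_cost(1000, seq_write_cost=3.0)
print(symmetric, asymmetric)  # 3000.0 5000.0
```

The point of the sketch is that even when the estimate changes, it only helps if the planner has a cheaper non-spilling alternative to pick instead, which is exactly the question raised below.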
What are the cases you foresee where costing reads and writes differently
will lead to better plans?
Greetings,
Andres Freund