From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Greg Stark <stark(at)mit(dot)edu>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: The case for removing replacement selection sort
Date: 2017-09-11 15:50:25
Message-ID: CA+TgmoaTbCb8dbL+=JTW7YnHHN2ODbqWLQzGcJe0BZyZiqEGQw@mail.gmail.com
Lists: pgsql-hackers
On Mon, Sep 11, 2017 at 11:47 AM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> The question is: what is the optimal replacement_sort_tuples value? I
> assume it's the number of tuples that fits effectively in the CPU
> caches, at least that's what our docs say. So I think you're right
> that raising it to 1B rows may break this assumption and make it
> perform worse.
>
> But perhaps the fact that we're testing with multiple work_mem values,
> and with smaller data sets (100k or 1M rows) makes this a non-issue?
I am not sure that's the case -- I think that before Peter's changes
it was pretty easy to find cases where lowering work_mem made sorting
ordered data go faster.
But I could easily be wrong.
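(For anyone following along: the reason nearly-ordered input favors replacement selection is that the algorithm can produce runs much longer than the in-memory heap, so a small work_mem still yields few runs to merge. Below is a minimal, illustrative Python sketch of the classic heap-based replacement selection — not PostgreSQL's actual tuplesort code, and the function name and parameters are invented for this example. On already-sorted input it emits a single run regardless of the heap size.)

```python
import heapq

def replacement_selection_runs(items, memory_tuples):
    """Illustrative sketch of replacement selection: produce sorted
    runs using a heap bounded to `memory_tuples` entries.  An incoming
    tuple smaller than the value just written to the current run is
    tagged for the *next* run, so on ordered input the current run
    never ends and the whole input becomes one run."""
    # Seed the heap with the first `memory_tuples` items, all in run 0.
    heap = [(0, x) for x in items[:memory_tuples]]
    heapq.heapify(heap)
    rest = iter(items[memory_tuples:])
    runs, current, current_run = [], [], 0
    while heap:
        run, val = heapq.heappop(heap)
        if run != current_run:          # heap top belongs to the next run
            runs.append(current)
            current, current_run = [], run
        current.append(val)
        nxt = next(rest, None)
        if nxt is not None:
            # A tuple smaller than the value just emitted cannot join
            # the current run without breaking its sort order.
            tag = run if nxt >= val else run + 1
            heapq.heappush(heap, (tag, nxt))
    runs.append(current)
    return runs
```

For example, `replacement_selection_runs(list(range(100)), 10)` returns a single 100-element run, while random input yields runs averaging roughly twice the heap size — which is exactly why a heap sized to stay cache-resident (the stated rationale for replacement_sort_tuples) can beat a larger, cache-unfriendly one on favorable input.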
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company