From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Atri Sharma <atri(dot)jiit(at)gmail(dot)com>, PgHacker <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Randomisation for ensuring nlogn complexity in quicksort
Date: 2013-07-01 19:54:19
Message-ID: CAGTBQpZABrMrHAipwGkh1NCoHVs=Zpkyfwu7dO63hW=UTo27cA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jul 1, 2013 at 4:32 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> This shouldn't be too complex, and should give us a fixed nlogn complexity even for wild data sets, without affecting existing normal data sets that are present in every day transactions. I even believe that those data sets will also benefit from the above optimisation.
>
> The only method of selecting a pivot for quicksort that obtains O(n lg
> n) run time with 100% certainty is to have a magical oracle inside the
> computer that tells you in fixed time and with perfect accuracy which
> pivot you should select.
Doesn't a linear median algorithm (say median of medians) get you O(n lg n)?
Granted, with a huge constant (I think 4)... but it should still be O(n lg n).
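For reference, a minimal sketch (not from this thread) of the median-of-medians
selection algorithm Claudio is referring to. It finds the exact median in
worst-case linear time, so using it to choose the quicksort pivot bounds the
whole sort at O(n log n); the group size of 5 and helper names here are
illustrative choices, not anything from PostgreSQL's sort code:

```python
def select(a, k):
    """Return the k-th smallest element (0-based) of list a,
    in worst-case linear time via median of medians."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Split into groups of 5 and take each group's median.
    medians = [sorted(a[i:i + 5])[(len(a[i:i + 5]) - 1) // 2]
               for i in range(0, len(a), 5)]
    # Recursively find the median of those medians as the pivot;
    # this pivot is guaranteed to discard a constant fraction of a.
    pivot = select(medians, (len(medians) - 1) // 2)
    lows = [x for x in a if x < pivot]
    highs = [x for x in a if x > pivot]
    n_equal = len(a) - len(lows) - len(highs)
    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + n_equal:
        return pivot
    return select(highs, k - len(lows) - n_equal)

def median_of_medians(a):
    """True median of a, usable as a worst-case-safe quicksort pivot."""
    return select(a, (len(a) - 1) // 2)
```

The large constant factor mentioned above is exactly why this is rarely used
in practice: the extra passes over the data cost far more on typical inputs
than occasionally picking a bad pivot does.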