On Thu, 22 Dec 2005 08:01:00 +0100, Martijn van Oosterhout wrote:
>But where are you including the cost to check how many cells are
>already sorted? That would be O(H), right?
Yes. I didn't mention it, because H < N.
> This is where we come back
>to the issue that comparisons in PostgreSQL are expensive.
So we agree that we should try to reduce the number of comparisons.
How many comparisons does it take to sort 100000 items? 1.5 million?
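(For reference, the information-theoretic lower bound for any comparison sort is about N·log2(N) comparisons; for N = 100000 that works out to roughly 1.66 million, so 1.5 million is the right ballpark. A quick back-of-the-envelope check, not from the thread:)

```python
import math

N = 100_000
# Any comparison sort needs roughly N * log2(N) comparisons
# in the general case.
estimate = N * math.log2(N)
print(f"~{estimate:,.0f} comparisons to sort {N:,} items")
```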
>Hmm, what are the chances you have 100000 unordered items to sort and
>that the first 8% will already be in order. ISTM that that probability
>will be close enough to zero to not matter...
If the items are totally unordered, the check is so cheap you won't
even notice. OTOH in Tom's example ...
|What I think is much more probable in the Postgres environment
|is almost-but-not-quite-ordered inputs --- eg, a table that was
|perfectly ordered by key when filled, but some of the tuples have since
|been moved by UPDATEs.
... I'd not be surprised if H is 90% of N.
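(The check being discussed can be sketched as a single scan that counts the leading sorted run H: it stops at the first out-of-order pair, so on totally unordered input it exits after a comparison or two, while on almost-sorted input H approaches N. A minimal sketch with hypothetical names, not code from the thread:)

```python
def sorted_prefix_len(items):
    """Return H, the length of the leading non-decreasing run.

    Costs at most H comparisons: the scan stops at the first
    inversion, so fully unordered input bails out immediately.
    """
    if not items:
        return 0
    h = 1
    for i in range(1, len(items)):
        if items[i] < items[i - 1]:  # first out-of-order pair: stop
            break
        h += 1
    return h

print(sorted_prefix_len([1, 2, 3, 0, 5]))  # leading run [1, 2, 3] -> 3
print(sorted_prefix_len([4, 1, 2, 3]))     # run ends at once    -> 1
```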