Re: [HACKERS] Solution for LIMIT cost estimation

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Chris <chris(at)bitmead(dot)com>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Solution for LIMIT cost estimation
Date: 2000-02-13 16:53:49
Message-ID: 5419.950460829@sss.pgh.pa.us
Lists: pgsql-hackers

Chris <chris(at)bitmead(dot)com> writes:
> Could it _ever_ be faster to sort the tuples when there is already an
> index that can provide them in sorted order?

Yes --- in fact usually, for large tables. Once your table gets too
big for the disk cache to be effective, indexscan performance approaches
one random-access page fetch per tuple. Sort performance behaves more
or less as p*log(p) accesses for p pages; and a far larger proportion
of those accesses are sequential than in the indexscan case. So the
sort will win if you have more than log(p) tuples per page. Do the
arithmetic...
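A back-of-the-envelope sketch of that arithmetic, using the cost model from the paragraph above. The table size and row density here are made-up illustration values, not anything measured:

```python
import math

def indexscan_cost(tuples):
    # Big-table indexscan: roughly one random page fetch per tuple
    # once the table no longer fits in the disk cache.
    return tuples

def sort_cost(pages):
    # Sort: roughly p * log(p) page accesses for p pages (and a far
    # larger share of those are sequential, which favors the sort
    # even more than this count suggests).
    return pages * math.log2(pages)

# Hypothetical table: 100,000 pages at 50 tuples per page.
pages = 100_000
tuples_per_page = 50
tuples = pages * tuples_per_page

# Sort wins when tuples_per_page > log(pages):
# here 50 > log2(100000) ~= 16.6, so the sort comes out cheaper.
print(sort_cost(pages) < indexscan_cost(tuples))
```

With these numbers the indexscan costs about 5,000,000 random fetches versus roughly 1,660,000 mostly sequential accesses for the sort, matching the rule of thumb "sort wins if you have more than log(p) tuples per page."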

The optimizer's job would be far simpler if no-brainer rules like
"indexscan is always better" worked.

regards, tom lane
