Re: [HACKERS] Solution for LIMIT cost estimation

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Don Baccus <dhogaza@pacifier.com>
Cc: Chris <chris@bitmead.com>, pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Solution for LIMIT cost estimation
Date: 2000-02-13 23:43:31
Message-ID: 9411.950485411@sss.pgh.pa.us
Lists: pgsql-hackers

Don Baccus <dhogaza@pacifier.com> writes:
>> The optimizer's job would be far simpler if no-brainer rules like
>> "indexscan is always better" worked.

> Yet the optimizer currently takes the no-brainer point-of-view that
> "indexscan is slow for tables much larger than the disk cache, therefore
> treat all tables as though they're much larger than the disk cache".

Ah, you haven't seen the (as-yet-uncommitted) optimizer changes I'm
working on ;-)

What I still lack is a believable approximation curve for cache hit
ratio vs. table-size-divided-by-cache-size. Anybody seen any papers
about that? I made up a plausible-shaped function but it'd be nice to
have something with some actual theory or measurement behind it...
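
For concreteness, here is a minimal sketch of the kind of curve I mean.
It is illustrative only (guessed_hit_ratio is a made-up name, and none
of this is the code in my working tree): the hit ratio is pinned at 1.0
while the table fits in cache, and falls off as cache_pages/table_pages
beyond that, which is the steady-state behavior you'd expect from
uniformly random page accesses against an LRU cache.

    #include <stdio.h>

    /*
     * Illustrative sketch only, not the uncommitted code under
     * discussion.  For uniformly random page accesses against an
     * LRU cache of cache_pages pages, the steady-state hit ratio
     * is about 1.0 while the table fits in cache, and about
     * cache_pages / table_pages once the table outgrows it.  A
     * production curve would presumably smooth the corner at
     * table = cache.
     */
    static double
    guessed_hit_ratio(double table_pages, double cache_pages)
    {
        if (table_pages <= cache_pages)
            return 1.0;         /* table fits entirely in cache */
        return cache_pages / table_pages;
    }

    int
    main(void)
    {
        double  cache = 1000.0; /* assumed cache size, in pages */
        double  t;

        for (t = 250.0; t <= 16000.0; t *= 2.0)
            printf("T/C = %5.2f  hit ratio = %.3f\n",
                   t / cache, guessed_hit_ratio(t, cache));
        return 0;
    }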

(Of course the cache size is only a magic number in the absence of any
hard info about what the kernel is doing --- but at least it will
optimize big tables differently than small ones now.)
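
To show what "optimize big tables differently" means in cost terms,
here is an equally hypothetical blend of per-page costs by hit ratio
(the constants are assumptions for illustration, not real cost
parameters):

    #include <stdio.h>

    /*
     * Hypothetical blend, for illustration only: expected cost of
     * fetching one page given a cache hit ratio.  The constants
     * are made-up numbers, not actual cost settings.
     */
    static double
    page_fetch_cost(double hit_ratio)
    {
        const double cached_cost = 0.01;  /* assumed cache-hit cost */
        const double disk_cost = 1.00;    /* assumed disk-read cost */

        return hit_ratio * cached_cost + (1.0 - hit_ratio) * disk_cost;
    }

    int
    main(void)
    {
        /* small table fits in cache; big table is 10x the cache */
        printf("small table: %.4f per page\n", page_fetch_cost(1.0));
        printf("big table:   %.4f per page\n", page_fetch_cost(0.1));
        return 0;
    }

The point is just that the same indexscan gets charged very differently
once the table no longer fits in cache.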

regards, tom lane
