On Tue, Oct 27, 2009 at 11:08 AM, <jesper(at)krogh(dot)cc> wrote:
> In my example the seq scan evaluates 50K tuples and the bitmap heap scan 40K.
> The question is why the per-tuple evaluation becomes so much more
> expensive (7.5x) on the seq scan than on the index scan, when the
> complete dataset is indeed in memory?
[ ... thinks a little more ... ]
The bitmap index scan returns a TID bitmap. From a quick look at
nodeBitmapHeapScan.c, it appears that the recheck condition only gets
evaluated for those portions of the TID bitmap that are lossy. So I'm
guessing that although the bitmap heap scan is returning 40K rows, it's
doing very few (possibly no) qual evaluations, and is mostly just
checking tuple visibility.
>> If your whole database fits in RAM, you could try changing your
>> seq_page_cost and random_page_cost variables from the default values
>> of 1 and 4 to something around 0.05, or maybe even 0.01, and see
>> whether that helps.
> This is about planning the query. We're talking actual runtimes here.
Sorry, I assumed you were trying to get the planner to pick the faster
plan. If not, never mind.
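For anyone trying the suggestion above later: the cost settings can be changed per session for experimentation before touching postgresql.conf. The 0.05 values are just the illustrative numbers from the suggestion, not tuned recommendations:

```sql
-- Session-level experiment with cheaper page costs for a fully cached DB
SET seq_page_cost = 0.05;
SET random_page_cost = 0.05;

-- Or persist the change in postgresql.conf:
--   seq_page_cost = 0.05
--   random_page_cost = 0.05
```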