Re: Slow query using the Cube contrib module.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: Liviu Mirea-Ghiban <liviu(dot)mirea(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow query using the Cube contrib module.
Date: 2010-07-27 22:18:50
Message-ID: 16994.1280269130@sss.pgh.pa.us
Lists: pgsql-performance

Yeb Havinga <yebhavinga(at)gmail(dot)com> writes:
> Liviu Mirea-Ghiban wrote:
>> My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond?
>> I've executed dozens of such queries and not once did the rechecking
>> remove any rows. Is there any way to disable it, or do you have any
>> other suggestions for optimizations (because I'm all out of ideas)?

> It's probably because the index nodes store data values with a lossy
> compression, which means that the index scan can return more rows than
> wanted; those extra rows are then filtered out by the recheck.

The recheck expression is only executed if the index reports that it
hasn't executed the search exactly. If you don't see any difference
between the indexscan and bitmapscan output counts, it's probably
because the index can handle this case exactly, so the recheck
expression isn't really getting used. The planner has to include the
expression in the plan anyway, because the decision about lossiness
isn't known until runtime. But it isn't costing any runtime.
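To make the lossy-index/recheck distinction concrete, here is a minimal
sketch (not PostgreSQL internals; all names are illustrative) in which an
"index" stores only each point's grid cell, a lossy compression. A range
probe over such an index can return false positives, and the recheck step
against the actual heap rows filters them out -- exactly the role of the
Recheck Cond in a Bitmap Heap Scan:

```python
def cell(point, size=10):
    # Lossy compression: remember only which grid cell the point is in.
    return (point[0] // size, point[1] // size)

def build_index(rows, size=10):
    # Map each grid cell to the row ids whose points fall inside it.
    index = {}
    for rowid, point in enumerate(rows):
        index.setdefault(cell(point, size), []).append(rowid)
    return index

def bitmap_scan(rows, index, lo, hi, size=10):
    # Step 1: collect every row whose cell *might* overlap the query box.
    candidates = set()
    for (cx, cy), rowids in index.items():
        if (cx * size <= hi[0] and (cx + 1) * size > lo[0] and
                cy * size <= hi[1] and (cy + 1) * size > lo[1]):
            candidates.update(rowids)
    # Step 2: recheck each candidate heap row against the exact condition.
    return sorted(r for r in candidates
                  if lo[0] <= rows[r][0] <= hi[0]
                  and lo[1] <= rows[r][1] <= hi[1])

rows = [(1, 1), (5, 5), (9, 9), (15, 15)]
idx = build_index(rows)
# The box (0,0)-(6,6) shares a grid cell with (9,9), so the lossy index
# alone would return it; the recheck drops it.
print(bitmap_scan(rows, idx, (0, 0), (6, 6)))  # -> [0, 1]
```

When the index can answer exactly (here: a query box aligned on cell
boundaries), step 2 removes nothing, which matches the observation that
the recheck never eliminated any rows.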

The OP is mistaken to think there's anything wrong with this plan choice;
more than likely, it's the best available plan. The reason there's
a significant gap between the indexscan runtime and the bitmapscan
runtime is that that's the cost of actually fetching all those
rows from the table. The only way to fix that is to buy a faster disk
or get more RAM, so that more of the table can be held in memory.

regards, tom lane
