I have a simple table which has a cube column and a cube GiST index. The
column contains 3-dimensional points (not cubes or intervals). The
problem is that I'm getting very slow queries when I'm using the index.
The table has about 130,000 rows and is full-vacuumed after any
updates/inserts (it is updated only once every 24 hours).
CREATE TABLE picof.photo_colors (
    photo_id integer NOT NULL,
    color_percent real NOT NULL,
    lab_color picof.cube NOT NULL
);
CREATE INDEX photo_colors_index
    ON picof.photo_colors USING gist (lab_color);
SELECT photo_id FROM photo_colors
WHERE lab_color <@ cube_enlarge('0, 0, 0', 10, 3);
Bitmap Heap Scan on photo_colors  (cost=13.40..421.55 rows=135 width=4) (actual time=7.958..15.493 rows=14313 loops=1)
  Recheck Cond: (lab_color <@ '(-10, -10, -10),(10, 10, 10)'::cube)
  ->  Bitmap Index Scan on photo_colors_index  (cost=0.00..13.36 rows=135 width=0) (actual time=7.556..7.556 rows=14313 loops=1)
        Index Cond: (lab_color <@ '(-10, -10, -10),(10, 10, 10)'::cube)
Total runtime: 16.849 ms
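(For reference: cube_enlarge('0, 0, 0', 10, 3) grows the origin point by 10 in each of the 3 dimensions, producing the box the planner shows in the conditions above. You can confirm the expansion directly:

    SELECT cube_enlarge('0, 0, 0', 10, 3);
    -- (-10, -10, -10),(10, 10, 10)

)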
(Executed in PostgreSQL 8.4.4 on Windows and CentOS - same query plan)
Now, while it might not seem like much, this is part of a bigger query in
which several such subqueries are joined, so the cost really adds up.
My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond? I've
executed dozens of such queries and not once did the rechecking remove
any rows. Is there any way to disable it, or do you have any other
suggestions for optimizations (because I'm all out of ideas)?
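One experiment I have considered (assuming it's acceptable to toggle planner settings for a single session) is disabling bitmap scans so the planner falls back to a plain index scan, to see how much of the time the bitmap step accounts for:

    SET enable_bitmapscan = off;   -- affects this session only
    EXPLAIN ANALYZE
    SELECT photo_id FROM photo_colors
    WHERE lab_color <@ cube_enlarge('0, 0, 0', 10, 3);
    RESET enable_bitmapscan;

I'd be interested to know whether that comparison is meaningful here, or whether the recheck is inherent to the cube GiST operator class.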
Thank you in advance!
pgsql-performance by date