
Re: Query Optimizer makes a poor choice

From: "Tyler Hains" <thains(at)profitpointinc(dot)com>
To: "scott(dot)marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Query Optimizer makes a poor choice
Date: 2011-11-29 21:43:52
Message-ID:
Lists: pgsql-general
On Tue, Nov 29, 2011 at 11:21 AM, Tyler Hains 
<thains(at)profitpointinc(dot)com> wrote:
> # explain analyze select * from cards where card_set_id=2850 order by
> card_id limit 1;
>                                QUERY PLAN
> ----------------------------------------------------------------------
>  Limit  (cost=0.00..105.19 rows=1 width=40) (actual rows=1 loops=1)
>    ->  Index Scan using cards_pkey on cards  (cost=0.00..2904875.38
> rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)

There's a huge disconnect here between what the query planner expects
(27k rows) and how many there are (1).  Also, getting a single row
from an index should be faster than this, even if the table and index
are quite large.  Have you checked for bloat on this index?
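One way to check a btree index for bloat is the pgstatindex function from the pgstattuple contrib module, assuming that extension is installed (a sketch; thresholds are rough rules of thumb, not hard limits):

```sql
-- Requires the pgstattuple contrib extension (assumed available here).
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- An avg_leaf_density well below ~70%, or high leaf_fragmentation,
-- suggests the index is bloated and may benefit from a REINDEX.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('cards_pkey');
```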

There are actually more like 27 million rows in the table. That's why it 
really should be filtering the rows with the index on card_set_id before 
ordering for the LIMIT.
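For what it's worth, a two-column index matching both the filter and the sort would let the planner do exactly that in a single index scan (a sketch; the index name is made up):

```sql
-- Hypothetical composite index: entries are grouped by card_set_id and,
-- within each set, already sorted by card_id, so the planner can satisfy
-- WHERE card_set_id=2850 ORDER BY card_id LIMIT 1 with one short scan.
CREATE INDEX cards_set_id_card_id_idx ON cards (card_set_id, card_id);
```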

The documentation does not seem to give a clear reason for changing the 
value of default_statistics_target, or when you would override it per 
column with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS. My gut is 
telling me that this may be our answer if we can figure out how to tweak it.
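If that turns out to be the knob, the per-column form would look something like this (a sketch; the target value 1000 is an arbitrary example, and the planner only sees the new sample after an ANALYZE):

```sql
-- Raise the statistics target for card_set_id only (the global default
-- is 100), then re-sample so the planner gets a finer-grained histogram.
ALTER TABLE cards ALTER COLUMN card_set_id SET STATISTICS 1000;
ANALYZE cards;
```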



