Re: LIMIT causes planner to do Index Scan using a less optimal index

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Joel Jacobson <joel(at)gluefinance(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: LIMIT causes planner to do Index Scan using a less optimal index
Date: 2010-04-06 18:33:51
Message-ID: o2k603c8f071004061133qbfeccea6iec13efdbc3c080d7@mail.gmail.com
Lists: pgsql-performance

On Fri, Apr 2, 2010 at 2:19 PM, Joel Jacobson <joel(at)gluefinance(dot)com> wrote:
> Is this a bug? I'm using version 8.4.1.

It's not really a bug, but it's definitely not a feature either.

>  Limit  (cost=0.00..43.46 rows=1 width=4) (actual time=1023.213..1023.214 rows=1 loops=1)
>    ->  Index Scan using transactions_pkey on transactions  (cost=0.00..493029.74 rows=11345 width=4) (actual time=1023.212..1023.212 rows=1 loops=1)
>          Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))
>  Total runtime: 1023.244 ms
> (4 rows)

The planner's idea here is that rows matching the filter criteria will
be common enough that an index scan over transactions_pkey will find
one fairly quickly, at which point the executor can return that row
and stop. But it turns out that those rows aren't as common as the
planner thinks, so the search takes a long time.
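
The usual ways around it are either to give the planner an index that
matches the filter, or to keep it from using transactions_pkey to
satisfy the ORDER BY. A minimal sketch (the primary key column name,
transactionid, is a guess, since the original query isn't quoted
above):

    -- an index on the filter columns, with the pkey column last, lets
    -- the LIMIT 1 be answered directly from the index
    CREATE INDEX transactions_acct_curr_idx
        ON transactions (accountid, currency, transactionid);

    -- alternatively, hide the pkey ordering behind an expression so
    -- the planner can't use the pkey scan for the ORDER BY (assumes
    -- the column is numeric)
    SELECT transactionid
      FROM transactions
     WHERE accountid = 108
       AND currency = 'SEK'
     ORDER BY transactionid + 0
     LIMIT 1;

With the three-column index the scan visits only the rows for
(108, 'SEK') in transactionid order and stops at the first one,
instead of filtering its way down the whole primary key.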

...Robert
