Many thanks for the quick reply and suggestion! Indeed we do have many
records in these tables - 20/50 million rows - and we already have an
index on merchant_id, which is a NOT NULL column as well. In my
opinion the duplicate index we have on the "id" column, which is also a
NONCLUSTERED PKey, is confusing the optimizer, because if we drop
it everything runs well. The problem is that without it a few other
queries we run for reporting run forever, which is really hard to
understand, because the PKey should create an index by default anyway. I
tried ANALYZE, then VACUUM/ANALYZE/REINDEX/ANALYZE, and even after that
the results were the same. I just tried your suggestion and IT WORKED!
Thanks a lot again,
From: hubert depesz lubaczewski [mailto:depesz(at)depesz(dot)com]
Sent: Thursday, June 05, 2008 2:41 PM
To: Lawrence Cohan
Subject: Re: [BUGS] BUG #4224: issue with LIMIT and ORDER BY
On Thu, Jun 05, 2008 at 06:15:29PM +0000, Lawrence Cohan wrote:
> Following queries run FOREVER in PG if an index exists on the "id"
> column, which is an integer - serial and PKey on the table.
> SELECT id FROM orders WHERE merchant_id = xxxxxx ORDER BY id DESC
> -- or 30, 29, 28, 27, 26, 25
> SELECT id FROM clients WHERE merchant_id = XXXXXX ORDER BY id LIMIT 3
> 1, 2.
> With different limits we get different results, but the queries run
> forever with DESC as well.
my guess is that you:
1. don't have an index on merchant_id
2. have a lot of rows in this table
3. very few rows have the given merchant_id
you can easily fix the situation with:
create index q on clients (merchant_id, id);
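[Editor's note: a sketch of why the composite index helps, assuming the clients table from this thread and a hypothetical merchant_id value. With only the PK index on id, the planner can walk that index in id order and filter on merchant_id row by row, hoping to find enough matches to satisfy the LIMIT early; when very few rows match, that scan reads most of the table. A composite index serves both the equality filter and the sort.]

```sql
-- The composite index suggested above: equality column first, sort column
-- second, so matching rows come out already ordered by id.
CREATE INDEX clients_merchant_id_id_idx ON clients (merchant_id, id);
ANALYZE clients;

-- The planner can now seek directly to the matching merchant and read the
-- first 3 ids in order, then stop. Inspect the plan with:
EXPLAIN SELECT id FROM clients
WHERE merchant_id = 12345  -- hypothetical value
ORDER BY id LIMIT 3;
```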