Re: quickly getting the top N rows

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Ben <bench(at)silentmedia(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance(at)postgresql(dot)org
Subject: Re: quickly getting the top N rows
Date: 2007-10-04 20:23:05
Message-ID: 1191529385.4223.231.camel@ebony.site
Lists: pgsql-performance

On Thu, 2007-10-04 at 12:52 -0700, Ben wrote:

> The original, slow query:
>
> explain analyze SELECT * FROM log WHERE clientkey in
> ('000000004500000000010000000001') AND premiseskey in
> ('000000004500000000010000000001') and logicaldel = 'N'
> ORDER BY logtime desc, logkey desc, clientkey desc, premiseskey desc LIMIT 20 offset 0;
>
> QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> Limit (cost=356402.58..356402.63 rows=20 width=563) (actual time=215858.481..215858.527 rows=20 loops=1)
> -> Sort (cost=356402.58..357598.25 rows=478267 width=563) (actual time=215858.478..215858.498 rows=20 loops=1)
> Sort Key: logtime, logkey, clientkey, premiseskey
> -> Seq Scan on log (cost=0.00..52061.67 rows=478267 width=563) (actual time=29.340..100043.313 rows=475669 loops=1)
> Filter: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar) AND (logicaldel = 'N'::bpchar))
> Total runtime: 262462.582 ms
> (6 rows)
>
>
> Every row in log has identical clientkey and premiseskey values, so if I
> just remove those columns from the order by clause, I get this far
> superior plan:
>
> explain analyze SELECT * FROM log WHERE clientkey in
> ('000000004500000000010000000001') AND premiseskey in
> ('000000004500000000010000000001') and logicaldel = 'N'
> ORDER BY logtime desc, logkey desc LIMIT 20 offset 0;
> QUERY PLAN
> -------------------------------------------------------------------------------------------------------------------------------------------
> Limit (cost=0.00..12.33 rows=20 width=563) (actual time=0.047..0.105 rows=20 loops=1)
> -> Index Scan Backward using log_ak1 on log (cost=0.00..294735.70 rows=478267 width=563) (actual time=0.044..0.076 rows=20 loops=1)
> Index Cond: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar))
> Filter: (logicaldel = 'N'::bpchar)
> Total runtime: 0.165 ms
> (5 rows)
>
>
> ...which made me think that maybe postgres is not using log_ak1 in the
> former case because two of the columns in the order by match every row.
>
> Unfortunately, in this case it's not an option to alter the query. I'm
> just trying to figure out an explanation.

In the first query, Postgres cannot use the index because the sort order
of the index does not match the sort order of the query. When you change
the sort order of the query so that it matches that of the index, then
the index is used.

If you define your index on (logtime, logkey, clientkey, premiseskey)
rather than on (clientkey, premiseskey, logtime, logkey) you will have a
fast query. Yes, the column order matters.
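As a sketch (the actual definition of log_ak1 isn't shown in this thread, so the first statement is a guess from the plans; the second is the suggested change):

```sql
-- Presumed existing index: matches the second, fast query, which Postgres
-- satisfies with a backward index scan (Index Scan Backward using log_ak1).
CREATE INDEX log_ak1 ON log (clientkey, premiseskey, logtime, logkey);

-- Index whose column order matches the original four-column ORDER BY.
-- A backward scan of this index returns rows already in
-- "logtime DESC, logkey DESC, clientkey DESC, premiseskey DESC" order,
-- so the LIMIT 20 can stop after 20 matching rows with no sort step.
CREATE INDEX log_order_idx ON log (logtime, logkey, clientkey, premiseskey);
```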

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
