
Re: limit clause produces wrong query plan

From: PFC <lists(at)peufeu(dot)com>
To: Andrus <kobruleht2(at)hot(dot)ee>, "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: limit clause produces wrong query plan
Date: 2008-11-24 22:58:50
Message-ID: op.uk5bgcmqcigqcu@soyouz
Lists: pgsql-performance
> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100

	I think pagination is overrated.

	If the query produces, say, 100 rows or fewer, fetching all of them will
more often than not take about the same time as fetching a portion of them.
In those cases it is much better to cache the results somewhere (user
session, table, whatever) and paginate from that cache, rather than run the
same query many times. That is especially true when a non-indexed sort
takes place, in which case you are going to fetch all the rows anyway. You
can also store just the row ids instead of the full rows. There are
exceptions, of course.
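	To make the "cache the ids, not the rows" idea concrete, here is a
minimal sketch using SQLite as a stand-in for PostgreSQL (the table and
column names mirror the thread's example but are otherwise illustrative):

```python
import sqlite3

# Toy stand-in for "bigtable"; 1000 rows with an integer primary key.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bigtable (intprimarykey INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO bigtable VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 1001)])

# Run the (possibly expensive) query ONCE and cache only the primary keys,
# e.g. in the user's session.  Each page is then a cheap id lookup instead
# of re-running the full sorted query with a different OFFSET.
cached_ids = [r[0] for r in conn.execute(
    "SELECT intprimarykey FROM bigtable ORDER BY intprimarykey")]

def fetch_page(page, page_size=100):
    """Return one page of full rows, using the cached id list."""
    ids = cached_ids[page * page_size:(page + 1) * page_size]
    placeholders = ",".join("?" * len(ids))
    return conn.execute(
        "SELECT intprimarykey, payload FROM bigtable "
        "WHERE intprimarykey IN (%s) ORDER BY intprimarykey" % placeholders,
        ids).fetchall()

page1 = fetch_page(1)  # second page: keys 101..200
```

	The sort (indexed or not) happens exactly once; every later page only
touches the rows it actually displays.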

	And if the query produces 20,000 results... who is ever going to scroll
to page 1257?

> The example which I  posted shows that
>
> SELECT ... FROM bigtable ORDER BY intprimarykey LIMIT 100
>
> this is extremely *slow*: seq scan is  performed over whole bigtable.

	This is wrong though. It should use an index, especially if you have a  
LIMIT...
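	When the planner does use the primary-key index, you can also avoid
OFFSET entirely with keyset pagination: remember the last key you showed and
range-scan from there. A small sketch, again using SQLite for illustration
(the `next_page` helper and table contents are assumptions, not from the
thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bigtable (intprimarykey INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO bigtable VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 1001)])

def next_page(last_seen, page_size=100):
    """Fetch the page after the given key.  The WHERE + ORDER BY + LIMIT
    combination lets the database start the index scan at last_seen instead
    of reading and discarding OFFSET rows."""
    return conn.execute(
        "SELECT intprimarykey, payload FROM bigtable "
        "WHERE intprimarykey > ? ORDER BY intprimarykey LIMIT ?",
        (last_seen, page_size)).fetchall()

first = next_page(0)               # keys 1..100
second = next_page(first[-1][0])   # keys 101..200
```

	Unlike OFFSET, the cost of each page stays constant no matter how deep
the user scrolls.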

