Re: limit clause produces wrong query plan

From: PFC <lists(at)peufeu(dot)com>
To: Andrus <kobruleht2(at)hot(dot)ee>, "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: limit clause produces wrong query plan
Date: 2008-11-24 22:58:50
Message-ID: op.uk5bgcmqcigqcu@soyouz
Lists: pgsql-performance

> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100

I think pagination is overrated.

If the query produces, say, 100 rows or fewer, getting all the rows will
more often than not take the same time as getting just a portion of them.
In all those cases it is much better to cache the results somewhere (user
session, a table, whatever) and paginate from that cache, rather than run
the same query lots of times. This is especially true when some
non-indexed sorting takes place, in which case you are going to fetch all
the rows anyway. You can also store just the row-ids instead of the full
rows. There are exceptions, of course.
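For instance, here is a minimal sketch of the cache-the-row-ids idea
(the table and column names are made up for illustration):

    -- Cache only the ordered row-ids, once per session:
    CREATE TEMP TABLE session_results (
        row_number  serial PRIMARY KEY,   -- position in the sorted result
        item_id     integer NOT NULL      -- row-id pointing back into bigtable
    );

    -- Run the expensive query once; the serial column records the
    -- order in which the sorted rows arrive:
    INSERT INTO session_results (item_id)
    SELECT intprimarykey FROM bigtable ORDER BY intprimarykey;

    -- Each page is then a cheap range scan on the cache plus a join:
    SELECT b.*
    FROM session_results s
    JOIN bigtable b ON b.intprimarykey = s.item_id
    WHERE s.row_number BETWEEN 101 AND 200
    ORDER BY s.row_number;

A temp table only lives for the duration of the database session, which
is fine if each user keeps his connection; with pooled connections you
would use a regular table with an extra session-id column instead.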

And if the query produces 20,000 results... who is ever going to scroll
to page 1257?

> The example which I posted shows that
>
> SELECT ... FROM bigtable ORDER BY intprimarykey LIMIT 100
>
> this is extremely *slow*: a seq scan is performed over the whole bigtable.

This is wrong, though. With an ORDER BY on the primary key and a LIMIT,
it should use the primary-key index...
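You can check with EXPLAIN; assuming the primary-key index is called
bigtable_pkey (the actual name will differ), I would expect a plan
shaped like this:

    EXPLAIN SELECT * FROM bigtable ORDER BY intprimarykey LIMIT 100;

    -- expected plan shape:
    --  Limit
    --    ->  Index Scan using bigtable_pkey on bigtable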
