
Re: limit clause produces wrong query plan

From: "Andrus" <kobruleht2(at)hot(dot)ee>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: limit clause produces wrong query plan
Date: 2008-11-24 20:04:54
Message-ID: 372C245439584A6CBF49F996503BF162@andrusnotebook
Lists: pgsql-performance

>And how exactly should it be optimized?  If a query is even moderately
>interesting, with a few joins and a where clause, postgresql HAS to
>create the rows that come before your offset in order to assure that
>it's giving you the right rows.

SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100

It should scan the primary key index in order, reading only the first 200
keys and skipping the first 100 of them.
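Whether the planner actually chooses that plan can be checked directly (a minimal sketch, reusing the bigtable/intprimarykey names from the example above; an Index Scan node on the primary key index is what one would hope to see):

```sql
-- Show the plan the planner actually picks for the paged query.
EXPLAIN ANALYZE
SELECT *
FROM bigtable
ORDER BY intprimarykey
OFFSET 100 LIMIT 100;
```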

>> SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 0 LIMIT 100
> That should be plenty fast.

The example I posted shows that

SELECT ... FROM bigtable ORDER BY intprimarykey LIMIT 100

is extremely *slow*: a seq scan is performed over the whole bigtable.

> A standard workaround is to use some kind of sequential, or nearly so,
> id field, and then use between on that field.
> select * from table where idfield between x and x+100;

Users can delete and insert arbitrary rows in the table.
This approach requires updating x in every row of the big table after each
insert, delete or order-column change, and is thus extremely slow.
So I don't understand how this can be used for large tables.
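A variant of that workaround which does not require dense or stable numbering is keyset pagination: instead of an absolute id range, remember the last key of the previous page and seek past it (a sketch, assuming intprimarykey is the unique sort key; the literal 12345 stands in for whatever key value ended the previous page):

```sql
-- First page:
SELECT * FROM bigtable ORDER BY intprimarykey LIMIT 100;

-- Next page: seek past the last key seen instead of using OFFSET.
-- This is a plain index range scan and is unaffected by inserts
-- or deletes elsewhere in the table.
SELECT * FROM bigtable
WHERE intprimarykey > 12345
ORDER BY intprimarykey
LIMIT 100;
```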


