Re: Problem with 11 M records table

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "idc danny" <idcdanny(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Problem with 11 M records table
Date: 2008-05-13 18:02:20
Message-ID: dcc563d10805131102y6a840088hd96c32a507b9bfb3@mail.gmail.com
Lists: pgsql-performance

On Tue, May 13, 2008 at 10:57 AM, idc danny <idcdanny(at)yahoo(dot)com> wrote:
> Hi everybody,
>
> I'm fairly new to PostgreSQL and I have a problem with
> a query:
>
> SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET
> 10990000
>
> The table LockerEvents has 11 million records in it
> and this query takes about 60 seconds to complete.
> Moreover, even after creating an index on each
> column in the table, EXPLAIN still shows a
> sequential scan instead of an index scan.

Yep. The way OFFSET/LIMIT works is that it first materializes the data
needed for OFFSET+LIMIT rows, then throws away OFFSET rows' worth of
data. So it has to do a lot of retrieving.

Better off to use something like:

select * from table where indexfield between
10000000 and 10001000 order by indexfield;

which can use an index on indexfield, as long as the amount of data is
small enough, etc...
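To make the difference concrete, here is a minimal sketch of the two
pagination styles using Python's sqlite3 with a small in-memory table
standing in for "LockerEvents" (the table and column names here are
illustrative, not from the original post):

```python
import sqlite3

# Build a small stand-in table; "events" and "id" are hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 1001)])

# OFFSET pagination: the engine still has to produce and discard
# all 500 skipped rows before returning the page.
slow = conn.execute(
    "SELECT id FROM events ORDER BY id LIMIT 10 OFFSET 500").fetchall()

# Keyset (range) pagination: the WHERE clause lets an index on id
# jump straight to the start of the page.
fast = conn.execute(
    "SELECT id FROM events WHERE id > 500 ORDER BY id LIMIT 10").fetchall()

# Both return the same page; only the work done to reach it differs.
assert slow == fast
print(slow[0][0], slow[-1][0])  # 501 510
```

On 1000 rows both forms are instant, but on an 11M-row table the OFFSET
form's cost grows with the offset, while the keyset form's cost stays
proportional to the page size.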
