Re: browsing table with 2 million records

From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: aurora <aurora00(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: browsing table with 2 million records
Date: 2005-10-26 21:06:38
Message-ID: 1130360798.2872.57.camel@state.g2switchworks.com
Lists: pgsql-performance

On Wed, 2005-10-26 at 15:41, aurora wrote:
> I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
> records (we would like to handle at least 10 million or more). It is
> mainly a FIFO structure, with maybe 200,000 new records coming in each
> day that displace the older records.
>
> We have a GUI that lets users browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
> select count(*) from table <-- to give feedback about the DB size
> select * from table order by date limit 25 offset 0
>
> The tables seem properly indexed, with VACUUM and ANALYZE run
> regularly. Still, these very basic SQL statements take up to a minute
> to run.
>
> I read in some recent messages that select count(*) needs a full table
> scan in PostgreSQL. That's disappointing, but I can accept an
> approximation if there is some way to get one. How can I optimize
> select * from table order by date limit x offset y? A one-minute
> response time is not acceptable.
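
A minimal sketch of one way to get such an approximation, assuming the
regular VACUUM/ANALYZE runs keep the statistics reasonably fresh, is to
read the planner's row estimate from pg_class instead of counting (the
table name below is just a placeholder):

    SELECT reltuples::bigint AS approx_rows
    FROM pg_class
    WHERE relname = 'mytable';   -- estimate maintained by VACUUM/ANALYZE

The estimate is only as current as the last VACUUM or ANALYZE, but it
returns instantly, which is usually good enough for a "DB size" display.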

Have you run your script without the select count(*) part and timed it?
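
For timing individual statements interactively, psql's \timing toggle
reports the elapsed time of each query; for example (with "table"
standing in for the real table name):

    \timing
    select * from table order by date limit 25 offset 0;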

What does

explain analyze select * from table order by date limit 25 offset 0

say?

Is date indexed?
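
If it is not, a plain index on the sort column should let the ORDER BY
... LIMIT be served straight from the index; a sketch, with the table
and column names as placeholders:

    CREATE INDEX mytable_date_idx ON mytable (date);

Even with the index, OFFSET still has to read and discard all of the
skipped rows, so deep pages stay slow. A common workaround is keyset
pagination: remember the last date shown on the current page (plus a
unique tie-breaker column if dates can repeat) and fetch the next page
relative to it, e.g.

    SELECT * FROM mytable
    WHERE date > '2005-10-25'      -- last value seen on the previous page
    ORDER BY date
    LIMIT 25;

which can use the same index no matter how far the user has paged.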
