From: Mark Lewis <mark(dot)lewis(at)mir3(dot)com>
To: aurora <aurora00(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: browsing table with 2 million records
Date: 2005-10-26 20:59:44
Message-ID: 1130360385.1156.12.camel@archimedes
Lists: pgsql-performance
Do you have an index on the date column? Can you post an EXPLAIN
ANALYZE for the slow query?
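A minimal sketch of what that first diagnostic step looks like (the table and index names below are hypothetical, standing in for the poster's actual schema):

```sql
-- Check whether an index on the sort column exists; if not, create one.
CREATE INDEX main_table_date_idx ON main_table (date);

-- Re-plan the slow query and show actual execution times per node.
EXPLAIN ANALYZE
SELECT * FROM main_table ORDER BY date LIMIT 25 OFFSET 0;
```

With an index on `date`, the planner can satisfy `ORDER BY date LIMIT 25` with an index scan instead of sorting the whole table.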
-- Mark Lewis
On Wed, 2005-10-26 at 13:41 -0700, aurora wrote:
> I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
> records (we would like to reach at least 10 million or more). It is
> mainly a FIFO structure, with maybe 200,000 new records coming in each
> day that displace the older records.
>
> We have a GUI that lets users browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
> select count(*) from table <-- to give feedback about the DB size
> select * from table order by date limit 25 offset 0
>
> The tables seem properly indexed, and vacuum and analyze are run
> regularly. Still, these very basic SQL statements take up to a minute
> to run.
>
> I read in some recent messages that select count(*) requires a table
> scan in PostgreSQL. That's disappointing, but I can accept an
> approximation if there is some way to get one. And how can I optimize
> select * from table order by date limit x offset y? One-minute
> response time is not acceptable.
>
> Any help would be appreciated.
>
> Wy
>
>
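Two common workarounds for the problems described above (table and column names are hypothetical; this is a sketch, not the list's definitive answer): PostgreSQL keeps an estimated row count in `pg_class.reltuples`, refreshed by VACUUM/ANALYZE, which can replace the exact `count(*)`; and deep `OFFSET` scans can be avoided with keyset pagination, where each page restarts from the last value seen rather than skipping rows:

```sql
-- Approximate row count, maintained by VACUUM / ANALYZE.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'main_table';  -- hypothetical table name

-- Keyset pagination: instead of OFFSET, remember the date of the last
-- row shown and continue from there. With an index on (date), this
-- reads only the 25 rows needed, regardless of page depth.
SELECT *
FROM main_table
WHERE date > '2005-10-25 12:00:00'  -- last date seen on the previous page
ORDER BY date
LIMIT 25;
```

The trade-off: `reltuples` is only an estimate (good enough for "DB size" feedback), and keyset pagination requires a unique or tie-broken sort key to page reliably.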