From: aurora <aurora00(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: browsing table with 2 million records
Date: 2005-10-26 20:41:17
Message-ID: cbd177510510261341l4ed7a214lda9d67af12f2ec21@mail.gmail.com
Lists: pgsql-performance
I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million records
(we would like to handle at least 10 million or more). It is mainly a FIFO
structure, with maybe 200,000 new records coming in each day that displace
the older records.
We have a GUI that lets users browse through the records page by page, about
25 records at a time. (Don't ask me why, but we have to have this GUI.) This
translates to something like:

    select count(*) from table  -- to give feedback about the DB size
    select * from table order by date limit 25 offset 0
The table seems properly indexed, and vacuum and analyze are run regularly.
Still, these very basic queries take up to a minute to run.
I read some recent messages saying that select count(*) requires a full table
scan in PostgreSQL. That's disappointing, but I can accept an approximation
if there is some way to get one. And how can I optimize select * from table
order by date limit x offset y? A one-minute response time is not acceptable.
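
For the count, one way to get an approximation without a scan is to read the
planner's statistics. A minimal sketch, assuming the table is named mytable
(a placeholder); note that reltuples is only the row count recorded by the
last vacuum/analyze, so it lags behind the live table:

    -- Approximate row count from the planner's statistics.
    -- Refreshed by VACUUM/ANALYZE, so it can be stale.
    select reltuples::bigint as approx_count
    from pg_class
    where relname = 'mytable';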
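
For the paging query, a large offset forces the server to read and throw away
all the skipped rows, so the cost grows with the offset. A common workaround
is keyset pagination: remember the last date the GUI displayed and continue
from there, so an index on date can seek straight to the starting point. A
sketch under the same mytable assumption (the literal date is a placeholder
for the last value shown; if date is not unique, a tiebreaker column should
be added to both the where clause and the order by to avoid skipped or
repeated rows):

    -- First page: plain limit, no offset.
    select * from mytable order by date limit 25;

    -- Next page: resume after the last date shown instead of using offset.
    -- '2005-10-25' stands in for the last value the GUI displayed.
    select * from mytable
    where date > '2005-10-25'
    order by date
    limit 25;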
Any help would be appreciated.
Wy