From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: aurora <aurora00(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: browsing table with 2 million records
Date: 2005-10-27 01:43:24
Message-ID: 436030BC.1000404@familyhealth.com.au
Lists: pgsql-performance
> We have a GUI that lets users browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
> select count(*) from table <-- to give feedback about the DB size
> select * from table order by date limit 25 offset 0
Heh, sounds like phpPgAdmin... I really should do something about that.
> The tables seem properly indexed, with VACUUM and ANALYZE run regularly.
> Still, these very basic SQL queries take up to a minute to run.
Yes, COUNT(*) on a large table is always slow in PostgreSQL: because of
MVCC, row visibility can't be determined from an index alone, so the
count has to scan the whole table. Search the mailing lists for the
countless discussions about it.
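If the count is only there to give feedback about the database size, an
exact number may not be needed. One common workaround (a sketch only;
substitute your real table name for 'table') is to read the planner's
row estimate, which your regular VACUUM/ANALYZE runs keep roughly
current:

  -- Approximate row count from planner statistics; accurate only
  -- as of the last VACUUM or ANALYZE, but returned instantly.
  SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
  WHERE relname = 'table';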
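The page query will also slow down as the user pages deeper, because
OFFSET still fetches and discards all the skipped rows. If the GUI can
remember the last value it displayed, you can seek past it instead
(again a sketch; if date isn't unique you'd need to key on something
like (date, id) to avoid skipping or repeating rows):

  -- Keyset pagination: seek past the previous page instead of
  -- counting rows to skip. Assumes an index on date.
  SELECT * FROM table
  WHERE date > '2005-10-26'  -- last date shown on the previous page
  ORDER BY date
  LIMIT 25;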
Chris