| From: | Dror Matalon <dror(at)zapatec(dot)com> |
|---|---|
| To: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
| Subject: | count(*) slow on large tables |
| Date: | 2003-10-02 19:15:47 |
| Message-ID: | 20031002191547.GZ87525@rlx11.zapatec.com |
| Lists: | pgsql-hackers pgsql-performance |
Hi,
I have a somewhat large table, 3 million rows and 1 GB on disk, and growing. Doing a
count(*) takes around 40 seconds.
It looks like count(*) fetches the table from disk and scans through it.
That made me wonder: why doesn't the optimizer just choose the smallest index,
which in my case is around 60 MB, and scan that instead? It could do
that in a fraction of the time.
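For reference, here is the exact count I'm running, alongside a rough estimate pulled from the planner's statistics in pg_class, which comes back almost instantly (the table name is a placeholder, and the estimate assumes a reasonably recent ANALYZE; reltuples can be stale):

```sql
-- Exact count: scans the whole ~1 GB table, around 40 seconds here.
SELECT count(*) FROM my_table;

-- Rough estimate from the planner's statistics (assumes a recent
-- ANALYZE or VACUUM; reltuples is an approximation, not an exact count).
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'my_table';
```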
Dror
--
Dror Matalon
Zapatec Inc
1700 MLK Way
Berkeley, CA 94709
http://www.zapatec.com