Performing a count(*) on a large table, how to optimize

From: Martin Weinberg <weinberg(at)osprey(dot)phast(dot)umass(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Cc: weinberg(at)osprey(dot)phast(dot)umass(dot)edu
Subject: Performing a count(*) on a large table, how to optimize
Date: 1999-04-15 15:06:28
Message-ID: 199904151506.LAA08377@osprey.phast.umass.edu
Lists: pgsql-general

PG folks,

I have a 5 GB table (mostly numerical values) and want to be able to
count sources in a range of values, e.g.:

select count(*) from mytable where x between 1.0 and 2.0;

I have indexed on x and vacuumed.
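For reference, a minimal sketch of that setup is below; the index name
mytable_x_idx is illustrative, and EXPLAIN simply shows whether the planner
chooses the index for the range scan or falls back to a sequential scan:

-- index on x plus a vacuum, as described above (index name is illustrative)
create index mytable_x_idx on mytable (x);
vacuum analyze mytable;

-- check which plan the optimizer picks for the range predicate
explain select count(*) from mytable where x between 1.0 and 2.0;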

I notice that such a query takes about 3 minutes on my 450 MHz Xeon Linux box.
A query with a much more complicated where clause that actually returns rows
takes about the same time.

Is there a way to optimize this type of query?

Thanks!

--M

===========================================================================

Martin Weinberg Phone: (413) 545-3821
Dept. of Physics and Astronomy FAX: (413) 545-2117/0648
530 Graduate Research Tower
University of Massachusetts
Amherst, MA 01003-4525
