Speedier count(*)

From: Dan Harris <fbsd(at)drivefaster(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Speedier count(*)
Date: 2005-08-10 23:37:49
Message-ID: 3E43C7D5-9DB8-4132-BF4F-607B000866E1@drivefaster.net
Lists: pgsql-performance

I have a web page for my customers that shows them the count of records
and some min/max date ranges in each table of a database, as this is
how we bill them for service. They can log in and check the counts
at any time. I'd like the counts to be as fresh as possible by
keeping this dynamic, but I will fall back to a periodic
'snapshot'/cron job if that is the only way to speed this up. I have
thought about using the table statistics, but the estimate error is
probably unacceptable for billing purposes.
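For exact counts that stay fresh without a full scan, one commonly suggested workaround is a summary table kept in sync by triggers. A minimal sketch follows; the table, trigger, and column names (row_counts, billed_table, etc.) are hypothetical, and the counter row must be seeded once with the true count(*):

```sql
-- Hypothetical summary table: one row per billed table.
CREATE TABLE row_counts (
    table_name  text PRIMARY KEY,
    n_rows      bigint NOT NULL
);

-- Keep the counter in sync on every insert and delete.
CREATE OR REPLACE FUNCTION maintain_row_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n_rows = n_rows + 1
         WHERE table_name = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n_rows = n_rows - 1
         WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER billing_count_trg
AFTER INSERT OR DELETE ON billed_table   -- hypothetical table name
FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();
```

The count is then a single-row primary-key lookup on row_counts instead of a sequential scan; the trade-off is extra write overhead and contention on the counter row under heavy concurrent modification.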

For some reason, the SQL Server box we migrated the app from can
return count(*) in a split second on multi-million row tables, even
though it is a MUCH slower box hardware-wise, yet the same query now
takes many seconds to run. I have read in the archives about the
problems MVCC brings to the count(*) dilemma, forcing Pg to run a
sequential scan to get counts. Does SQL Server not use MVCC, or have
they found another approach to arriving at this number? Combined, all
the min/max and count queries across the tables take about a minute
to run. The tables will contain anywhere from 1 million to 40
million rows.
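For comparison, the planner's statistics do give a near-instant count without any scan, which is the estimate-based approach ruled out above for billing. A sketch (the table name is hypothetical):

```sql
-- Approximate row count from the planner's statistics. It is only
-- as fresh as the last VACUUM/ANALYZE, so it can drift from the
-- true count -- fine for a dashboard, not for billing.
SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
 WHERE relname = 'billed_table';   -- hypothetical table name
```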

Also, I am using "select ... group by ... order by ... limit 1" to get
the min/max, since I have already been bitten by the issue of min()
and max() being slower.
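In releases of that era, min() and max() could not use an index and forced a full scan, so the workaround mentioned above looked roughly like this (column and table names hypothetical):

```sql
-- Index-friendly substitutes for min() and max(): with an index on
-- created_at, each query reads a single index entry instead of
-- scanning the whole table.
SELECT created_at FROM billed_table    -- hypothetical names
 ORDER BY created_at ASC  LIMIT 1;     -- equivalent of min(created_at)

SELECT created_at FROM billed_table
 ORDER BY created_at DESC LIMIT 1;     -- equivalent of max(created_at)
```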

-Dan
