Re: Slow "Select count(*) ..." query on table with 60 Mio. rows

From: Matthew Wakeling <matthew(at)flymine(dot)org>
To: tom <toabctl(at)googlemail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow "Select count(*) ..." query on table with 60 Mio. rows
Date: 2010-01-14 15:11:39
Message-ID: alpine.DEB.2.00.1001141505300.6195@aragorn.flymine.org
Lists: pgsql-performance

On Thu, 14 Jan 2010, tom wrote:
> i have a db-table "data_measurand" with about 60000000 (60 Millions)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
> count
> ----------
> 60846187
> (1 row)

Sounds pretty reasonable to me. Looking at your table, the rows are maybe
200 bytes wide? That's 12GB of data for Postgres to munch through. 30
seconds is really rather quick for that (400MB/s). What sort of RAID array
is managing to give you that much?
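For reference, the back-of-envelope arithmetic behind those figures works out like this (the 200-byte row width is an estimate, not a measured value):

```sql
-- ~60.8M rows at an assumed ~200 bytes each, scanned in ~30 seconds:
SELECT 60846187::bigint * 200 AS approx_bytes,                  -- ~12.2 GB
       60846187::bigint * 200 / 30 / 1000000 AS mb_per_sec;     -- ~405 MB/s
```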

> I use a software raid and LVM for Logical Volume Management. Filesystem
> is ext3

Ditch LVM.

This is an FAQ. Counting the rows in a table is an expensive operation in
Postgres. It can't be answered directly from an index. If you want, you
can keep track of the number of rows yourself with triggers, but beware
that this will slow down write access to the table.
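A minimal sketch of the trigger approach, assuming a helper table named row_counts and a function named track_count (both hypothetical names, not from the thread):

```sql
-- Hypothetical counter table, seeded once with the current count.
CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);
INSERT INTO row_counts
    SELECT 'data_measurand', count(*) FROM data_measurand;

-- Maintain the counter on every insert/delete.
CREATE FUNCTION track_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;  -- result ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER data_measurand_count
    AFTER INSERT OR DELETE ON data_measurand
    FOR EACH ROW EXECUTE PROCEDURE track_count();

-- The expensive seq scan becomes a single-row lookup:
SELECT n FROM row_counts WHERE table_name = 'data_measurand';
```

Note the write-side cost Matthew mentions: every insert or delete now also updates the counter row, which serializes concurrent writers on that row.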

Matthew

--
Nog: Look! They've made me into an ensign!
O'Brien: I didn't know things were going so badly.
Nog: Frightening, isn't it?
