Re: Slow "Select count(*) ..." query on table with 60 Mio. rows

From: "A(dot) Kretschmer" <andreas(dot)kretschmer(at)schollglas(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow "Select count(*) ..." query on table with 60 Mio. rows
Date: 2010-01-14 15:18:14
Message-ID: 20100114151813.GH30196@a-kretschmer.de
Lists: pgsql-performance

In response to tom:
> Hi,
>
> === Problem ===
>
> i have a db-table "data_measurand" with about 60000000 (60 Millions)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
> count
> ----------
> 60846187
> (1 row)
>
>
> === Question ===
>
> - What can i do to improve the performance for the data_measurand table?

Short answer: nothing.

Long answer: PG has to check the visibility of every record, so it is
forced to do a sequential scan over the whole table.
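
You can see that with EXPLAIN: the plan for

    explain select count(*) from data_measurand;

is an Aggregate node sitting on top of a Seq Scan of the whole table,
so the runtime grows with the table size.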

But you can get an estimate from pg_class (a system catalog): its
reltuples column contains an estimated row count.
http://www.postgresql.org/docs/current/static/catalog-pg-class.html
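
For example (reltuples is refreshed by VACUUM, ANALYZE and CREATE INDEX,
so the estimate is only as fresh as the last of those):

    mydb=# select reltuples::bigint as estimated_rows
             from pg_class where relname = 'data_measurand';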

If you really need the exact row count, you can create a TRIGGER that
maintains a running counter on every INSERT and DELETE.
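
A rough sketch of that approach (untested here; the counter table and
function names are only examples):

    CREATE TABLE rowcount (tbl text PRIMARY KEY, cnt bigint NOT NULL);
    INSERT INTO rowcount
        SELECT 'data_measurand', count(*) FROM data_measurand;

    CREATE OR REPLACE FUNCTION count_rows() RETURNS trigger AS $$
    BEGIN
        -- keep the counter in sync on every insert/delete
        IF TG_OP = 'INSERT' THEN
            UPDATE rowcount SET cnt = cnt + 1 WHERE tbl = TG_TABLE_NAME;
        ELSE
            UPDATE rowcount SET cnt = cnt - 1 WHERE tbl = TG_TABLE_NAME;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER data_measurand_count
        AFTER INSERT OR DELETE ON data_measurand
        FOR EACH ROW EXECUTE PROCEDURE count_rows();

Then "select cnt from rowcount where tbl = 'data_measurand'" is a cheap
lookup instead of a full scan. Note that the per-row trigger adds some
overhead to every INSERT and DELETE, and concurrent writers will
serialize on the counter row.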

Regards, Andreas
--
Andreas Kretschmer
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
