In response to tom:
> === Problem ===
> I have a db table "data_measurand" with about 60,000,000 (60 million)
> rows, and the following query takes about 20-30 seconds (with psql):
> mydb=# select count(*) from data_measurand;
> (1 row)
> === Question ===
> - What can i do to improve the performance for the data_measurand table?
Short answer: nothing.
Long answer: PostgreSQL has to check the visibility of every row, so it
is forced to do a sequential scan.
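You can see that in the query plan; a minimal check against the table from the question:

```sql
-- The plan shows a Seq Scan over the whole table: PostgreSQL (of this
-- era) cannot answer count(*) from an index alone, because each row's
-- visibility has to be checked individually.
EXPLAIN SELECT count(*) FROM data_measurand;
```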
But you can get an estimate: ask pg_class (a system catalog); its
reltuples column contains an estimated row count.
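For example (the estimate is maintained by VACUUM/ANALYZE, so it is only as fresh as the last statistics run):

```sql
-- Planner's row estimate for the table; near-instant, but approximate.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'data_measurand';
```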
If you really need the exact row count, you can create a TRIGGER that
maintains a counter across all INSERTs and DELETEs.
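A sketch of such a trigger-maintained counter; the counter table and function names here are illustrative, not from the original mail:

```sql
-- One-row counter table, seeded once with the current (slow) count.
CREATE TABLE data_measurand_count (n bigint NOT NULL);
INSERT INTO data_measurand_count
    SELECT count(*) FROM data_measurand;

-- Trigger function: bump the counter on INSERT, decrement on DELETE.
CREATE FUNCTION data_measurand_count_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE data_measurand_count SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE data_measurand_count SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER data_measurand_count_trg
AFTER INSERT OR DELETE ON data_measurand
FOR EACH ROW EXECUTE PROCEDURE data_measurand_count_trg();

-- Reading the exact count is now a one-row lookup:
-- SELECT n FROM data_measurand_count;
```

Note that the single counter row becomes a point of contention under heavy concurrent writes; that is the price of an always-exact count.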
Contact: Heynitz: 035242/47150, D1: 0160/7141639 (more: -> header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
pgsql-performance by date