Re: Really really slow select count(*)

From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: felix <crucialfelix(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Really really slow select count(*)
Date: 2011-02-04 14:56:12
Message-ID: 4D4C138C.7010304@2ndquadrant.com
Lists: pgsql-performance

felix wrote:
> explain analyze select count(*) from fastadder_fastadderstatus;
>
> Aggregate (cost=62458.73..62458.74 rows=1 width=0) (actual
> time=77130.000..77130.000 rows=1 loops=1)
> -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61701.18
> rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)
> Total runtime: *77250.000 ms*
>

PostgreSQL version? If you're running on 8.3 or earlier, I would be
suspicious that your Free Space Map has been overrun.
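If you're not sure, something along these lines should tell you (I'm
assuming you can run a database-wide VACUUM here; the max_fsm_pages
setting only exists on 8.3 and earlier, and the FSM warning only shows
up at the very end of a database-wide VACUUM VERBOSE):

    SELECT version();
    SHOW max_fsm_pages;   -- 8.3 and earlier only
    VACUUM VERBOSE;       -- watch for a "max_fsm_pages" warning at the end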

What you are seeing is that the table itself is much larger on disk than
it's supposed to be. That can be caused by frequent UPDATEs: if you
don't have vacuum cleanup working effectively, you'll get lots of dead
row versions left behind by those UPDATEs in the middle of the table.
The best way to fix all this is to run CLUSTER on the table. That will
introduce a bit of downtime while it holds a lock on the table (only a
few minutes based on what you've shown here), but the copy you'll have
afterwards won't be spread all over disk anymore.
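To put a number on the bloat first and then rebuild, something like the
following should do it. I'm guessing at the index name, since CLUSTER
needs an index to order by and the primary key is the usual choice; on
8.2 and older the syntax is "CLUSTER indexname ON tablename" instead:

    SELECT pg_size_pretty(pg_relation_size('fastadder_fastadderstatus'));
    -- ~300K rows of this width shouldn't take 77 seconds to scan

    CLUSTER fastadder_fastadderstatus USING fastadder_fastadderstatus_pkey;
    ANALYZE fastadder_fastadderstatus;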

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
