
Re: count(*) slow on large tables

From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Dror Matalon <dror(at)zapatec(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: count(*) slow on large tables
Date: 2003-10-02 19:39:05
Message-ID: 20031002193905.GD18417@wolff.to
Lists: pgsql-hackers, pgsql-performance
On Thu, Oct 02, 2003 at 12:15:47 -0700,
  Dror Matalon <dror(at)zapatec(dot)com> wrote:
> Hi,
> 
> I have a somewhat large table, 3 million rows, 1 GB on disk, and growing.
> Doing a count(*) takes around 40 seconds.
> 
> It looks like count(*) fetches the table from disk and scans through it.
> That made me wonder why the optimizer doesn't just choose the smallest index,
> which in my case is around 60 MB, and scan through that instead, which it
> could do in a fraction of the time.

Because it can't tell from the index whether a tuple is visible to the current
transaction, so it would still have to hit the table to check. That would make
performance a lot worse, not better.
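
For illustration, here is a minimal sketch (the table name "items" and the
plan output below are made up, not from this thread): even with an index
available, EXPLAIN shows the planner choosing a sequential scan for an
unqualified count(*), since index entries carry no visibility information and
each tuple would have to be re-checked against the heap anyway.

    -- hypothetical table, just to show the plan shape
    CREATE TABLE items (id serial PRIMARY KEY, payload text);

    EXPLAIN SELECT count(*) FROM items;
    --  Aggregate  (cost=...)
    --    ->  Seq Scan on items  (cost=...)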
