> Hello,
>
> Sorry if this is the wrong list for the question. Could you suggest
> some tweaks to PostgreSQL 7.2.1 to handle the following type of table
> faster?
>
> Here we have a table "stats" with just over one million records. The
> obvious "SELECT COUNT(*) FROM stats" takes over 40 seconds to
> execute, and this time does not shorten considerably on subsequent
> runs. All the databases are vacuumed nightly.
>
> CREATE TABLE "stats" (
>     "url"        varchar(50),
>     "src_port"   varchar(10),
>     "ip"         varchar(16),
>     "dst_port"   varchar(10),
>     "proto"      varchar(10),
>     "size"       int8,
>     "login"      varchar(20),
>     "start_date" timestamptz,
>     "end_date"   timestamptz,
>     "aggregated" int4
> );
> CREATE INDEX "aggregated_stats_key" ON "stats" ("aggregated");
> CREATE INDEX "ip_stats_key" ON "stats" ("ip");
>
> stats=> explain select count(*) from stats;
> NOTICE: QUERY PLAN:
>
> Aggregate (cost=113331.10..113331.10 rows=1 width=0)
> -> Seq Scan on stats (cost=0.00..110085.28 rows=1298328 width=0)
>
> EXPLAIN
> stats=> select count(*) from stats;
> count
> ---------
> 1298328
> (1 row)
>
> The system is FreeBSD-4.6-stable, softupdates on, Athlon XP 1500+,
> 512 MB DDR, ATA-100 HDD.
>
> Thanks in advance,
> Yar
>
I have been dealing with a similar problem. First, I switched to SCSI;
second, I installed enough memory and increased shared memory (in both
the FreeBSD kernel and postgresql.conf) so that the entire database
could fit in RAM. This, combined with the summary-table idea, keeps me
out of most trouble.
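For the archives, the shared-memory part looks roughly like the
following (all sizes are illustrative, not tuned for this box). On
FreeBSD 4.x the SysV shared-memory limits are kernel config options,
and shared_buffers in postgresql.conf must fit under them:

```
# FreeBSD kernel config — SHMMAXPGS is in 4 KB pages,
# so 65536 pages allows up to 256 MB of shared memory:
options         SYSVSHM
options         SHMMAXPGS=65536
options         SHMSEG=256

# postgresql.conf — shared_buffers is in 8 KB pages,
# so 32768 buffers = 256 MB (must be below the kernel limit):
shared_buffers = 32768
```

Rebuild and boot the new kernel before restarting the postmaster with
the larger setting, or it will fail to allocate its shared segment.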
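And a rough sketch of the summary-table idea for the count(*) case
(table and function names here are made up, and this assumes PL/pgSQL
has been installed in the database with createlang). A one-row counter
table is kept in sync by a trigger, so reading the count is a one-row
lookup instead of a sequential scan:

```sql
-- One-row counter table; initialize it once from the real count:
CREATE TABLE stats_rowcount (n int8);
INSERT INTO stats_rowcount SELECT count(*) FROM stats;

-- Trigger function; 7.2 declares trigger functions as RETURNS OPAQUE
-- (later releases use RETURNS trigger). Note the doubled quotes --
-- 7.2 has no dollar quoting.
CREATE FUNCTION stats_count_trig() RETURNS opaque AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE stats_rowcount SET n = n + 1;
        RETURN NEW;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE stats_rowcount SET n = n - 1;
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER stats_count
    AFTER INSERT OR DELETE ON stats
    FOR EACH ROW EXECUTE PROCEDURE stats_count_trig();

-- The count is now a one-row read:
SELECT n FROM stats_rowcount;
```

Be aware the single counter row becomes a serialization point for
concurrent writers. If an exact number is not required, the planner's
estimate is effectively free and is refreshed by the nightly vacuum:
SELECT reltuples FROM pg_class WHERE relname = 'stats';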