
Re: Select max(foo) and select count(*) optimization

From: Rod Taylor <pg(at)rbt(dot)ca>
To: siracusa(at)mindspring(dot)com
Cc: Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Select max(foo) and select count(*) optimization
Date: 2004-01-05 19:52:06
Message-ID: 1073332325.8958.8.camel@jester
Lists: pgsql-performance
> Especially with very large tables, hearing the disks grind as Postgres scans
> every single row in order to determine the number of rows in a table or the
> max value of a column (even a primary key created from a sequence) is pretty
> painful.  If the implementation is not too horrendous, this is an area where
> an orders-of-magnitude performance increase can be had.

Actually, it's very painful. For MySQL, they've accepted the concurrency
hit in order to accomplish it -- PostgreSQL would require a more subtle
approach.

Anyway, with Rules you can force this:

CREATE RULE count_ins AS ON INSERT TO mytable
    DO ALSO UPDATE counter SET tablecount = tablecount + 1;

CREATE RULE count_del AS ON DELETE TO mytable
    DO ALSO UPDATE counter SET tablecount = tablecount - 1;

You need to create a table "counter" with a single row that will keep
track of the number of rows in the table. Just remember, you've now
serialized all writes to the table, but in your situation it may be
worthwhile.
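A minimal sketch of that counter setup, assuming the tracked table is
named "mytable" (the names here are placeholders, not from the original
message):

```sql
-- Single-row table holding the current row count of "mytable".
CREATE TABLE counter (tablecount bigint NOT NULL);

-- Seed it with the current count (one full scan, done only once).
INSERT INTO counter VALUES ((SELECT count(*) FROM mytable));

-- Afterwards, SELECT tablecount FROM counter is a single-row lookup
-- instead of a sequential scan.
```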

Optimizing max(foo) requires an extension to the aggregate system.
It will likely happen within a few releases. A workaround can be
accomplished today through the use of LIMIT and ORDER BY.
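For example, the LIMIT/ORDER BY workaround might look like this
(assuming a btree index on foo and a table named "mytable", both
placeholders):

```sql
-- The planner can walk the index on foo backwards and stop at the
-- first row, instead of scanning the whole table as max(foo) does.
-- If foo can be NULL, add WHERE foo IS NOT NULL to match max()'s
-- behaviour, since NULLs sort first in descending order.
SELECT foo FROM mytable ORDER BY foo DESC LIMIT 1;
```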

