> Especially with very large tables, hearing the disks grind as Postgres scans
> every single row in order to determine the number of rows in a table or the
> max value of a column (even a primary key created from a sequence) is pretty
> painful. If the implementation is not too horrendous, this is an area where
> an orders-of-magnitude performance increase can be had.
Actually, it's very painful. For MySQL, they've accepted the concurrency
hit in order to accomplish it -- PostgreSQL would require a more subtle
approach.
Anyway, with Rules you can force this (sketched here for a table "foo"):

CREATE RULE foo_ins AS ON INSERT TO foo
    DO UPDATE counter SET tablecount = tablecount + 1;
CREATE RULE foo_del AS ON DELETE TO foo
    DO UPDATE counter SET tablecount = tablecount - 1;
You need to create a table "counter" with a single row that will keep
track of the number of rows in the table. Just remember, you've now
serialized all writes to the table, but in your situation that tradeoff
may be acceptable.
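A minimal sketch of the counter setup and the resulting cheap count
(the table name "foo" and column names are placeholders):

```sql
-- Single-row table holding the running count
CREATE TABLE counter (tablecount bigint NOT NULL);
INSERT INTO counter VALUES (0);

-- With the rules above in place, counting is a single-row read
-- instead of a sequential scan over the whole table:
SELECT tablecount FROM counter;
```

One caveat: rules are query rewrites that fire per statement, so a
multi-row INSERT ... SELECT would bump the counter only once; a per-row
trigger is the more robust way to keep the count exact.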
Optimizing max(foo) requires an extension to the aggregates system.
It will likely happen within a few releases. A workaround can be
accomplished today through the use of LIMIT and ORDER BY.
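The LIMIT/ORDER BY workaround looks like this (assuming a table "tab"
with an index on column "foo"):

```sql
-- Instead of:  SELECT max(foo) FROM tab;   -- forces a full scan
-- a descending ordered scan can use the index and stop at one row:
SELECT foo FROM tab ORDER BY foo DESC LIMIT 1;
```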
pgsql-performance by date

Next: David Teran, 2004-01-05 19:57:47, "Re: optimizing Postgres queries"
Previous: Stephan Szabo, 2004-01-05 19:48:26, "Re: deferred foreign keys"