As far as I can tell from EXPLAIN, there isn't any optimization done
currently on queries involving the min or max of an indexed field.
What I'm interested in is predecessor/successor queries, eg, "find
the largest value less than X". In SQL this becomes
SELECT max(field1) FROM table WHERE field1 < X
(for a constant X). Currently Postgres always seems to read all the
table records with field1 < X to execute this query.
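For concreteness, here is the predecessor query exercised against a toy table. This is just an illustration of the query's semantics (SQLite standing in for Postgres; the table `t`, column `field1`, and the data are made up for the example):

```python
# Toy demonstration of the predecessor query: the largest field1 below X.
# SQLite is used here purely so the snippet is self-contained; it says
# nothing about how Postgres plans the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field1 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(v,) for v in (10, 20, 30, 40)])
conn.execute("CREATE INDEX t_field1 ON t (field1)")

X = 35
(pred,) = conn.execute(
    "SELECT max(field1) FROM t WHERE field1 < ?", (X,)
).fetchone()
print(pred)  # -> 30
```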
Now, if field1 has a btree index then it should be possible to answer
this query with just a probe into the index, never reading any table
entries at all. But that implies understanding the semantics of max()
and its relationship to the ordering used by the index, so I can see
that teaching Postgres to do this in a type-independent way might be
tricky.
For now, I can live with scanning all the table entries, but it would be
nice to know that someone is working on this and it'll be there by the
time my tables get huge ;-). I see something about
* Use indexes in ORDER BY, min(), max() (Costin Oproiu)
in the TODO list, but is this actively being worked on, and will it
solve my problem or just handle simpler cases?
Alternatively, is there a better way to do predecessor/successor
queries in SQL?
regards, tom lane