> SQL92 says
> Whether a sort key value that is null is considered greater
> or less than a non-null value is implementation-defined, but
> all sort key values that are null shall either be considered
> greater than all non-null values or be considered less than
> all non-null values.
> Postgres appears to me to satisfy the spec. There's nothing here that
> says we can't choose different NULL treatments in different queries...
> regards, tom lane
If we assume the goal is merely to behave as the spec is written, Postgres may well be OK.
But let's think in real-life terms...
There are two problems:
1. Imagine such a situation:
DROP TABLE NUTKI ;
CREATE TABLE NUTKI (
    ID INT4 PRIMARY KEY,
    VAL INT4
) ;
CREATE INDEX NUTKI_VAL ON NUTKI(VAL);
INSERT INTO NUTKI VALUES(1,1);
INSERT INTO NUTKI VALUES(2,null);
-- force sorting
SET ENABLE_INDEXSCAN TO OFF ;
SET ENABLE_SEQSCAN TO ON ;
SET ENABLE_SORT TO ON ;
In this case both of these selects:
SELECT * FROM NUTKI ORDER BY VAL DESC ;
SELECT * FROM NUTKI ORDER BY VAL ;
return VAL in the same order: (1, null). That may be OK according to the SQL92
spec, but for someone who (as I was) uses 'cursor'-like queries to page through
data forward and backward without actual cursors, it's really annoying.
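To make that concrete, here is a minimal sketch of the cursor-emulating pattern I mean (the :last_val / :first_val placeholders and the LIMIT values are illustrative, not from any real application):

    -- forward page: rows after the last one shown
    SELECT * FROM NUTKI WHERE VAL > :last_val ORDER BY VAL ASC LIMIT 10 ;
    -- backward page: rows before the first one shown
    SELECT * FROM NUTKI WHERE VAL < :first_val ORDER BY VAL DESC LIMIT 10 ;

Rows with VAL = NULL never satisfy either comparison, and if their sort position also flips between ASC and DESC, they drift unpredictably between "pages" depending on which plan the optimizer picks.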
2. You wrote 'in different queries'... But the same query on different
configurations gives different results (see my bug report). Is this OK?
Personally, I would like Postgres to define a compile-time-selectable NULL
comparison method (or at least to use one constant method).
Of course, these methods couldn't be used in WHERE clauses, but for sorting
they could prove useful.
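In the meantime, a constant NULL placement can be forced per query with an extra sort key; a sketch (the CASE trick is plain standard SQL, nothing Postgres-specific, and I'm assuming here that you want NULLs last in both directions):

    -- NULL rows sort last whether VAL is ascending or descending
    SELECT * FROM NUTKI ORDER BY CASE WHEN VAL IS NULL THEN 1 ELSE 0 END, VAL ;
    SELECT * FROM NUTKI ORDER BY CASE WHEN VAL IS NULL THEN 1 ELSE 0 END, VAL DESC ;

This makes the 'cursor'-like paging deterministic regardless of which plan the optimizer chooses.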
Or maybe I'm too lame to get the right view on these things :)