On Wed, Mar 28, 2012 at 9:13 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Francois Deliege <fdeliege(at)gmail(dot)com> writes:
>> I have the following table with millions of rows:
>> CREATE TABLE table1 (
>>   col1 text,
>>   col2 text,
>>   col3 text,
>>   col4 text,
>>   col5 text,
>>   col6 text
>> );
>> select col1 from table1 group by col1 limit 1;
>> select distinct on (col1) col1 from table1 limit 1;
>> select col1 from table1 group by col1 limit 2;
>> select distinct on (col1) col1 from table1 limit 2;
>> Performing any of the following queries results in a full sequential
>> scan, followed by a hash aggregate, and then the limit.
> Well, if you had an index on the column, you would get a significantly
> better plan ...
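To illustrate Tom's suggestion, something like the following should change the plan shape (the index name is hypothetical, and the exact plan depends on the PostgreSQL version and statistics):

```sql
-- With a btree index on col1, the planner can typically walk the index
-- in order and stop early once the LIMIT is satisfied, instead of
-- scanning the whole table and hash-aggregating it.
CREATE INDEX table1_col1_idx ON table1 (col1);

EXPLAIN SELECT col1 FROM table1 GROUP BY col1 LIMIT 1;
-- Expected shape (roughly): Limit -> Group -> Index Scan using table1_col1_idx
```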
>> Similarly, the following query results in a sequential scan:
>> select * from table1 where col1 <> col1;
>> This query is generated by the Sequel abstraction layer in Ruby when filtering records based on an empty array of values. We fixed this by handling that case on the client side, but originally thought the server would have rewritten it and returned an empty result set.
> It does not, and never will, because that would be an incorrect
> optimization. "col1 <> col1" isn't constant false, it's more like
> "col1 is not null". I'd suggest "WHERE FALSE", or "WHERE 1 <> 1"
> if you must, to generate a provably false constraint.
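The NULL semantics Tom describes, and the difference a provably false constraint makes, can be seen directly:

```sql
-- "col1 <> col1" is NULL (not false) when col1 is NULL, so the planner
-- cannot constant-fold it away; it must evaluate it per row.
SELECT NULL <> NULL;   -- yields NULL, not false

-- A constraint that is provably false is recognized at plan time and
-- avoids touching the table entirely:
EXPLAIN SELECT * FROM table1 WHERE FALSE;
-- Plan: Result (One-Time Filter: false)
```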
"col1 IS DISTINCT FROM col1" could be optimized like that, although
it would be pretty hard to imagine a use case for it.
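The reason IS DISTINCT FROM is a candidate for that optimization is that it treats NULL as an ordinary comparable value, so the expression is false for every row, NULLs included:

```sql
-- IS DISTINCT FROM never yields NULL, so "col1 IS DISTINCT FROM col1"
-- is constant false in principle, unlike "col1 <> col1".
SELECT NULL IS DISTINCT FROM NULL;   -- false
SELECT 1 IS DISTINCT FROM 1;         -- false
SELECT 1 IS DISTINCT FROM NULL;      -- true
```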