Re: Sequential Scan with LIMIT

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Meinel <john(at)johnmeinel(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Sequential Scan with LIMIT
Date: 2004-10-24 20:11:53
Message-ID: 17689.1098648713@sss.pgh.pa.us
Lists: pgsql-performance

John Meinel <john(at)johnmeinel(dot)com> writes:
> I was looking into another problem, and I found something that surprised
> me. If I'm doing "SELECT * FROM mytable WHERE col = 'myval' LIMIT 1".
> Now "col" is indexed, but mytable has 500,000 rows, and 'myval' occurs
> maybe 100,000 times. Without the LIMIT, this query should definitely do
> a sequential scan.

> But with the LIMIT, doesn't it know that it will return at most 1 value,
> and thus be able to use the index?

But the LIMIT will cut the cost of the seqscan case too. Given the
numbers you posit above, about one row in five will have 'myval', so a
seqscan can reasonably expect to hit the first matching row in the first
page of the table. That is still cheaper than doing an index scan
(which has to read at least one index page plus at least one table
page).
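
As a rough sketch of what the planner is doing (the plan shape below is
typical, but the cost numbers are invented for illustration and I'm
assuming col is a text column; this is not output from your table):

    EXPLAIN SELECT * FROM mytable WHERE col = 'myval' LIMIT 1;

    Limit  (cost=0.00..0.10 rows=1 width=100)
      ->  Seq Scan on mytable  (cost=0.00..10000.00 rows=100000 width=100)
            Filter: (col = 'myval'::text)

The Limit node is charged only for the fraction of the seqscan it expects
to run (1/100000th of the run cost, since the scan is predicted to yield
100,000 matching rows), which works out to far less than one page fetch,
and so less than the index-page-plus-heap-page minimum of an indexscan.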

The test case you are showing is probably suffering from nonrandom
placement of this particular data value, which is something that the
statistics we keep are too crude to detect.
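
If you want to see whether that's what is happening, one cheap check
(just a sketch, using the names from your example) is to look at where
the first few matching rows physically live; ctid is a row's
(block, offset) position on disk, so if the matches only begin well into
the table, the seqscan has to wade through many non-matching pages before
the LIMIT can stop it:

    SELECT ctid FROM mytable WHERE col = 'myval' LIMIT 5;

EXPLAIN ANALYZE on the original LIMIT 1 query shows the same thing from
the other side: the actual time on the Limit node tells you how long the
seqscan really ran before producing its row.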

regards, tom lane
